./bin/elasticsearch
[2016-05-10 12:22:17,099][WARN ][bootstrap ] Unable to lock JVM Memory: error=78,reason=Function not implemented
[2016-05-10 12:22:17,099][WARN ][bootstrap ] This can result in part of the JVM being swapped out.
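The two bootstrap warnings above mean the JVM failed to lock its heap into memory (error=78 is ENOSYS on OS X, where the node appears to be running given the `hfs` mount reported below, so mlockall simply is not available there). On a Linux production host, memory locking can be enabled with the 2.x `bootstrap.mlockall` setting plus a raised memlock limit — a minimal sketch, assuming a Linux host and the default config layout:

```shell
# In config/elasticsearch.yml (Elasticsearch 2.x setting; renamed
# to bootstrap.memory_lock in 5.x):
#
#   bootstrap.mlockall: true

# Raise the locked-memory limit for the current shell before
# starting the node, so the mlockall call can succeed:
ulimit -l unlimited
./bin/elasticsearch
```

This is an ops sketch, not something that applies on the OS X machine producing this log; there the warning is expected and harmless for development use.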
[2016-05-10 12:22:17,251][INFO ][node ] [node-1] version[2.3.2], pid[2177], build[b9e4a6a/2016-04-21T16:03:47Z]
[2016-05-10 12:22:17,251][INFO ][node ] [node-1] initializing ...
[2016-05-10 12:22:17,720][INFO ][plugins ] [node-1] modules [reindex, lang-expression, lang-groovy], plugins [license, marvel-agent], sites []
[2016-05-10 12:22:17,738][INFO ][env ] [node-1] using [1] data paths, mounts [[/ (/dev/disk1)]], net usable_space [416.7gb], net total_space [464.7gb], spins? [unknown], types [hfs]
[2016-05-10 12:22:17,738][INFO ][env ] [node-1] heap size [990.7mb], compressed ordinary object pointers [true]
[2016-05-10 12:22:17,738][WARN ][env ] [node-1] max file descriptors [10240] for elasticsearch process likely too low, consider increasing to at least [65536]
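The file-descriptor warning can be addressed by raising the open-files limit in the shell that launches the node — a sketch for the current session (making it persistent would go through `/etc/security/limits.conf` on Linux or launchd settings on OS X, depending on the host):

```shell
# Check the current soft limit for open file descriptors
ulimit -n

# Raise it to the value the log suggests, then restart the node
# in the same shell so the process inherits the new limit
ulimit -n 65536
./bin/elasticsearch
```

Note that the soft limit can only be raised up to the hard limit; on OS X the hard limit itself may also need raising before 65536 is accepted.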
[2016-05-10 12:22:19,297][INFO ][node ] [node-1] initialized
[2016-05-10 12:22:19,298][INFO ][node ] [node-1] starting ...
[2016-05-10 12:22:19,376][INFO ][transport ] [node-1] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2016-05-10 12:22:19,379][INFO ][discovery ] [node-1] elasticsearch/a4OfoT0DSmGFiweQCiYsUQ
[2016-05-10 12:22:19,923][WARN ][transport.netty ] [node-1] exception caught on transport layer [[id: 0xcf5cc200, /127.0.0.1:50011 => /127.0.0.1:9300]], closing connection
java.lang.IllegalStateException: Message not fully read (request) for requestId [18], action [cluster/nodes/info], readerIndex [39] vs expected [57]; resetting
	at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:121)
	at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
	at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
	at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
	at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
	at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
	at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
	at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
	at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:75)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
	at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
	at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
	at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
	at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
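The "Message not fully read" exception above typically appears when something connects to the transport port (9300) speaking a different wire protocol — most often a transport client built against a different Elasticsearch version, or a tool probing 9300 instead of the HTTP port. One way to check for a version mismatch is to compare the node's reported version (9200) against the client library's version — a sketch, assuming the node is reachable on localhost as in this log:

```shell
# The HTTP API on 9200 reports the node's version; 2.x transport
# clients on 9300 must match it closely. The log above shows the
# node itself is version 2.3.2.
curl -s http://127.0.0.1:9200/
```

If the versions differ, aligning the client's Elasticsearch dependency with the server version usually makes this exception go away.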
[2016-05-10 12:22:22,408][INFO ][cluster.service ] [node-1] new_master {node-1}{a4OfoT0DSmGFiweQCiYsUQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-05-10 12:22:22,417][INFO ][http ] [node-1] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2016-05-10 12:22:22,417][INFO ][node ] [node-1] started
[2016-05-10 12:22:22,625][INFO ][license.plugin.core ] [node-1] license [2df57f29-058a-4629-8a56-d5d431278cb1] - valid
[2016-05-10 12:22:22,627][ERROR][license.plugin.core ] [node-1]
#
# License will expire on [Sunday, June 05, 2016]. If you have a new license, please update it.
# Otherwise, please reach out to your support contact.
#
# Commercial plugins operate with reduced functionality on license expiration:
# - marvel
# - The agent will stop collecting cluster and indices metrics
# - The agent will stop automatically cleaning indices older than [marvel.history.duration]
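The expiring license reported above can be replaced through the license plugin's REST endpoint in 2.x — a hedged sketch, assuming a replacement license file `new-license.json` obtained from Elastic (the filename is an assumption for illustration):

```shell
# Install the new license via the 2.x license plugin API
# (new-license.json is a placeholder for the file Elastic provides)
curl -XPUT 'http://127.0.0.1:9200/_license' -d @new-license.json

# Confirm the license status and new expiry date
curl -s 'http://127.0.0.1:9200/_license'
```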
[2016-05-10 12:22:22,646][INFO ][gateway ] [node-1] recovered [3] indices into cluster_state
[2016-05-10 12:22:23,146][INFO ][cluster.routing.allocation] [node-1] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.marvel-es-1-2016.05.10][0], [.marvel-es-1-2016.05.06][0], [.marvel-es-1-2016.05.06][0], [.marvel-es-1-2016.05.10][0]] ...]).