
Problems Connecting To Elasticsearch

a guest
May 10th, 2016
./bin/elasticsearch
[2016-05-10 12:22:17,099][WARN ][bootstrap                ] Unable to lock JVM Memory: error=78,reason=Function not implemented
[2016-05-10 12:22:17,099][WARN ][bootstrap                ] This can result in part of the JVM being swapped out.
[2016-05-10 12:22:17,251][INFO ][node                     ] [node-1] version[2.3.2], pid[2177], build[b9e4a6a/2016-04-21T16:03:47Z]
[2016-05-10 12:22:17,251][INFO ][node                     ] [node-1] initializing ...
[2016-05-10 12:22:17,720][INFO ][plugins                  ] [node-1] modules [reindex, lang-expression, lang-groovy], plugins [license, marvel-agent], sites []
[2016-05-10 12:22:17,738][INFO ][env                      ] [node-1] using [1] data paths, mounts [[/ (/dev/disk1)]], net usable_space [416.7gb], net total_space [464.7gb], spins? [unknown], types [hfs]
[2016-05-10 12:22:17,738][INFO ][env                      ] [node-1] heap size [990.7mb], compressed ordinary object pointers [true]
[2016-05-10 12:22:17,738][WARN ][env                      ] [node-1] max file descriptors [10240] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-05-10 12:22:19,297][INFO ][node                     ] [node-1] initialized
[2016-05-10 12:22:19,298][INFO ][node                     ] [node-1] starting ...
[2016-05-10 12:22:19,376][INFO ][transport                ] [node-1] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2016-05-10 12:22:19,379][INFO ][discovery                ] [node-1] elasticsearch/a4OfoT0DSmGFiweQCiYsUQ
[2016-05-10 12:22:19,923][WARN ][transport.netty          ] [node-1] exception caught on transport layer [[id: 0xcf5cc200, /127.0.0.1:50011 => /127.0.0.1:9300]], closing connection
java.lang.IllegalStateException: Message not fully read (request) for requestId [18], action [cluster/nodes/info], readerIndex [39] vs expected [57]; resetting
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:121)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:75)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-05-10 12:22:22,408][INFO ][cluster.service          ] [node-1] new_master {node-1}{a4OfoT0DSmGFiweQCiYsUQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-05-10 12:22:22,417][INFO ][http                     ] [node-1] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2016-05-10 12:22:22,417][INFO ][node                     ] [node-1] started
[2016-05-10 12:22:22,625][INFO ][license.plugin.core      ] [node-1] license [2df57f29-058a-4629-8a56-d5d431278cb1] - valid
[2016-05-10 12:22:22,627][ERROR][license.plugin.core      ] [node-1]
#
# License will expire on [Sunday, June 05, 2016]. If you have a new license, please update it.
# Otherwise, please reach out to your support contact.
#
# Commercial plugins operate with reduced functionality on license expiration:
# - marvel
#  - The agent will stop collecting cluster and indices metrics
#  - The agent will stop automatically cleaning indices older than [marvel.history.duration]
[2016-05-10 12:22:22,646][INFO ][gateway                  ] [node-1] recovered [3] indices into cluster_state
[2016-05-10 12:22:23,146][INFO ][cluster.routing.allocation] [node-1] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.marvel-es-1-2016.05.10][0], [.marvel-es-1-2016.05.06][0], [.marvel-es-1-2016.05.06][0], [.marvel-es-1-2016.05.10][0]] ...]).
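The `Message not fully read (request)` exception on the transport port (9300) is commonly a wire-protocol mismatch: a client built against a different Elasticsearch release (for example a Java TransportClient from another version), or an HTTP tool pointed at 9300 instead of the REST port 9200. A quick triage step is to compare the server's reported version (`curl -s http://127.0.0.1:9200` returns `"number" : "2.3.2"` for this node) against the client library's version. The sketch below uses illustrative hard-coded values, not data from this paste, and compares only major.minor as a rough heuristic:

```shell
# Illustrative version-mismatch check; in practice server_version would come
# from: curl -s http://127.0.0.1:9200  (the "version.number" field)
server_version="2.3.2"   # reported by the node in this log
client_version="2.2.0"   # hypothetical mismatched client library version

# Strip the patch component so "2.3.2" -> "2.3", then compare major.minor.
if [ "${server_version%.*}" != "${client_version%.*}" ]; then
  echo "version mismatch: server=$server_version client=$client_version"
fi
```

If the versions match and the error persists, check that nothing HTTP-based (curl, a browser, a REST client) is being pointed at 9300; only the binary transport protocol is spoken there.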
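Aside from the transport exception, the startup warnings point at two host-level settings worth fixing: memory locking (`Unable to lock JVM Memory: error=78`, which also appears on macOS, where this log seems to originate given the `hfs` mount, because `mlockall` is not implemented there) and the file-descriptor limit (`[10240]` vs the suggested `[65536]`). A minimal sketch for Elasticsearch 2.x on a Linux-style install; the paths and the `elasticsearch` user name are illustrative assumptions:

```yaml
# config/elasticsearch.yml (2.x setting name; renamed in later major versions)
bootstrap.mlockall: true          # addresses "Unable to lock JVM Memory"

# /etc/security/limits.conf (illustrative; the user name depends on your install):
#   elasticsearch  -  nofile   65536      # addresses the file-descriptor warning
#   elasticsearch  -  memlock  unlimited  # lets mlockall succeed
```

After restarting, the `env` log line should report the higher descriptor count, and the mlock warning should disappear (on platforms where locking is supported).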