kafkaconnect logs
sunnn, Jul 30th, 2016
[2016-07-31 03:18:36,005] INFO StandaloneConfig values:
    value.converter = class io.confluent.connect.avro.AvroConverter
    offset.storage.file.filename = /tmp/connect.offsets
    access.control.allow.methods =
    key.converter = class io.confluent.connect.avro.AvroConverter
    offset.flush.timeout.ms = 5000
    rest.port = 8083
    rest.advertised.port = null
    access.control.allow.origin =
    rest.advertised.host.name = null
    bootstrap.servers = [localhost:9092]
    task.shutdown.graceful.timeout.ms = 5000
    rest.host.name = null
    internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
    cluster = connect
    internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
    offset.flush.interval.ms = 60000
 (org.apache.kafka.connect.runtime.standalone.StandaloneConfig:178)
[2016-07-31 03:18:36,203] INFO Logging initialized @1642ms (org.eclipse.jetty.util.log:186)
[2016-07-31 03:18:36,976] INFO Kafka Connect starting (org.apache.kafka.connect.runtime.Connect:52)
[2016-07-31 03:18:36,977] INFO Herder starting (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:71)
[2016-07-31 03:18:36,977] INFO Worker starting (org.apache.kafka.connect.runtime.Worker:102)
[2016-07-31 03:18:36,991] INFO ProducerConfig values:
    interceptor.classes = null
    request.timeout.ms = 2147483647
    ssl.truststore.password = null
    retry.backoff.ms = 100
    buffer.memory = 33554432
    batch.size = 16384
    ssl.keymanager.algorithm = SunX509
    receive.buffer.bytes = 32768
    ssl.key.password = null
    ssl.cipher.suites = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.service.name = null
    ssl.provider = null
    max.in.flight.requests.per.connection = 1
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    bootstrap.servers = [localhost:9092]
    client.id =
    max.request.size = 1048576
    acks = all
    linger.ms = 0
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    metadata.fetch.timeout.ms = 60000
    ssl.endpoint.identification.algorithm = null
    ssl.keystore.location = null
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    ssl.truststore.location = null
    ssl.keystore.password = null
    block.on.buffer.full = false
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    metrics.sample.window.ms = 30000
    security.protocol = PLAINTEXT
    metadata.max.age.ms = 300000
    ssl.protocol = TLS
    sasl.kerberos.min.time.before.relogin = 60000
    timeout.ms = 30000
    connections.max.idle.ms = 540000
    ssl.trustmanager.algorithm = PKIX
    metric.reporters = []
    ssl.truststore.type = JKS
    compression.type = none
    retries = 2147483647
    max.block.ms = 9223372036854775807
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    send.buffer.bytes = 131072
    reconnect.backoff.ms = 50
    metrics.num.samples = 2
    ssl.keystore.type = JKS
 (org.apache.kafka.clients.producer.ProducerConfig:178)
[2016-07-31 03:18:37,034] INFO ProducerConfig values:
    interceptor.classes = null
    request.timeout.ms = 2147483647
    ssl.truststore.password = null
    retry.backoff.ms = 100
    buffer.memory = 33554432
    batch.size = 16384
    ssl.keymanager.algorithm = SunX509
    receive.buffer.bytes = 32768
    ssl.key.password = null
    ssl.cipher.suites = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.service.name = null
    ssl.provider = null
    max.in.flight.requests.per.connection = 1
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    bootstrap.servers = [localhost:9092]
    client.id = producer-1
    max.request.size = 1048576
    acks = all
    linger.ms = 0
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    metadata.fetch.timeout.ms = 60000
    ssl.endpoint.identification.algorithm = null
    ssl.keystore.location = null
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    ssl.truststore.location = null
    ssl.keystore.password = null
    block.on.buffer.full = false
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    metrics.sample.window.ms = 30000
    security.protocol = PLAINTEXT
    metadata.max.age.ms = 300000
    ssl.protocol = TLS
    sasl.kerberos.min.time.before.relogin = 60000
    timeout.ms = 30000
    connections.max.idle.ms = 540000
    ssl.trustmanager.algorithm = PKIX
    metric.reporters = []
    ssl.truststore.type = JKS
    compression.type = none
    retries = 2147483647
    max.block.ms = 9223372036854775807
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    send.buffer.bytes = 131072
    reconnect.backoff.ms = 50
    metrics.num.samples = 2
    ssl.keystore.type = JKS
 (org.apache.kafka.clients.producer.ProducerConfig:178)
[2016-07-31 03:18:37,040] INFO Kafka version : 0.10.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser:83)
[2016-07-31 03:18:37,040] INFO Kafka commitId : 7aeb2e89dbc741f6 (org.apache.kafka.common.utils.AppInfoParser:84)
[2016-07-31 03:18:37,041] INFO Starting FileOffsetBackingStore with file /tmp/connect.offsets (org.apache.kafka.connect.storage.FileOffsetBackingStore:60)
[2016-07-31 03:18:37,047] INFO Worker started (org.apache.kafka.connect.runtime.Worker:124)
[2016-07-31 03:18:37,048] INFO Herder started (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:73)
[2016-07-31 03:18:37,048] INFO Starting REST server (org.apache.kafka.connect.runtime.rest.RestServer:98)
[2016-07-31 03:18:37,433] INFO jetty-9.2.15.v20160210 (org.eclipse.jetty.server.Server:327)
[2016-07-31 03:18:40,025] INFO Started o.e.j.s.ServletContextHandler@625800d1{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:744)
[2016-07-31 03:18:40,068] INFO Started ServerConnector@6de34792{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:266)
[2016-07-31 03:18:40,070] INFO Started @5510ms (org.eclipse.jetty.server.Server:379)
[2016-07-31 03:18:40,070] INFO REST server listening at http://192.168.0.4:8083/, advertising URL http://192.168.0.4:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:150)
[2016-07-31 03:18:40,070] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:58)
[2016-07-31 03:18:40,083] INFO ConnectorConfig values:
    name = test-mysql-jdbc
    tasks.max = 1
    connector.class = io.confluent.connect.jdbc.JdbcSourceConnector
 (org.apache.kafka.connect.runtime.ConnectorConfig:178)
[2016-07-31 03:18:40,087] INFO Creating connector test-mysql-jdbc of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:168)
[2016-07-31 03:18:40,107] INFO Instantiated connector test-mysql-jdbc with version 3.0.0 of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:176)
[2016-07-31 03:18:40,110] INFO JdbcSourceConnectorConfig values:
    query =
    validate.non.null = true
    connection.url = jdbc:mysql://localhost:3306/demo?user=root&password=mypassword
    topic.prefix = test_jdbc_
    table.blacklist = []
    mode = timestamp+incrementing
    table.poll.interval.ms = 60000
    timestamp.delay.interval.ms = 0
    incrementing.column.name = id
    timestamp.column.name = modified
    poll.interval.ms = 5000
    batch.max.rows = 100
    table.whitelist = []
 (io.confluent.connect.jdbc.JdbcSourceConnectorConfig:178)
[2016-07-31 03:18:40,947] INFO Finished creating connector test-mysql-jdbc (org.apache.kafka.connect.runtime.Worker:181)
[2016-07-31 03:18:40,950] INFO SourceConnectorConfig values:
    name = test-mysql-jdbc
    tasks.max = 1
    connector.class = io.confluent.connect.jdbc.JdbcSourceConnector
 (org.apache.kafka.connect.runtime.SourceConnectorConfig:178)
[2016-07-31 03:18:40,959] INFO TaskConfig values:
    task.class = class io.confluent.connect.jdbc.JdbcSourceTask
 (org.apache.kafka.connect.runtime.TaskConfig:178)
[2016-07-31 03:18:40,959] INFO Creating task test-mysql-jdbc-0 (org.apache.kafka.connect.runtime.Worker:315)
[2016-07-31 03:18:40,959] INFO Instantiated task test-mysql-jdbc-0 with version 3.0.0 of type io.confluent.connect.jdbc.JdbcSourceTask (org.apache.kafka.connect.runtime.Worker:326)
[2016-07-31 03:18:40,986] INFO Created connector test-mysql-jdbc (org.apache.kafka.connect.cli.ConnectStandalone:91)
[2016-07-31 03:18:40,986] INFO JdbcSourceTaskConfig values:
    query =
    validate.non.null = true
    connection.url = jdbc:mysql://localhost:3306/demo?user=root&password=mypassword
    topic.prefix = test_jdbc_
    table.blacklist = []
    mode = timestamp+incrementing
    tables = [users]
    table.poll.interval.ms = 60000
    timestamp.delay.interval.ms = 0
    incrementing.column.name = id
    timestamp.column.name = modified
    poll.interval.ms = 5000
    batch.max.rows = 100
    table.whitelist = []
 (io.confluent.connect.jdbc.JdbcSourceTaskConfig:178)
[2016-07-31 03:18:40,996] INFO ConnectorConfig values:
    name = hdfs-sink
    tasks.max = 1
    connector.class = io.confluent.connect.hdfs.HdfsSinkConnector
 (org.apache.kafka.connect.runtime.ConnectorConfig:178)
[2016-07-31 03:18:40,997] INFO Creating connector hdfs-sink of type io.confluent.connect.hdfs.HdfsSinkConnector (org.apache.kafka.connect.runtime.Worker:168)
[2016-07-31 03:18:41,003] INFO Instantiated connector hdfs-sink with version 3.0.0 of type io.confluent.connect.hdfs.HdfsSinkConnector (org.apache.kafka.connect.runtime.Worker:176)
[2016-07-31 03:18:41,011] INFO HdfsSinkConnectorConfig values:
    kerberos.ticket.renew.period.ms = 3600000
    rotate.interval.ms = -1
    hadoop.home =
    partition.duration.ms = -1
    hdfs.namenode.principal =
    schema.cache.size = 1000
    format.class = io.confluent.connect.hdfs.avro.AvroFormat
    locale =
    hive.integration = true
    hive.metastore.uris = thrift://localhost:9083
    storage.class = io.confluent.connect.hdfs.storage.HdfsStorage
    retry.backoff.ms = 5000
    timezone =
    hive.database = default
    partition.field.name = department
    hadoop.conf.dir =
    connect.hdfs.principal =
    path.format =
    filename.offset.zero.pad.width = 10
    hive.conf.dir =
    flush.size = 2
    topics.dir = topics
    schema.compatibility = BACKWARD
    shutdown.timeout.ms = 3000
    hdfs.url = hdfs://localhost:9000
    connect.hdfs.keytab =
    hdfs.authentication.kerberos = false
    partitioner.class = io.confluent.connect.hdfs.partitioner.FieldPartitioner
    hive.home =
    logs.dir = logs
 (io.confluent.connect.hdfs.HdfsSinkConnectorConfig:178)
[2016-07-31 03:18:41,012] INFO Finished creating connector hdfs-sink (org.apache.kafka.connect.runtime.Worker:181)
[2016-07-31 03:18:41,012] INFO SourceConnectorConfig values:
    name = hdfs-sink
    tasks.max = 1
    connector.class = io.confluent.connect.hdfs.HdfsSinkConnector
 (org.apache.kafka.connect.runtime.SourceConnectorConfig:178)
[2016-07-31 03:18:41,013] INFO TaskConfig values:
    task.class = class io.confluent.connect.hdfs.HdfsSinkTask
 (org.apache.kafka.connect.runtime.TaskConfig:178)
[2016-07-31 03:18:41,016] INFO Creating task hdfs-sink-0 (org.apache.kafka.connect.runtime.Worker:315)
[2016-07-31 03:18:41,016] INFO Instantiated task hdfs-sink-0 with version 3.0.0 of type io.confluent.connect.hdfs.HdfsSinkTask (org.apache.kafka.connect.runtime.Worker:326)
[2016-07-31 03:18:41,050] INFO ConsumerConfig values:
    interceptor.classes = null
    request.timeout.ms = 40000
    check.crcs = true
    ssl.truststore.password = null
    retry.backoff.ms = 100
    ssl.keymanager.algorithm = SunX509
    receive.buffer.bytes = 65536
    ssl.key.password = null
    ssl.cipher.suites = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.service.name = null
    ssl.provider = null
    session.timeout.ms = 30000
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    max.poll.records = 2147483647
    bootstrap.servers = [localhost:9092]
    client.id =
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
    auto.offset.reset = earliest
    value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    max.partition.fetch.bytes = 1048576
    partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
    ssl.endpoint.identification.algorithm = null
    ssl.keystore.location = null
    ssl.truststore.location = null
    exclude.internal.topics = true
    ssl.keystore.password = null
    metrics.sample.window.ms = 30000
    security.protocol = PLAINTEXT
    metadata.max.age.ms = 300000
    auto.commit.interval.ms = 5000
    ssl.protocol = TLS
    sasl.kerberos.min.time.before.relogin = 60000
    connections.max.idle.ms = 540000
    ssl.trustmanager.algorithm = PKIX
    group.id = connect-hdfs-sink
    enable.auto.commit = false
    metric.reporters = []
    ssl.truststore.type = JKS
    send.buffer.bytes = 131072
    reconnect.backoff.ms = 50
    metrics.num.samples = 2
    ssl.keystore.type = JKS
    heartbeat.interval.ms = 3000
 (org.apache.kafka.clients.consumer.ConsumerConfig:178)
[2016-07-31 03:18:41,072] INFO ConsumerConfig values:
    interceptor.classes = null
    request.timeout.ms = 40000
    check.crcs = true
    ssl.truststore.password = null
    retry.backoff.ms = 100
    ssl.keymanager.algorithm = SunX509
    receive.buffer.bytes = 65536
    ssl.key.password = null
    ssl.cipher.suites = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.service.name = null
    ssl.provider = null
    session.timeout.ms = 30000
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    max.poll.records = 2147483647
    bootstrap.servers = [localhost:9092]
    client.id = consumer-1
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
    auto.offset.reset = earliest
    value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    max.partition.fetch.bytes = 1048576
    partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
    ssl.endpoint.identification.algorithm = null
    ssl.keystore.location = null
    ssl.truststore.location = null
    exclude.internal.topics = true
    ssl.keystore.password = null
    metrics.sample.window.ms = 30000
    security.protocol = PLAINTEXT
    metadata.max.age.ms = 300000
    auto.commit.interval.ms = 5000
    ssl.protocol = TLS
    sasl.kerberos.min.time.before.relogin = 60000
    connections.max.idle.ms = 540000
    ssl.trustmanager.algorithm = PKIX
    group.id = connect-hdfs-sink
    enable.auto.commit = false
    metric.reporters = []
    ssl.truststore.type = JKS
    send.buffer.bytes = 131072
    reconnect.backoff.ms = 50
    metrics.num.samples = 2
    ssl.keystore.type = JKS
    heartbeat.interval.ms = 3000
 (org.apache.kafka.clients.consumer.ConsumerConfig:178)
[2016-07-31 03:18:41,104] INFO Kafka version : 0.10.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser:83)
[2016-07-31 03:18:41,155] INFO Kafka commitId : 7aeb2e89dbc741f6 (org.apache.kafka.common.utils.AppInfoParser:84)
[2016-07-31 03:18:41,160] INFO Created connector hdfs-sink (org.apache.kafka.connect.cli.ConnectStandalone:91)
[2016-07-31 03:18:41,167] INFO HdfsSinkConnectorConfig values:
    kerberos.ticket.renew.period.ms = 3600000
    rotate.interval.ms = -1
    hadoop.home =
    partition.duration.ms = -1
    hdfs.namenode.principal =
    schema.cache.size = 1000
    format.class = io.confluent.connect.hdfs.avro.AvroFormat
    locale =
    hive.integration = true
    hive.metastore.uris = thrift://localhost:9083
    storage.class = io.confluent.connect.hdfs.storage.HdfsStorage
    retry.backoff.ms = 5000
    timezone =
    hive.database = default
    partition.field.name = department
    hadoop.conf.dir =
    connect.hdfs.principal =
    path.format =
    filename.offset.zero.pad.width = 10
    hive.conf.dir =
    flush.size = 2
    topics.dir = topics
    schema.compatibility = BACKWARD
    shutdown.timeout.ms = 3000
    hdfs.url = hdfs://localhost:9000
    connect.hdfs.keytab =
    hdfs.authentication.kerberos = false
    partitioner.class = io.confluent.connect.hdfs.partitioner.FieldPartitioner
    hive.home =
    logs.dir = logs
 (io.confluent.connect.hdfs.HdfsSinkConnectorConfig:178)
[2016-07-31 03:18:41,175] INFO Source task WorkerSourceTask{id=test-mysql-jdbc-0} finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:138)
[2016-07-31 03:18:41,194] INFO Hadoop configuration directory (io.confluent.connect.hdfs.DataWriter:94)
[2016-07-31 03:18:41,549] ERROR Task test-mysql-jdbc-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:142)
org.apache.kafka.connect.errors.DataException: Failed to serialize Avro data:
    at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:92)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:183)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:160)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.errors.SerializationException: Error serializing Avro message
Caused by: java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at java.net.Socket.connect(Socket.java:528)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
    at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
    at sun.net.www.http.HttpClient.New(HttpClient.java:308)
    at sun.net.www.http.HttpClient.New(HttpClient.java:326)
    at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:997)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:933)
    at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:851)
    at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1092)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:141)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:181)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:232)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:224)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:219)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:57)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:89)
    at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:72)
    at io.confluent.connect.avro.AvroConverter$Serializer.serialize(AvroConverter.java:120)
    at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:90)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:183)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:160)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
[2016-07-31 03:18:41,550] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:143)
[2016-07-31 03:18:42,322] WARN Unable to load native-hadoop library for your platform... using builtin-java classes where applicable (org.apache.hadoop.util.NativeCodeLoader:62)
[2016-07-31 03:18:45,726] INFO Trying to connect to metastore with URI thrift://localhost:9083 (hive.metastore:376)
[2016-07-31 03:18:47,440] INFO Connected to metastore. (hive.metastore:472)
[2016-07-31 03:18:47,890] INFO Sink task WorkerSinkTask{id=hdfs-sink-0} finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask:208)
[2016-07-31 03:18:48,157] WARN Error while fetching metadata with correlation id 1 : {test_jdbc_users=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient:600)
[2016-07-31 03:18:48,180] INFO Discovered coordinator vagrant-ubuntu-trusty-64:9092 (id: 2147483647 rack: null) for group connect-hdfs-sink. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:505)
[2016-07-31 03:18:48,181] INFO Revoking previously assigned partitions [] for group connect-hdfs-sink (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:280)
[2016-07-31 03:18:48,181] INFO (Re-)joining group connect-hdfs-sink (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:326)
[2016-07-31 03:18:48,283] INFO Successfully joined group connect-hdfs-sink with generation 1 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:434)
[2016-07-31 03:18:48,283] INFO Setting newly assigned partitions [] for group connect-hdfs-sink (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:219)
[2016-07-31 03:18:53,238] INFO Reflections took 15972 ms to scan 261 urls, producing 12056 keys and 79134 values (org.reflections.Reflections:229)