
End of LogType:stdout.This log file belongs to a running container (container_1579230610631_0052_01_000001) and so may not be complete.
***********************************************************************


End of LogType:prelaunch.err.This log file belongs to a running container (container_1579230610631_0052_01_000001) and so may not be complete.
******************************************************************************


Container: container_1579230610631_0052_01_000001 on cep-m.asia-southeast1-c.c.tngd-poc.internal:35671
LogAggregationType: LOCAL
======================================================================================================
LogType:stderr
LogLastModifiedTime:Wed Jan 29 15:36:03 +0800 2020
LogLength:562
LogContents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/spark/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
20/01/29 15:36:03 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at cep-m/10.148.0.24:8030
End of LogType:stderr.This log file belongs to a running container (container_1579230610631_0052_01_000001) and so may not be complete.
***********************************************************************


Container: container_1579230610631_0052_01_000001 on cep-m.asia-southeast1-c.c.tngd-poc.internal:35671
LogAggregationType: LOCAL
======================================================================================================
LogType:prelaunch.out
LogLastModifiedTime:Wed Jan 29 15:36:00 +0800 2020
LogLength:70
LogContents:
Setting up env variables
Setting up job resources
Launching container
End of LogType:prelaunch.out.This log file belongs to a running container (container_1579230610631_0052_01_000001) and so may not be complete.
******************************************************************************


End of LogType:stdout.This log file belongs to a running container (container_1579230610631_0052_01_000002) and so may not be complete.
***********************************************************************


End of LogType:prelaunch.err.This log file belongs to a running container (container_1579230610631_0052_01_000002) and so may not be complete.
******************************************************************************


Container: container_1579230610631_0052_01_000002 on cep-m.asia-southeast1-c.c.tngd-poc.internal:35671
LogAggregationType: LOCAL
======================================================================================================
LogType:stderr
LogLastModifiedTime:Wed Jan 29 15:37:13 +0800 2020
LogLength:55275
LogContents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/spark/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
20/01/29 15:36:59 INFO org.apache.kafka.clients.consumer.ConsumerConfig: ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = none
bootstrap.servers = [10.148.15.235:9092, 10.148.15.236:9092, 10.148.15.233:9092]
check.crcs = true
client.id =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = spark-kafka-source-6c0b12be-bce1-40fb-ab1f-b1b724c913ad--2015268696-executor
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer

20/01/29 15:36:59 INFO org.apache.kafka.common.utils.AppInfoParser: Kafka version : 2.0.0
20/01/29 15:36:59 INFO org.apache.kafka.common.utils.AppInfoParser: Kafka commitId : 3402a8361b734732
20/01/29 15:37:01 INFO org.apache.kafka.clients.Metadata: Cluster ID: e1RbbPptT8-w_CPlEK1hCQ
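
[Editor's note: the group.id prefix spark-kafka-source-... and the ByteArray deserializers above are characteristic of Spark Structured Streaming's Kafka source as instantiated on executors. A minimal sketch (Scala, Spark 2.4-era API) of the kind of source that produces this ConsumerConfig follows; the bootstrap servers are copied from the dump above, while the application and topic names are assumptions, since neither appears in this log.]

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("streaming_test1").getOrCreate()

// Source side: bootstrap servers match the ConsumerConfig dump above.
val input = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers",
    "10.148.15.235:9092,10.148.15.236:9092,10.148.15.233:9092")
  .option("subscribe", "test_input") // hypothetical topic name
  .load()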
20/01/29 15:37:05 INFO org.apache.kafka.clients.producer.ProducerConfig: ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [10.148.15.235:9092, 10.148.15.236:9092, 10.148.15.233:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer

20/01/29 15:37:05 INFO org.apache.kafka.common.utils.AppInfoParser: Kafka version : 2.0.0
20/01/29 15:37:05 INFO org.apache.kafka.common.utils.AppInfoParser: Kafka commitId : 3402a8361b734732
20/01/29 15:37:05 INFO org.apache.kafka.clients.Metadata: Cluster ID: e1RbbPptT8-w_CPlEK1hCQ
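
[Editor's note: a ProducerConfig right after the consumer setup means the query writes its results back to Kafka, and the state-store paths in the errors below point to a stateful aggregation checkpointed to GCS. A sketch under those assumptions; the aggregation and the output topic are stand-ins, while the checkpointLocation matches the gs:// state path in the IOExceptions further down.]

import org.apache.spark.sql.functions._

// Stand-in aggregation; the real query logic is not in this log.
val counts = input
  .selectExpr("CAST(value AS STRING) AS value", "timestamp")
  .withWatermark("timestamp", "10 minutes")
  .groupBy(window(col("timestamp"), "1 minute"), col("value"))
  .count()

val query = counts
  .selectExpr("CAST(value AS STRING) AS key", "CAST(count AS STRING) AS value")
  .writeStream
  .format("kafka")
  .option("kafka.bootstrap.servers",
    "10.148.15.235:9092,10.148.15.236:9092,10.148.15.233:9092")
  .option("topic", "aggregated_test_output") // hypothetical topic name
  .option("checkpointLocation",
    "gs://gcp-datawarehouse/streaming/checkpoints/streaming_test1-aggregated_test_input")
  .outputMode("update")
  .start()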
20/01/29 15:37:05 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
java.lang.NullPointerException
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.write(GoogleHadoopOutputStream.java:114)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at net.jpountz.lz4.LZ4BlockOutputStream.finish(LZ4BlockOutputStream.java:258)
    at net.jpountz.lz4.LZ4BlockOutputStream.close(LZ4BlockOutputStream.java:190)
    at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:303)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:274)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$cancelDeltaFile(HDFSBackedStateStoreProvider.scala:508)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
    at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/01/29 15:37:05 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 1.0 (TID 2)
org.apache.spark.util.TaskCompletionListenerException: null
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
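
[Editor's note: the same two traces repeat below for every retry of tasks 0, 1 and 152. Reading the first one bottom-up: the completion listener aborts the HDFS-backed state store, abort() has cancelDeltaFile hand the LZ4-compressed delta stream to IOUtils.closeQuietly, and closing the LZ4 stream flushes a final block into a GCS output stream whose backing channel is already gone. closeQuietly only swallows IOException, so the NullPointerException escapes the listener and fails the task. A self-contained sketch of that mechanism; DeadStream is a hypothetical stand-in for the torn-down GCS stream, not the connector's actual code.]

import java.io.{Closeable, IOException, OutputStream}

// Same contract as org.apache.commons.io.IOUtils.closeQuietly:
// only IOException is suppressed, runtime exceptions propagate.
def closeQuietly(c: Closeable): Unit =
  try { if (c != null) c.close() }
  catch { case _: IOException => () }

// Hypothetical stream whose backing channel was torn down, so the flush
// performed by close() dereferences null, the same shape as the NPE at
// GoogleHadoopOutputStream.write(GoogleHadoopOutputStream.java:114).
class DeadStream extends OutputStream {
  private val channel: OutputStream = null
  override def write(b: Int): Unit = channel.write(b)      // NPE: channel is null
  override def close(): Unit = { write(0); super.close() } // flush-on-close
}

try closeQuietly(new DeadStream())
catch { case e: NullPointerException => println(s"escaped closeQuietly: $e") }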
20/01/29 15:37:05 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
java.lang.NullPointerException
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.write(GoogleHadoopOutputStream.java:114)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at net.jpountz.lz4.LZ4BlockOutputStream.finish(LZ4BlockOutputStream.java:258)
    at net.jpountz.lz4.LZ4BlockOutputStream.close(LZ4BlockOutputStream.java:190)
    at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:303)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:274)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$cancelDeltaFile(HDFSBackedStateStoreProvider.scala:508)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
    at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/01/29 15:37:05 ERROR org.apache.spark.executor.Executor: Exception in task 152.0 in stage 1.0 (TID 1)
org.apache.spark.util.TaskCompletionListenerException: null
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/01/29 15:37:07 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
java.lang.NullPointerException
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.write(GoogleHadoopOutputStream.java:114)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at net.jpountz.lz4.LZ4BlockOutputStream.finish(LZ4BlockOutputStream.java:258)
    at net.jpountz.lz4.LZ4BlockOutputStream.close(LZ4BlockOutputStream.java:190)
    at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:303)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:274)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$cancelDeltaFile(HDFSBackedStateStoreProvider.scala:508)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
    at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/01/29 15:37:07 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 1.0 (TID 3)
org.apache.spark.util.TaskCompletionListenerException: null
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/01/29 15:37:07 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
java.lang.NullPointerException
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.write(GoogleHadoopOutputStream.java:114)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at net.jpountz.lz4.LZ4BlockOutputStream.finish(LZ4BlockOutputStream.java:258)
    at net.jpountz.lz4.LZ4BlockOutputStream.close(LZ4BlockOutputStream.java:190)
    at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:303)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:274)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$cancelDeltaFile(HDFSBackedStateStoreProvider.scala:508)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
    at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/01/29 15:37:07 ERROR org.apache.spark.executor.Executor: Exception in task 0.1 in stage 1.0 (TID 4)
org.apache.spark.util.TaskCompletionListenerException: null
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/01/29 15:37:09 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
java.lang.NullPointerException
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.write(GoogleHadoopOutputStream.java:114)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at net.jpountz.lz4.LZ4BlockOutputStream.finish(LZ4BlockOutputStream.java:258)
    at net.jpountz.lz4.LZ4BlockOutputStream.close(LZ4BlockOutputStream.java:190)
    at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:303)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:274)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$cancelDeltaFile(HDFSBackedStateStoreProvider.scala:508)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
    at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/01/29 15:37:09 ERROR org.apache.spark.executor.Executor: Exception in task 152.1 in stage 1.0 (TID 5)
org.apache.spark.util.TaskCompletionListenerException: null
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/01/29 15:37:09 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
java.lang.NullPointerException
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.write(GoogleHadoopOutputStream.java:114)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at net.jpountz.lz4.LZ4BlockOutputStream.finish(LZ4BlockOutputStream.java:258)
    at net.jpountz.lz4.LZ4BlockOutputStream.close(LZ4BlockOutputStream.java:190)
    at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:303)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:274)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$cancelDeltaFile(HDFSBackedStateStoreProvider.scala:508)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
    at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/01/29 15:37:09 ERROR org.apache.spark.executor.Executor: Exception in task 1.1 in stage 1.0 (TID 6)
org.apache.spark.util.TaskCompletionListenerException: null
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/01/29 15:37:10 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
java.lang.NullPointerException
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.write(GoogleHadoopOutputStream.java:114)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at net.jpountz.lz4.LZ4BlockOutputStream.finish(LZ4BlockOutputStream.java:258)
    at net.jpountz.lz4.LZ4BlockOutputStream.close(LZ4BlockOutputStream.java:190)
    at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:303)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:274)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$cancelDeltaFile(HDFSBackedStateStoreProvider.scala:508)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
    at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/01/29 15:37:10 ERROR org.apache.spark.executor.Executor: Exception in task 0.2 in stage 1.0 (TID 7)
org.apache.spark.util.TaskCompletionListenerException: null
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/01/29 15:37:11 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
java.lang.NullPointerException
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.write(GoogleHadoopOutputStream.java:114)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at net.jpountz.lz4.LZ4BlockOutputStream.finish(LZ4BlockOutputStream.java:258)
    at net.jpountz.lz4.LZ4BlockOutputStream.close(LZ4BlockOutputStream.java:190)
    at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:303)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:274)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$cancelDeltaFile(HDFSBackedStateStoreProvider.scala:508)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
    at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/01/29 15:37:11 ERROR org.apache.spark.executor.Executor: Exception in task 152.2 in stage 1.0 (TID 8)
org.apache.spark.util.TaskCompletionListenerException: null
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/01/29 15:37:12 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
java.lang.NullPointerException
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.write(GoogleHadoopOutputStream.java:114)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at net.jpountz.lz4.LZ4BlockOutputStream.finish(LZ4BlockOutputStream.java:258)
    at net.jpountz.lz4.LZ4BlockOutputStream.close(LZ4BlockOutputStream.java:190)
    at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:303)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:274)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$cancelDeltaFile(HDFSBackedStateStoreProvider.scala:508)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
    at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/01/29 15:37:12 ERROR org.apache.spark.executor.Executor: Exception in task 1.2 in stage 1.0 (TID 9)
org.apache.spark.util.TaskCompletionListenerException: null
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/01/29 15:37:12 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
java.lang.NullPointerException
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.write(GoogleHadoopOutputStream.java:114)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at net.jpountz.lz4.LZ4BlockOutputStream.finish(LZ4BlockOutputStream.java:258)
    at net.jpountz.lz4.LZ4BlockOutputStream.close(LZ4BlockOutputStream.java:190)
    at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:303)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:274)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$cancelDeltaFile(HDFSBackedStateStoreProvider.scala:508)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
    at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/01/29 15:37:12 ERROR org.apache.spark.executor.Executor: Exception in task 0.3 in stage 1.0 (TID 10)
org.apache.spark.util.TaskCompletionListenerException: null
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/01/29 15:37:12 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
java.io.IOException: Filed to get file info for 'gs://gcp-datawarehouse/streaming/checkpoints/streaming_test1-aggregated_test_input/state/0/1/.1.delta.53e2201e-4d95-42ca-bf35-6bef4fa4015c.TID12.tmp/'
    at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getFileInfoInternal(GoogleCloudStorageFileSystem.java:1171)
    at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getFileInfo(GoogleCloudStorageFileSystem.java:1116)
    at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.exists(GoogleCloudStorageFileSystem.java:440)
    at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.create(GoogleCloudStorageFileSystem.java:252)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.createChannel(GoogleHadoopOutputStream.java:82)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.<init>(GoogleHadoopOutputStream.java:74)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.create(GoogleHadoopFileSystemBase.java:797)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS.createInternal(GoogleHadoopFS.java:95)
    at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:605)
    at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:703)
    at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:699)
    at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
    at org.apache.hadoop.fs.FileContext.create(FileContext.java:705)
    at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.createTempFile(CheckpointFileManager.scala:311)
    at org.apache.spark.sql.execution.streaming.CheckpointFileManager$RenameBasedFSDataOutputStream.<init>(CheckpointFileManager.scala:133)
    at org.apache.spark.sql.execution.streaming.CheckpointFileManager$RenameBasedFSDataOutputStream.<init>(CheckpointFileManager.scala:136)
    at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.createAtomic(CheckpointFileManager.scala:318)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.deltaFileStream$lzycompute(HDFSBackedStateStoreProvider.scala:95)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.deltaFileStream(HDFSBackedStateStoreProvider.scala:95)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.compressedStream$lzycompute(HDFSBackedStateStoreProvider.scala:96)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.compressedStream(HDFSBackedStateStoreProvider.scala:96)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
    at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:133)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException
    at com.google.cloud.hadoop.repackaged.gcs.com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:509)
    at com.google.cloud.hadoop.repackaged.gcs.com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:82)
    at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.util.LazyExecutorService$ExecutingFutureImpl$Delegated.get(LazyExecutorService.java:529)
    at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.util.LazyExecutorService$ExecutingFutureImpl$Created.get(LazyExecutorService.java:420)
    at com.google.cloud.hadoop.repackaged.gcs.com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:62)
    at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getFileInfoInternal(GoogleCloudStorageFileSystem.java:1160)
    ... 39 more
20/01/29 15:37:12 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
java.io.IOException: Filed to get file info for 'gs://gcp-datawarehouse/streaming/checkpoints/streaming_test1-aggregated_test_input/state/0/1/.1.delta.a3d881ae-2199-4cab-8533-9744cee5d480.TID12.tmp/'
    at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getFileInfoInternal(GoogleCloudStorageFileSystem.java:1171)
    at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getFileInfo(GoogleCloudStorageFileSystem.java:1116)
    at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.exists(GoogleCloudStorageFileSystem.java:440)
    at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.create(GoogleCloudStorageFileSystem.java:252)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.createChannel(GoogleHadoopOutputStream.java:82)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.<init>(GoogleHadoopOutputStream.java:74)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.create(GoogleHadoopFileSystemBase.java:797)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS.createInternal(GoogleHadoopFS.java:95)
    at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:605)
    at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:703)
    at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:699)
    at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
    at org.apache.hadoop.fs.FileContext.create(FileContext.java:705)
    at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.createTempFile(CheckpointFileManager.scala:311)
    at org.apache.spark.sql.execution.streaming.CheckpointFileManager$RenameBasedFSDataOutputStream.<init>(CheckpointFileManager.scala:133)
    at org.apache.spark.sql.execution.streaming.CheckpointFileManager$RenameBasedFSDataOutputStream.<init>(CheckpointFileManager.scala:136)
    at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.createAtomic(CheckpointFileManager.scala:318)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.deltaFileStream$lzycompute(HDFSBackedStateStoreProvider.scala:95)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.deltaFileStream(HDFSBackedStateStoreProvider.scala:95)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.compressedStream$lzycompute(HDFSBackedStateStoreProvider.scala:96)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.compressedStream(HDFSBackedStateStoreProvider.scala:96)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
    at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:133)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException
    at com.google.cloud.hadoop.repackaged.gcs.com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:509)
    at com.google.cloud.hadoop.repackaged.gcs.com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:82)
    at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.util.LazyExecutorService$ExecutingFutureImpl$Delegated.get(LazyExecutorService.java:529)
    at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.util.LazyExecutorService$ExecutingFutureImpl$Created.get(LazyExecutorService.java:420)
    at com.google.cloud.hadoop.repackaged.gcs.com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:62)
    at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getFileInfoInternal(GoogleCloudStorageFileSystem.java:1160)
    ... 39 more
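
[Editor's note: both IOExceptions come from the abort path trying to recreate the delta file's temp stream on GCS after the task has been interrupted (the Caused by: java.lang.InterruptedException), so the rename-based checkpoint manager can never complete. One commonly tried mitigation, stated here as an assumption rather than a verified fix for this cluster, is to keep the streaming checkpoint, and with it the state store delta files, on the cluster's HDFS rather than on GCS.]

// Same sink as the earlier sketch, with only the checkpoint moved off GCS.
// The hdfs:// path is hypothetical.
val queryOnHdfs = counts
  .selectExpr("CAST(value AS STRING) AS key", "CAST(count AS STRING) AS value")
  .writeStream
  .format("kafka")
  .option("kafka.bootstrap.servers",
    "10.148.15.235:9092,10.148.15.236:9092,10.148.15.233:9092")
  .option("topic", "aggregated_test_output") // hypothetical topic name
  .option("checkpointLocation",
    "hdfs:///user/spark/checkpoints/streaming_test1-aggregated_test_input")
  .outputMode("update")
  .start()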
20/01/29 15:37:13 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
java.lang.NullPointerException
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.write(GoogleHadoopOutputStream.java:114)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at net.jpountz.lz4.LZ4BlockOutputStream.finish(LZ4BlockOutputStream.java:258)
    at net.jpountz.lz4.LZ4BlockOutputStream.close(LZ4BlockOutputStream.java:190)
    at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:303)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:274)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$cancelDeltaFile(HDFSBackedStateStoreProvider.scala:508)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
    at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:133)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
End of LogType:stderr.This log file belongs to a running container (container_1579230610631_0052_01_000002) and so may not be complete.
***********************************************************************


Container: container_1579230610631_0052_01_000002 on cep-m.asia-southeast1-c.c.tngd-poc.internal:35671
LogAggregationType: LOCAL
======================================================================================================
LogType:prelaunch.out
LogLastModifiedTime:Wed Jan 29 15:36:05 +0800 2020
LogLength:70
LogContents:
Setting up env variables
Setting up job resources
Launching container
End of LogType:prelaunch.out.This log file belongs to a running container (container_1579230610631_0052_01_000002) and so may not be complete.
******************************************************************************