End of LogType:stdout.This log file belongs to a running container (container_1579230610631_0052_01_000001) and so may not be complete.
***********************************************************************
End of LogType:prelaunch.err.This log file belongs to a running container (container_1579230610631_0052_01_000001) and so may not be complete.
******************************************************************************
Container: container_1579230610631_0052_01_000001 on cep-m.asia-southeast1-c.c.tngd-poc.internal:35671
LogAggregationType: LOCAL
======================================================================================================
LogType:stderr
LogLastModifiedTime:Wed Jan 29 15:36:03 +0800 2020
LogLength:562
LogContents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/spark/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
20/01/29 15:36:03 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at cep-m/10.148.0.24:8030
End of LogType:stderr.This log file belongs to a running container (container_1579230610631_0052_01_000001) and so may not be complete.
***********************************************************************
Container: container_1579230610631_0052_01_000001 on cep-m.asia-southeast1-c.c.tngd-poc.internal:35671
LogAggregationType: LOCAL
======================================================================================================
LogType:prelaunch.out
LogLastModifiedTime:Wed Jan 29 15:36:00 +0800 2020
LogLength:70
LogContents:
Setting up env variables
Setting up job resources
Launching container
End of LogType:prelaunch.out.This log file belongs to a running container (container_1579230610631_0052_01_000001) and so may not be complete.
******************************************************************************
End of LogType:stdout.This log file belongs to a running container (container_1579230610631_0052_01_000002) and so may not be complete.
***********************************************************************
End of LogType:prelaunch.err.This log file belongs to a running container (container_1579230610631_0052_01_000002) and so may not be complete.
******************************************************************************
Container: container_1579230610631_0052_01_000002 on cep-m.asia-southeast1-c.c.tngd-poc.internal:35671
LogAggregationType: LOCAL
======================================================================================================
LogType:stderr
LogLastModifiedTime:Wed Jan 29 15:37:13 +0800 2020
LogLength:55275
LogContents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/spark/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
20/01/29 15:36:59 INFO org.apache.kafka.clients.consumer.ConsumerConfig: ConsumerConfig values:
    auto.commit.interval.ms = 5000
    auto.offset.reset = none
    bootstrap.servers = [10.148.15.235:9092, 10.148.15.236:9092, 10.148.15.233:9092]
    check.crcs = true
    client.id =
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = spark-kafka-source-6c0b12be-bce1-40fb-ab1f-b1b724c913ad--2015268696-executor
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = true
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
20/01/29 15:36:59 INFO org.apache.kafka.common.utils.AppInfoParser: Kafka version : 2.0.0
20/01/29 15:36:59 INFO org.apache.kafka.common.utils.AppInfoParser: Kafka commitId : 3402a8361b734732
20/01/29 15:37:01 INFO org.apache.kafka.clients.Metadata: Cluster ID: e1RbbPptT8-w_CPlEK1hCQ
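A config dump like the one above is easiest to read as a dictionary, diffed against stock Kafka defaults; the entries Spark's Kafka source sets itself (the generated `group.id` ending in `-executor`, `enable.auto.commit = false`, `auto.offset.reset = none`) then stand out. A small helper for pulling such a dump into a dict — a sketch that assumes the `key = value` layout shown in this log, including possible pasted `- ` bullet prefixes:

```python
def parse_config_dump(text: str) -> dict:
    """Parse a Kafka ConsumerConfig/ProducerConfig log dump into a dict.

    Lines are expected to look like "fetch.max.bytes = 52428800"; anything
    without " = " (timestamps, SLF4J noise, separators) is skipped.
    """
    config = {}
    for line in text.splitlines():
        line = line.strip().lstrip("- ")  # tolerate pasted bullet prefixes
        if " = " in line:
            key, _, value = line.partition(" = ")
            config[key.strip()] = value.strip()
    return config

# Three entries copied from the dump above, as a usage example:
dump = """
auto.commit.interval.ms = 5000
- enable.auto.commit = false
group.id = spark-kafka-source-6c0b12be-bce1-40fb-ab1f-b1b724c913ad--2015268696-executor
"""
cfg = parse_config_dump(dump)
```

With this, `cfg["enable.auto.commit"]` is `"false"` and the dump can be compared key-by-key against another environment's dump.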
20/01/29 15:37:05 INFO org.apache.kafka.clients.producer.ProducerConfig: ProducerConfig values:
    acks = 1
    batch.size = 16384
    bootstrap.servers = [10.148.15.235:9092, 10.148.15.236:9092, 10.148.15.233:9092]
    buffer.memory = 33554432
    client.id =
    compression.type = none
    connections.max.idle.ms = 540000
    enable.idempotence = false
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    linger.ms = 0
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 0
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
20/01/29 15:37:05 INFO org.apache.kafka.common.utils.AppInfoParser: Kafka version : 2.0.0
20/01/29 15:37:05 INFO org.apache.kafka.common.utils.AppInfoParser: Kafka commitId : 3402a8361b734732
20/01/29 15:37:05 INFO org.apache.kafka.clients.Metadata: Cluster ID: e1RbbPptT8-w_CPlEK1hCQ
20/01/29 15:37:05 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
java.lang.NullPointerException
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.write(GoogleHadoopOutputStream.java:114)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at net.jpountz.lz4.LZ4BlockOutputStream.finish(LZ4BlockOutputStream.java:258)
    at net.jpountz.lz4.LZ4BlockOutputStream.close(LZ4BlockOutputStream.java:190)
    at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:303)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:274)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$cancelDeltaFile(HDFSBackedStateStoreProvider.scala:508)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
    at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/01/29 15:37:05 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 1.0 (TID 2)
org.apache.spark.util.TaskCompletionListenerException: null
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/01/29 15:37:05 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
java.lang.NullPointerException
    [stack trace identical to the first NullPointerException above]
20/01/29 15:37:05 ERROR org.apache.spark.executor.Executor: Exception in task 152.0 in stage 1.0 (TID 1)
org.apache.spark.util.TaskCompletionListenerException: null
    [stack trace identical to the first TaskCompletionListenerException above]
20/01/29 15:37:07 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
java.lang.NullPointerException
    [stack trace identical to the first NullPointerException above]
20/01/29 15:37:07 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 1.0 (TID 3)
org.apache.spark.util.TaskCompletionListenerException: null
    [stack trace identical to the first TaskCompletionListenerException above]
20/01/29 15:37:07 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
java.lang.NullPointerException
    [stack trace identical to the first NullPointerException above]
20/01/29 15:37:07 ERROR org.apache.spark.executor.Executor: Exception in task 0.1 in stage 1.0 (TID 4)
org.apache.spark.util.TaskCompletionListenerException: null
    [stack trace identical to the first TaskCompletionListenerException above]
20/01/29 15:37:09 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
java.lang.NullPointerException
    [stack trace identical to the first NullPointerException above]
20/01/29 15:37:09 ERROR org.apache.spark.executor.Executor: Exception in task 152.1 in stage 1.0 (TID 5)
org.apache.spark.util.TaskCompletionListenerException: null
    [stack trace identical to the first TaskCompletionListenerException above]
20/01/29 15:37:09 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
java.lang.NullPointerException
    [stack trace identical to the first NullPointerException above]
20/01/29 15:37:09 ERROR org.apache.spark.executor.Executor: Exception in task 1.1 in stage 1.0 (TID 6)
org.apache.spark.util.TaskCompletionListenerException: null
    [stack trace identical to the first TaskCompletionListenerException above]
20/01/29 15:37:10 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
java.lang.NullPointerException
    [stack trace identical to the first NullPointerException above]
20/01/29 15:37:10 ERROR org.apache.spark.executor.Executor: Exception in task 0.2 in stage 1.0 (TID 7)
org.apache.spark.util.TaskCompletionListenerException: null
    [stack trace identical to the first TaskCompletionListenerException above; the pasted log is cut off mid-trace here]
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
- at java.lang.Thread.run(Thread.java:748)
- 20/01/29 15:37:11 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
- java.lang.NullPointerException
- at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.write(GoogleHadoopOutputStream.java:114)
- at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
- at java.io.DataOutputStream.write(DataOutputStream.java:107)
- at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
- at java.io.DataOutputStream.write(DataOutputStream.java:107)
- at net.jpountz.lz4.LZ4BlockOutputStream.finish(LZ4BlockOutputStream.java:258)
- at net.jpountz.lz4.LZ4BlockOutputStream.close(LZ4BlockOutputStream.java:190)
- at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
- at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:303)
- at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:274)
- at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$cancelDeltaFile(HDFSBackedStateStoreProvider.scala:508)
- at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
- at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
- at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
- at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
- at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
- at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
- at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
- at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
- at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
- at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
- at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
- at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
- at org.apache.spark.scheduler.Task.run(Task.scala:139)
- at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
- at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
- at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
- at java.lang.Thread.run(Thread.java:748)
- 20/01/29 15:37:11 ERROR org.apache.spark.executor.Executor: Exception in task 152.2 in stage 1.0 (TID 8)
- org.apache.spark.util.TaskCompletionListenerException: null
- at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
- at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
- at org.apache.spark.scheduler.Task.run(Task.scala:139)
- at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
- at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
- at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
- at java.lang.Thread.run(Thread.java:748)
- 20/01/29 15:37:12 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
- java.lang.NullPointerException
- at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.write(GoogleHadoopOutputStream.java:114)
- at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
- at java.io.DataOutputStream.write(DataOutputStream.java:107)
- at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
- at java.io.DataOutputStream.write(DataOutputStream.java:107)
- at net.jpountz.lz4.LZ4BlockOutputStream.finish(LZ4BlockOutputStream.java:258)
- at net.jpountz.lz4.LZ4BlockOutputStream.close(LZ4BlockOutputStream.java:190)
- at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
- at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:303)
- at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:274)
- at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$cancelDeltaFile(HDFSBackedStateStoreProvider.scala:508)
- at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
- at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
- at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
- at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
- at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
- at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
- at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
- at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
- at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
- at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
- at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
- at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
- at org.apache.spark.scheduler.Task.run(Task.scala:139)
- at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
- at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
- at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
- at java.lang.Thread.run(Thread.java:748)
- 20/01/29 15:37:12 ERROR org.apache.spark.executor.Executor: Exception in task 1.2 in stage 1.0 (TID 9)
- org.apache.spark.util.TaskCompletionListenerException: null
- at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
- at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
- at org.apache.spark.scheduler.Task.run(Task.scala:139)
- at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
- at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
- at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
- at java.lang.Thread.run(Thread.java:748)
- 20/01/29 15:37:12 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
- java.lang.NullPointerException
- at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.write(GoogleHadoopOutputStream.java:114)
- at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
- at java.io.DataOutputStream.write(DataOutputStream.java:107)
- at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
- at java.io.DataOutputStream.write(DataOutputStream.java:107)
- at net.jpountz.lz4.LZ4BlockOutputStream.finish(LZ4BlockOutputStream.java:258)
- at net.jpountz.lz4.LZ4BlockOutputStream.close(LZ4BlockOutputStream.java:190)
- at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
- at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:303)
- at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:274)
- at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$cancelDeltaFile(HDFSBackedStateStoreProvider.scala:508)
- at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
- at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
- at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
- at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
- at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
- at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
- at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
- at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
- at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
- at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
- at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
- at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
- at org.apache.spark.scheduler.Task.run(Task.scala:139)
- at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
- at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
- at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
- at java.lang.Thread.run(Thread.java:748)
- 20/01/29 15:37:12 ERROR org.apache.spark.executor.Executor: Exception in task 0.3 in stage 1.0 (TID 10)
- org.apache.spark.util.TaskCompletionListenerException: null
- at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
- at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
- at org.apache.spark.scheduler.Task.run(Task.scala:139)
- at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
- at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
- at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
- at java.lang.Thread.run(Thread.java:748)
- 20/01/29 15:37:12 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
- java.io.IOException: Filed to get file info for 'gs://gcp-datawarehouse/streaming/checkpoints/streaming_test1-aggregated_test_input/state/0/1/.1.delta.53e2201e-4d95-42ca-bf35-6bef4fa4015c.TID12.tmp/'
- at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getFileInfoInternal(GoogleCloudStorageFileSystem.java:1171)
- at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getFileInfo(GoogleCloudStorageFileSystem.java:1116)
- at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.exists(GoogleCloudStorageFileSystem.java:440)
- at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.create(GoogleCloudStorageFileSystem.java:252)
- at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.createChannel(GoogleHadoopOutputStream.java:82)
- at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.<init>(GoogleHadoopOutputStream.java:74)
- at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.create(GoogleHadoopFileSystemBase.java:797)
- at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS.createInternal(GoogleHadoopFS.java:95)
- at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:605)
- at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:703)
- at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:699)
- at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
- at org.apache.hadoop.fs.FileContext.create(FileContext.java:705)
- at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.createTempFile(CheckpointFileManager.scala:311)
- at org.apache.spark.sql.execution.streaming.CheckpointFileManager$RenameBasedFSDataOutputStream.<init>(CheckpointFileManager.scala:133)
- at org.apache.spark.sql.execution.streaming.CheckpointFileManager$RenameBasedFSDataOutputStream.<init>(CheckpointFileManager.scala:136)
- at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.createAtomic(CheckpointFileManager.scala:318)
- at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.deltaFileStream$lzycompute(HDFSBackedStateStoreProvider.scala:95)
- at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.deltaFileStream(HDFSBackedStateStoreProvider.scala:95)
- at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.compressedStream$lzycompute(HDFSBackedStateStoreProvider.scala:96)
- at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.compressedStream(HDFSBackedStateStoreProvider.scala:96)
- at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
- at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
- at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
- at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
- at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
- at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
- at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
- at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
- at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
- at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
- at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
- at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
- at org.apache.spark.scheduler.Task.run(Task.scala:133)
- at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
- at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
- at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
- at java.lang.Thread.run(Thread.java:748)
- Caused by: java.lang.InterruptedException
- at com.google.cloud.hadoop.repackaged.gcs.com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:509)
- at com.google.cloud.hadoop.repackaged.gcs.com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:82)
- at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.util.LazyExecutorService$ExecutingFutureImpl$Delegated.get(LazyExecutorService.java:529)
- at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.util.LazyExecutorService$ExecutingFutureImpl$Created.get(LazyExecutorService.java:420)
- at com.google.cloud.hadoop.repackaged.gcs.com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:62)
- at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getFileInfoInternal(GoogleCloudStorageFileSystem.java:1160)
- ... 39 more
- 20/01/29 15:37:12 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
- java.io.IOException: Filed to get file info for 'gs://gcp-datawarehouse/streaming/checkpoints/streaming_test1-aggregated_test_input/state/0/1/.1.delta.a3d881ae-2199-4cab-8533-9744cee5d480.TID12.tmp/'
- at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getFileInfoInternal(GoogleCloudStorageFileSystem.java:1171)
- at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getFileInfo(GoogleCloudStorageFileSystem.java:1116)
- at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.exists(GoogleCloudStorageFileSystem.java:440)
- at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.create(GoogleCloudStorageFileSystem.java:252)
- at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.createChannel(GoogleHadoopOutputStream.java:82)
- at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.<init>(GoogleHadoopOutputStream.java:74)
- at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.create(GoogleHadoopFileSystemBase.java:797)
- at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS.createInternal(GoogleHadoopFS.java:95)
- at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:605)
- at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:703)
- at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:699)
- at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
- at org.apache.hadoop.fs.FileContext.create(FileContext.java:705)
- at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.createTempFile(CheckpointFileManager.scala:311)
- at org.apache.spark.sql.execution.streaming.CheckpointFileManager$RenameBasedFSDataOutputStream.<init>(CheckpointFileManager.scala:133)
- at org.apache.spark.sql.execution.streaming.CheckpointFileManager$RenameBasedFSDataOutputStream.<init>(CheckpointFileManager.scala:136)
- at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.createAtomic(CheckpointFileManager.scala:318)
- at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.deltaFileStream$lzycompute(HDFSBackedStateStoreProvider.scala:95)
- at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.deltaFileStream(HDFSBackedStateStoreProvider.scala:95)
- at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.compressedStream$lzycompute(HDFSBackedStateStoreProvider.scala:96)
- at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.compressedStream(HDFSBackedStateStoreProvider.scala:96)
- at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
- at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
- at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
- at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
- at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
- at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
- at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
- at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
- at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
- at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
- at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
- at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
- at org.apache.spark.scheduler.Task.run(Task.scala:133)
- at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
- at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
- at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
- at java.lang.Thread.run(Thread.java:748)
- Caused by: java.lang.InterruptedException
- at com.google.cloud.hadoop.repackaged.gcs.com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:509)
- at com.google.cloud.hadoop.repackaged.gcs.com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:82)
- at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.util.LazyExecutorService$ExecutingFutureImpl$Delegated.get(LazyExecutorService.java:529)
- at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.util.LazyExecutorService$ExecutingFutureImpl$Created.get(LazyExecutorService.java:420)
- at com.google.cloud.hadoop.repackaged.gcs.com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:62)
- at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getFileInfoInternal(GoogleCloudStorageFileSystem.java:1160)
- ... 39 more
- 20/01/29 15:37:13 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
- java.lang.NullPointerException
- at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.write(GoogleHadoopOutputStream.java:114)
- at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
- at java.io.DataOutputStream.write(DataOutputStream.java:107)
- at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
- at java.io.DataOutputStream.write(DataOutputStream.java:107)
- at net.jpountz.lz4.LZ4BlockOutputStream.finish(LZ4BlockOutputStream.java:258)
- at net.jpountz.lz4.LZ4BlockOutputStream.close(LZ4BlockOutputStream.java:190)
- at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
- at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:303)
- at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:274)
- at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$cancelDeltaFile(HDFSBackedStateStoreProvider.scala:508)
- at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
- at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
- at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
- at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
- at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
- at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
- at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
- at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
- at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
- at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
- at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
- at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
- at org.apache.spark.scheduler.Task.run(Task.scala:133)
- at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
- at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
- at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
- at java.lang.Thread.run(Thread.java:748)
- End of LogType:stderr.This log file belongs to a running container (container_1579230610631_0052_01_000002) and so may not be complete.
- ***********************************************************************
- Container: container_1579230610631_0052_01_000002 on cep-m.asia-southeast1-c.c.tngd-poc.internal:35671
- LogAggregationType: LOCAL
- ======================================================================================================
- LogType:prelaunch.out
- LogLastModifiedTime:Wed Jan 29 15:36:05 +0800 2020
- LogLength:70
- LogContents:
- Setting up env variables
- Setting up job resources
- Launching container
- End of LogType:prelaunch.out.This log file belongs to a running container (container_1579230610631_0052_01_000002) and so may not be complete.
- ******************************************************************************