WARN Utils: Your hostname, fhm1g612sp8ubi9h9lgd resolves to a loopback address: 127.0.1.1; using 10.128.0.7 instead (on interface eth0)
23/01/31 18:42:37 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
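The two warnings above are benign: the hostname resolved to a loopback address, so Spark fell back to the interface address 10.128.0.7. If a specific bind address is needed, `SPARK_LOCAL_IP` can be set in the driver's environment before the session starts. A minimal sketch (the address here is an assumption, substitute the interface you actually want):

```python
import os

# Hypothetical bind address; replace with your own interface address.
os.environ["SPARK_LOCAL_IP"] = "10.128.0.7"

# Any SparkSession created later in this process (or any spark-submit
# launched from a shell with this variable exported) binds to the
# address above instead of the auto-detected one.
print(os.environ["SPARK_LOCAL_IP"])
```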
:: loading settings :: url = jar:file:/opt/spark/jars/ivy-2.5.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
Ivy Default Cache set to: /root/.ivy2/cache
The jars for the packages stored in: /root/.ivy2/jars
org.apache.spark#spark-sql-kafka-0-10_2.12 added as a dependency
org.postgresql#postgresql added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-59e260ee-8ee9-4cf6-96be-52538701cc7c;1.0
	confs: [default]
	found org.apache.spark#spark-sql-kafka-0-10_2.12;3.3.0 in central
	found org.apache.spark#spark-token-provider-kafka-0-10_2.12;3.3.0 in central
	found org.apache.kafka#kafka-clients;2.8.1 in central
	found org.lz4#lz4-java;1.8.0 in central
	found org.xerial.snappy#snappy-java;1.1.8.4 in central
	found org.slf4j#slf4j-api;1.7.32 in central
	found org.apache.hadoop#hadoop-client-runtime;3.3.2 in central
	found org.spark-project.spark#unused;1.0.0 in central
	found org.apache.hadoop#hadoop-client-api;3.3.2 in central
	found commons-logging#commons-logging;1.1.3 in central
	found com.google.code.findbugs#jsr305;3.0.0 in central
	found org.apache.commons#commons-pool2;2.11.1 in central
	found org.postgresql#postgresql;42.4.0 in central
	found org.checkerframework#checker-qual;3.5.0 in central
:: resolution report :: resolve 1099ms :: artifacts dl 31ms
	:: modules in use:
	com.google.code.findbugs#jsr305;3.0.0 from central in [default]
	commons-logging#commons-logging;1.1.3 from central in [default]
	org.apache.commons#commons-pool2;2.11.1 from central in [default]
	org.apache.hadoop#hadoop-client-api;3.3.2 from central in [default]
	org.apache.hadoop#hadoop-client-runtime;3.3.2 from central in [default]
	org.apache.kafka#kafka-clients;2.8.1 from central in [default]
	org.apache.spark#spark-sql-kafka-0-10_2.12;3.3.0 from central in [default]
	org.apache.spark#spark-token-provider-kafka-0-10_2.12;3.3.0 from central in [default]
	org.checkerframework#checker-qual;3.5.0 from central in [default]
	org.lz4#lz4-java;1.8.0 from central in [default]
	org.postgresql#postgresql;42.4.0 from central in [default]
	org.slf4j#slf4j-api;1.7.32 from central in [default]
	org.spark-project.spark#unused;1.0.0 from central in [default]
	org.xerial.snappy#snappy-java;1.1.8.4 from central in [default]
	---------------------------------------------------------------------
	|                  |            modules            ||   artifacts   |
	|       conf       | number| search|dwnlded|evicted|| number|dwnlded|
	---------------------------------------------------------------------
	|      default     |   14  |   0   |   0   |   0   ||   14  |   0   |
	---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent-59e260ee-8ee9-4cf6-96be-52538701cc7c
	confs: [default]
	0 artifacts copied, 14 already retrieved (0kB/14ms)
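The Ivy resolution above is driven by just two top-level Maven coordinates (the Kafka source connector and the PostgreSQL JDBC driver); the other twelve modules are their transitive dependencies. A sketch of how the coordinate string is assembled in Python — the versions come straight from the report, while the launch commands in the comments are illustrative assumptions:

```python
# Top-level Maven coordinates, exactly as resolved in the report above.
packages = ",".join([
    "org.apache.spark:spark-sql-kafka-0-10_2.12:3.3.0",
    "org.postgresql:postgresql:42.4.0",
])

# Either launch style triggers the Ivy resolution logged above:
#   spark-submit --packages "<packages>" your_app.py
#   SparkSession.builder.config("spark.jars.packages", packages)
print(packages)
```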
23/01/31 18:42:39 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
SPARK_INIT DONE
INCOMING MESSAGE SCHEMA: StructType([StructField('restaurant_id', StringType(), True), StructField('adv_campaign_id', StringType(), True), StructField('adv_campaign_content', StringType(), True), StructField('adv_campaign_owner', StringType(), True), StructField('adv_campaign_owner_contact', StringType(), True), StructField('adv_campaign_datetime_start', LongType(), True), StructField('adv_campaign_datetime_end', LongType(), True), StructField('datetime_created', LongType(), True)])
23/01/31 18:42:48 WARN ResolveWriteToStream: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
root@fhm1g612sp8ubi9h9lgd:/lessons# 23/01/31 18:42:58 ERROR Inbox: Ignoring error
java.util.concurrent.RejectedExecutionException: Task org.apache.spark.executor.Executor$TaskRunner@5036216 rejected from java.util.concurrent.ThreadPoolExecutor@6d007da6[Shutting down, pool size = 2, active threads = 2, queued tasks = 0, completed tasks = 203]
	at java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.reject(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.execute(Unknown Source)
	at org.apache.spark.executor.Executor.launchTask(Executor.scala:305)
	at org.apache.spark.scheduler.local.LocalEndpoint.$anonfun$reviveOffers$1(LocalSchedulerBackend.scala:93)
	at org.apache.spark.scheduler.local.LocalEndpoint.$anonfun$reviveOffers$1$adapted(LocalSchedulerBackend.scala:91)
	at scala.collection.Iterator.foreach(Iterator.scala:943)
	at scala.collection.Iterator.foreach$(Iterator.scala:943)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
	at scala.collection.IterableLike.foreach(IterableLike.scala:74)
	at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
	at org.apache.spark.scheduler.local.LocalEndpoint.reviveOffers(LocalSchedulerBackend.scala:91)
	at org.apache.spark.scheduler.local.LocalEndpoint$$anonfun$receive$1.applyOrElse(LocalSchedulerBackend.scala:74)
	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:115)
	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.base/java.lang.Thread.run(Unknown Source)
23/01/31 18:42:58 ERROR Inbox: Ignoring error
java.util.concurrent.RejectedExecutionException: Task org.apache.spark.executor.Executor$TaskRunner@33b615fa rejected from java.util.concurrent.ThreadPoolExecutor@6d007da6[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 204]
	at java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.reject(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.execute(Unknown Source)
	at org.apache.spark.executor.Executor.launchTask(Executor.scala:305)
	at org.apache.spark.scheduler.local.LocalEndpoint.$anonfun$reviveOffers$1(LocalSchedulerBackend.scala:93)
	at org.apache.spark.scheduler.local.LocalEndpoint.$anonfun$reviveOffers$1$adapted(LocalSchedulerBackend.scala:91)
	at scala.collection.Iterator.foreach(Iterator.scala:943)
	at scala.collection.Iterator.foreach$(Iterator.scala:943)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
	at scala.collection.IterableLike.foreach(IterableLike.scala:74)
	at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
	at org.apache.spark.scheduler.local.LocalEndpoint.reviveOffers(LocalSchedulerBackend.scala:91)
	at org.apache.spark.scheduler.local.LocalEndpoint$$anonfun$receive$1.applyOrElse(LocalSchedulerBackend.scala:74)
	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:115)
	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.base/java.lang.Thread.run(Unknown Source)
23/01/31 18:42:58 ERROR WriteToDataSourceV2Exec: Data source write support org.apache.spark.sql.execution.streaming.sources.MicroBatchWrite@1e0deee0 is aborting.
23/01/31 18:42:58 ERROR WriteToDataSourceV2Exec: Data source write support org.apache.spark.sql.execution.streaming.sources.MicroBatchWrite@1e0deee0 aborted.