laris_fdz
Untitled
Jan 31st, 2023
WARN Utils: Your hostname, fhm1g612sp8ubi9h9lgd resolves to a loopback address: 127.0.1.1; using 10.128.0.7 instead (on interface eth0)
23/01/31 18:42:37 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
:: loading settings :: url = jar:file:/opt/spark/jars/ivy-2.5.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
Ivy Default Cache set to: /root/.ivy2/cache
The jars for the packages stored in: /root/.ivy2/jars
org.apache.spark#spark-sql-kafka-0-10_2.12 added as a dependency
org.postgresql#postgresql added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-59e260ee-8ee9-4cf6-96be-52538701cc7c;1.0
	confs: [default]
	found org.apache.spark#spark-sql-kafka-0-10_2.12;3.3.0 in central
	found org.apache.spark#spark-token-provider-kafka-0-10_2.12;3.3.0 in central
	found org.apache.kafka#kafka-clients;2.8.1 in central
	found org.lz4#lz4-java;1.8.0 in central
	found org.xerial.snappy#snappy-java;1.1.8.4 in central
	found org.slf4j#slf4j-api;1.7.32 in central
	found org.apache.hadoop#hadoop-client-runtime;3.3.2 in central
	found org.spark-project.spark#unused;1.0.0 in central
	found org.apache.hadoop#hadoop-client-api;3.3.2 in central
	found commons-logging#commons-logging;1.1.3 in central
	found com.google.code.findbugs#jsr305;3.0.0 in central
	found org.apache.commons#commons-pool2;2.11.1 in central
	found org.postgresql#postgresql;42.4.0 in central
	found org.checkerframework#checker-qual;3.5.0 in central
:: resolution report :: resolve 1099ms :: artifacts dl 31ms
	:: modules in use:
	com.google.code.findbugs#jsr305;3.0.0 from central in [default]
	commons-logging#commons-logging;1.1.3 from central in [default]
	org.apache.commons#commons-pool2;2.11.1 from central in [default]
	org.apache.hadoop#hadoop-client-api;3.3.2 from central in [default]
	org.apache.hadoop#hadoop-client-runtime;3.3.2 from central in [default]
	org.apache.kafka#kafka-clients;2.8.1 from central in [default]
	org.apache.spark#spark-sql-kafka-0-10_2.12;3.3.0 from central in [default]
	org.apache.spark#spark-token-provider-kafka-0-10_2.12;3.3.0 from central in [default]
	org.checkerframework#checker-qual;3.5.0 from central in [default]
	org.lz4#lz4-java;1.8.0 from central in [default]
	org.postgresql#postgresql;42.4.0 from central in [default]
	org.slf4j#slf4j-api;1.7.32 from central in [default]
	org.spark-project.spark#unused;1.0.0 from central in [default]
	org.xerial.snappy#snappy-java;1.1.8.4 from central in [default]
	---------------------------------------------------------------------
	|                  |            modules            ||   artifacts   |
	|       conf       | number| search|dwnlded|evicted|| number|dwnlded|
	---------------------------------------------------------------------
	|      default     |   14  |   0   |   0   |   0   ||   14  |   0   |
	---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent-59e260ee-8ee9-4cf6-96be-52538701cc7c
	confs: [default]
	0 artifacts copied, 14 already retrieved (0kB/14ms)
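The dependency resolution above is consistent with a `spark-submit --packages` launch pulling the Kafka connector and the Postgres JDBC driver from Maven Central. A hypothetical invocation (the script name is an assumption, not taken from the log) might look like:

```shell
# Sketch of the launch command implied by the Ivy output above;
# "streaming_job.py" is a placeholder for the actual application file.
spark-submit \
  --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.3.0,org.postgresql:postgresql:42.4.0 \
  streaming_job.py
```

The package coordinates match the versions Ivy reports as resolved (`3.3.0` and `42.4.0`).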
23/01/31 18:42:39 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
SPARK_INIT DONE
INCOMING MESSAGE SCHEMA: StructType([StructField('restaurant_id', StringType(), True), StructField('adv_campaign_id', StringType(), True), StructField('adv_campaign_content', StringType(), True), StructField('adv_campaign_owner', StringType(), True), StructField('adv_campaign_owner_contact', StringType(), True), StructField('adv_campaign_datetime_start', LongType(), True), StructField('adv_campaign_datetime_end', LongType(), True), StructField('datetime_created', LongType(), True)])
23/01/31 18:42:48 WARN ResolveWriteToStream: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
root@fhm1g612sp8ubi9h9lgd:/lessons# 23/01/31 18:42:58 ERROR Inbox: Ignoring error
java.util.concurrent.RejectedExecutionException: Task org.apache.spark.executor.Executor$TaskRunner@5036216 rejected from java.util.concurrent.ThreadPoolExecutor@6d007da6[Shutting down, pool size = 2, active threads = 2, queued tasks = 0, completed tasks = 203]
	at java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.reject(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.execute(Unknown Source)
	at org.apache.spark.executor.Executor.launchTask(Executor.scala:305)
	at org.apache.spark.scheduler.local.LocalEndpoint.$anonfun$reviveOffers$1(LocalSchedulerBackend.scala:93)
	at org.apache.spark.scheduler.local.LocalEndpoint.$anonfun$reviveOffers$1$adapted(LocalSchedulerBackend.scala:91)
	at scala.collection.Iterator.foreach(Iterator.scala:943)
	at scala.collection.Iterator.foreach$(Iterator.scala:943)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
	at scala.collection.IterableLike.foreach(IterableLike.scala:74)
	at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
	at org.apache.spark.scheduler.local.LocalEndpoint.reviveOffers(LocalSchedulerBackend.scala:91)
	at org.apache.spark.scheduler.local.LocalEndpoint$$anonfun$receive$1.applyOrElse(LocalSchedulerBackend.scala:74)
	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:115)
	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.base/java.lang.Thread.run(Unknown Source)
23/01/31 18:42:58 ERROR Inbox: Ignoring error
java.util.concurrent.RejectedExecutionException: Task org.apache.spark.executor.Executor$TaskRunner@33b615fa rejected from java.util.concurrent.ThreadPoolExecutor@6d007da6[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 204]
	at java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.reject(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.execute(Unknown Source)
	at org.apache.spark.executor.Executor.launchTask(Executor.scala:305)
	at org.apache.spark.scheduler.local.LocalEndpoint.$anonfun$reviveOffers$1(LocalSchedulerBackend.scala:93)
	at org.apache.spark.scheduler.local.LocalEndpoint.$anonfun$reviveOffers$1$adapted(LocalSchedulerBackend.scala:91)
	at scala.collection.Iterator.foreach(Iterator.scala:943)
	at scala.collection.Iterator.foreach$(Iterator.scala:943)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
	at scala.collection.IterableLike.foreach(IterableLike.scala:74)
	at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
	at org.apache.spark.scheduler.local.LocalEndpoint.reviveOffers(LocalSchedulerBackend.scala:91)
	at org.apache.spark.scheduler.local.LocalEndpoint$$anonfun$receive$1.applyOrElse(LocalSchedulerBackend.scala:74)
	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:115)
	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.base/java.lang.Thread.run(Unknown Source)
23/01/31 18:42:58 ERROR WriteToDataSourceV2Exec: Data source write support org.apache.spark.sql.execution.streaming.sources.MicroBatchWrite@1e0deee0 is aborting.
23/01/31 18:42:58 ERROR WriteToDataSourceV2Exec: Data source write support org.apache.spark.sql.execution.streaming.sources.MicroBatchWrite@1e0deee0 aborted.