a guest
Jan 21st, 2018
LogType:stdout
Log Upload Time:Sun Jan 21 12:27:32 +0000 2018
LogLength:206
Log Contents:
Traceback (most recent call last):
  File "script_2018-01-21-12-27-05.py", line 8, in <module>
    import pyRserve
ImportError: No module named pyRserve
End of LogType:stdout
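The stdout log above shows the job dying at `import pyRserve` even though a zip was supplied through `--extra-py-files`. A common cause (an assumption here, not something the log proves) is that the package directory is not at the zip root, so Python's zipimport cannot see it. A minimal layout check, using in-memory stand-in archives rather than the real pyRserve.zip:

```python
import io
import zipfile

def importable_packages(zip_bytes):
    """Return top-level package names that zipimport would find at the zip root."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return sorted(
            name.split("/")[0]
            for name in zf.namelist()
            if name.endswith("/__init__.py") and name.count("/") == 1
        )

# Correct layout: pyRserve/__init__.py sits directly at the zip root.
good = io.BytesIO()
with zipfile.ZipFile(good, "w") as zf:
    zf.writestr("pyRserve/__init__.py", "")

# Broken layout: the package is nested under a versioned directory,
# which produces exactly this kind of ImportError at job start.
bad = io.BytesIO()
with zipfile.ZipFile(bad, "w") as zf:
    zf.writestr("pyRserve-0.9/pyRserve/__init__.py", "")

print(importable_packages(good.getvalue()))  # ['pyRserve']
print(importable_packages(bad.getvalue()))   # []
```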
$ aws glue get-job --job-name test_trunc
{
    "Job": {
        "Name": "test_trunc",
        "Role": "arn:aws:iam::#CLIPPED#:role/AWSGlueServiceRoleDefault",
        "CreatedOn": 1516192543.117,
        "LastModifiedOn": 1516537317.889,
        "ExecutionProperty": {
            "MaxConcurrentRuns": 1
        },
        "Command": {
            "Name": "glueetl",
            "ScriptLocation": "s3://#CLIPPED#/gluescripts/test_trunc"
        },
        "DefaultArguments": {
            "--TempDir": "s3://#CLIPPED#/jobs/test_trunc/scripts",
            "--extra-py-files": "s3://#CLIPPED#/jobs/test_trunc/python-libs/pyRserve.zip",
            "--job-bookmark-option": "job-bookmark-disable",
            "--job-language": "python"
        },
        "Connections": {
            "Connections": [
                "redshift"
            ]
        },
        "MaxRetries": 0,
        "AllocatedCapacity": 10
    }
}
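The `DefaultArguments` above confirm the job passes `pyRserve.zip` through `--extra-py-files`, which Glue forwards to Spark so the zip lands on the Python path of the driver and executors. The underlying mechanism is ordinary zipimport: any zip on `sys.path` whose root contains a package directory is importable. A small self-contained demonstration (the package name `mypkg` and its contents are invented stand-ins, not part of the original job):

```python
import os
import sys
import tempfile
import zipfile

# Build a zip whose root contains a package directory -- the layout an
# --extra-py-files dependency needs in order to be importable.
tmpdir = tempfile.mkdtemp()
zip_path = os.path.join(tmpdir, "libs.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("mypkg/__init__.py", "VALUE = 42\n")

# Putting the zip on sys.path is effectively what Spark does for each
# --py-files / --extra-py-files entry before the user script runs.
sys.path.insert(0, zip_path)
import mypkg

print(mypkg.VALUE)  # 42
```

If the import here fails the way `import pyRserve` fails in the log, the archive layout, not the S3 upload, is the first thing to suspect.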
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
import pprint
import pyRserve
Jan 21, 2018, 9:01:40 PM Pending execution
--conf spark.hadoop.yarn.resourcemanager.connect.max-wait.ms=60000 --conf spark.hadoop.fs.defaultFS=hdfs://ip-10-0-1-48.us-west-2.compute.internal:8020 --conf spark.hadoop.yarn.resourcemanager.address=ip-10-0-1-48.us-west-2.compute.internal:8032 --conf spark.dynamicAllocation.enabled=true --conf spark.shuffle.service.enabled=true --conf spark.dynamicAllocation.minExecutors=1 --conf spark.dynamicAllocation.maxExecutors=18 --conf spark.executor.memory=5g --conf spark.executor.cores=4 --JOB_ID j_e5de928b745ac948b9e0299b9c1983bea68d9383a53d12d9c9738efd77128f04 --extra-py-files s3://###CLIPPED###/jobs/test_trunc/python-libs/pyRserve.zip --JOB_RUN_ID jr_13d318641534349c645a5622c62d56d0804929911f5c326a19cc872be957a9a4 --scriptLocation s3://###CLIPPED###/gluescripts/test_trunc --job-bookmark-option job-bookmark-disable --job-language python --TempDir s3://###CLIPPED###/jobs/test_trunc/scripts --JOB_NAME test_trunc
YARN_RM_DNS=ip-10-0-1-48.us-west-2.compute.internal
Detected region us-west-2
JOB_NAME = test_trunc
Specifying us-west-2 while copying script.
Completed 229 Bytes/229 Bytes (3.7 KiB/s) with 1 file(s) remaining
download: s3://###CLIPPED###/gluescripts/test_trunc to ./script_2018-01-21-13-02-26.py
SCRIPT_URL = /tmp/g-06dbb7fa17bfe5fff90406afac80ef44367183a9-3027215820432691858/script_2018-01-21-13-02-26.py
------------------EXECUTING SCRIPT------------------
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
import pprint
import pyRserve
----------------------------------------------------
/usr/lib/spark/bin/spark-submit --conf spark.hadoop.yarn.resourcemanager.connect.max-wait.ms=60000 --conf spark.hadoop.fs.defaultFS=hdfs://ip-10-0-1-48.us-west-2.compute.internal:8020 --conf spark.hadoop.yarn.resourcemanager.address=ip-10-0-1-48.us-west-2.compute.internal:8032 --conf spark.dynamicAllocation.enabled=true --conf spark.shuffle.service.enabled=true --conf spark.dynamicAllocation.minExecutors=1 --conf spark.dynamicAllocation.maxExecutors=18 --conf spark.executor.memory=5g --conf spark.executor.cores=4 --name tape --master yarn --deploy-mode cluster --jars /opt/amazon/superjar/glue-assembly.jar --files /tmp/glue-default.conf,/tmp/glue-override.conf,/opt/amazon/certs/InternalAndExternalAndAWSTrustStore.jks,/opt/amazon/certs/rds-combined-ca-bundle.pem,/tmp/g-06dbb7fa17bfe5fff90406afac80ef44367183a9-3027215820432691858/script_2018-01-21-13-02-26.py --py-files /tmp/PyGlue.zip,s3://###CLIPPED###/jobs/test_trunc/python-libs/pyRserve.zip --driver-memory 5g --executor-memory 5g /tmp/runscript.py script_2018-01-21-13-02-26.py --JOB_NAME test_trunc --JOB_ID j_e5de928b745ac948b9e0299b9c1983bea68d9383a53d12d9c9738efd77128f04 --JOB_RUN_ID jr_13d318641534349c645a5622c62d56d0804929911f5c326a19cc872be957a9a4 --job-bookmark-option job-bookmark-disable --TempDir s3://###CLIPPED###/jobs/test_trunc/scripts
18/01/21 13:02:28 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/01/21 13:02:29 INFO RMProxy: Connecting to ResourceManager at ip-10-0-1-48.us-west-2.compute.internal/10.0.1.48:8032
18/01/21 13:02:29 INFO Client: Requesting a new application from cluster with 10 NodeManagers
18/01/21 13:02:29 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (12288 MB per container)
18/01/21 13:02:29 INFO Client: Will allocate AM container, with 5632 MB memory including 512 MB overhead
18/01/21 13:02:29 INFO Client: Setting up container launch context for our AM
18/01/21 13:02:29 INFO Client: Setting up the launch environment for our AM container
18/01/21 13:02:29 DEBUG Client: Using the default YARN application classpath: $HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
18/01/21 13:02:29 DEBUG Client: Using the default MR application classpath: $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
18/01/21 13:02:29 INFO Client: Preparing resources for our AM container
18/01/21 13:02:30 DEBUG Client:
18/01/21 13:02:30 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
18/01/21 13:02:33 INFO Client: Uploading resource file:/tmp/spark-e753e2e5-a1f0-4c6f-a6df-52614abda117/__spark_libs__4940228112749105289.zip -> hdfs://ip-10-0-1-48.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1516530473599_0010/__spark_libs__4940228112749105289.zip
18/01/21 13:02:35 INFO Client: Uploading resource file:/opt/amazon/superjar/glue-assembly.jar -> hdfs://ip-10-0-1-48.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1516530473599_0010/glue-assembly.jar
18/01/21 13:02:39 INFO Client: Uploading resource file:/tmp/glue-default.conf -> hdfs://ip-10-0-1-48.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1516530473599_0010/glue-default.conf
18/01/21 13:02:39 INFO Client: Uploading resource file:/tmp/glue-override.conf -> hdfs://ip-10-0-1-48.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1516530473599_0010/glue-override.conf
18/01/21 13:02:39 INFO Client: Uploading resource file:/opt/amazon/certs/InternalAndExternalAndAWSTrustStore.jks -> hdfs://ip-10-0-1-48.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1516530473599_0010/InternalAndExternalAndAWSTrustStore.jks
18/01/21 13:02:39 INFO Client: Uploading resource file:/opt/amazon/certs/rds-combined-ca-bundle.pem -> hdfs://ip-10-0-1-48.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1516530473599_0010/rds-combined-ca-bundle.pem
18/01/21 13:02:39 INFO Client: Uploading resource file:/tmp/g-06dbb7fa17bfe5fff90406afac80ef44367183a9-3027215820432691858/script_2018-01-21-13-02-26.py -> hdfs://ip-10-0-1-48.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1516530473599_0010/script_2018-01-21-13-02-26.py
18/01/21 13:02:39 INFO Client: Uploading resource file:/tmp/runscript.py -> hdfs://ip-10-0-1-48.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1516530473599_0010/runscript.py
18/01/21 13:02:39 INFO Client: Uploading resource file:/usr/lib/spark/python/lib/pyspark.zip -> hdfs://ip-10-0-1-48.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1516530473599_0010/pyspark.zip
18/01/21 13:02:39 INFO Client: Uploading resource file:/usr/lib/spark/python/lib/py4j-0.10.4-src.zip -> hdfs://ip-10-0-1-48.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1516530473599_0010/py4j-0.10.4-src.zip
18/01/21 13:02:39 INFO Client: Uploading resource file:/tmp/PyGlue.zip -> hdfs://ip-10-0-1-48.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1516530473599_0010/PyGlue.zip
18/01/21 13:02:40 INFO PlatformInfo: Unable to read clusterId from http://localhost:8321/configuration, trying extra instance data file: /var/lib/instance-controller/extraInstanceData.json
18/01/21 13:02:40 INFO PlatformInfo: Unable to read clusterId from /var/lib/instance-controller/extraInstanceData.json, trying EMR job-flow data file: /var/lib/info/job-flow.json
18/01/21 13:02:40 INFO PlatformInfo: Unable to read clusterId from /var/lib/info/job-flow.json, out of places to look
18/01/21 13:02:41 INFO Client: Uploading resource s3://###CLIPPED###/jobs/test_trunc/python-libs/pyRserve.zip -> hdfs://ip-10-0-1-48.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1516530473599_0010/pyRserve.zip
18/01/21 13:02:41 INFO S3NativeFileSystem: Opening 's3://###CLIPPED###/jobs/test_trunc/python-libs/pyRserve.zip' for reading
18/01/21 13:02:41 INFO Client: Uploading resource file:/tmp/spark-e753e2e5-a1f0-4c6f-a6df-52614abda117/__spark_conf__1785103944074202664.zip -> hdfs://ip-10-0-1-48.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1516530473599_0010/__spark_conf__.zip
18/01/21 13:02:41 DEBUG Client: ===============================================================================
18/01/21 13:02:41 DEBUG Client: YARN AM launch context:
18/01/21 13:02:41 DEBUG Client: user class: org.apache.spark.deploy.PythonRunner
18/01/21 13:02:41 DEBUG Client: env:
18/01/21 13:02:41 DEBUG Client: CLASSPATH -> ./*:/usr/lib/hadoop-lzo/lib/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*<CPS>{{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
18/01/21 13:02:41 DEBUG Client: SPARK_YARN_STAGING_DIR -> hdfs://ip-10-0-1-48.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1516530473599_0010
18/01/21 13:02:41 DEBUG Client: SPARK_USER -> root
18/01/21 13:02:41 DEBUG Client: SPARK_YARN_MODE -> true
18/01/21 13:02:41 DEBUG Client: PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.10.4-src.zip<CPS>{{PWD}}/PyGlue.zip<CPS>{{PWD}}/pyRserve.zip
18/01/21 13:02:41 DEBUG Client: resources:
18/01/21 13:02:41 DEBUG Client: py4j-0.10.4-src.zip -> resource { scheme: "hdfs" host: "ip-10-0-1-48.us-west-2.compute.internal" port: 8020 file: "/user/root/.sparkStaging/application_1516530473599_0010/py4j-0.10.4-src.zip" } size: 74096 timestamp: 1516539759729 type: FILE visibility: PRIVATE
18/01/21 13:02:41 DEBUG Client: glue-assembly.jar -> resource { scheme: "hdfs" host: "ip-10-0-1-48.us-west-2.compute.internal" port: 8020 file: "/user/root/.sparkStaging/application_1516530473599_0010/glue-assembly.jar" } size: 377465495 timestamp: 1516539759567 type: FILE visibility: PRIVATE
18/01/21 13:02:41 DEBUG Client: pyspark.zip -> resource { scheme: "hdfs" host: "ip-10-0-1-48.us-west-2.compute.internal" port: 8020 file: "/user/root/.sparkStaging/application_1516530473599_0010/pyspark.zip" } size: 452353 timestamp: 1516539759710 type: FILE visibility: PRIVATE
18/01/21 13:02:41 DEBUG Client: __spark_libs__ -> resource { scheme: "hdfs" host: "ip-10-0-1-48.us-west-2.compute.internal" port: 8020 file: "/user/root/.sparkStaging/application_1516530473599_0010/__spark_libs__4940228112749105289.zip" } size: 196716025 timestamp: 1516539755676 type: ARCHIVE visibility: PRIVATE
18/01/21 13:02:41 DEBUG Client: rds-combined-ca-bundle.pem -> resource { scheme: "hdfs" host: "ip-10-0-1-48.us-west-2.compute.internal" port: 8020 file: "/user/root/.sparkStaging/application_1516530473599_0010/rds-combined-ca-bundle.pem" } size: 21672 timestamp: 1516539759653 type: FILE visibility: PRIVATE
18/01/21 13:02:41 DEBUG Client: glue-default.conf -> resource { scheme: "hdfs" host: "ip-10-0-1-48.us-west-2.compute.internal" port: 8020 file: "/user/root/.sparkStaging/application_1516530473599_0010/glue-default.conf" } size: 224 timestamp: 1516539759588 type: FILE visibility: PRIVATE
18/01/21 13:02:41 DEBUG Client: runscript.py -> resource { scheme: "hdfs" host: "ip-10-0-1-48.us-west-2.compute.internal" port: 8020 file: "/user/root/.sparkStaging/application_1516530473599_0010/runscript.py" } size: 2370 timestamp: 1516539759689 type: FILE visibility: PRIVATE
18/01/21 13:02:41 DEBUG Client: glue-override.conf -> resource { scheme: "hdfs" host: "ip-10-0-1-48.us-west-2.compute.internal" port: 8020 file: "/user/root/.sparkStaging/application_1516530473599_0010/glue-override.conf" } size: 271 timestamp: 1516539759609 type: FILE visibility: PRIVATE
18/01/21 13:02:41 DEBUG Client: InternalAndExternalAndAWSTrustStore.jks -> resource { scheme: "hdfs" host: "ip-10-0-1-48.us-west-2.compute.internal" port: 8020 file: "/user/root/.sparkStaging/application_1516530473599_0010/InternalAndExternalAndAWSTrustStore.jks" } size: 118806 timestamp: 1516539759630 type: FILE visibility: PRIVATE
18/01/21 13:02:41 DEBUG Client: PyGlue.zip -> resource { scheme: "hdfs" host: "ip-10-0-1-48.us-west-2.compute.internal" port: 8020 file: "/user/root/.sparkStaging/application_1516530473599_0010/PyGlue.zip" } size: 99983 timestamp: 1516539759757 type: FILE visibility: PRIVATE
18/01/21 13:02:41 DEBUG Client: script_2018-01-21-13-02-26.py -> resource { scheme: "hdfs" host: "ip-10-0-1-48.us-west-2.compute.internal" port: 8020 file: "/user/root/.sparkStaging/application_1516530473599_0010/script_2018-01-21-13-02-26.py" } size: 229 timestamp: 1516539759671 type: FILE visibility: PRIVATE
18/01/21 13:02:41 DEBUG Client: __spark_conf__ -> resource { scheme: "hdfs" host: "ip-10-0-1-48.us-west-2.compute.internal" port: 8020 file: "/user/root/.sparkStaging/application_1516530473599_0010/__spark_conf__.zip" } size: 7501 timestamp: 1516539761171 type: ARCHIVE visibility: PRIVATE
18/01/21 13:02:41 DEBUG Client: pyRserve.zip -> resource { scheme: "hdfs" host: "ip-10-0-1-48.us-west-2.compute.internal" port: 8020 file: "/user/root/.sparkStaging/application_1516530473599_0010/pyRserve.zip" } size: 59979 timestamp: 1516539761107 type: FILE visibility: PRIVATE
18/01/21 13:02:41 DEBUG Client: command:
18/01/21 13:02:41 DEBUG Client: LD_LIBRARY_PATH="/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:$LD_LIBRARY_PATH" {{JAVA_HOME}}/bin/java -server -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-XX:+UseConcMarkSweepGC' '-XX:CMSInitiatingOccupancyFraction=70' '-XX:MaxHeapFreeRatio=70' '-XX:+CMSClassUnloadingEnabled' '-XX:OnOutOfMemoryError=kill -9 %p' '-Djavax.net.ssl.trustStore=InternalAndExternalAndAWSTrustStore.jks' '-Djavax.net.ssl.trustStoreType=JKS' '-Djavax.net.ssl.trustStorePassword=amazon' '-DRDS_ROOT_CERT_PATH=rds-combined-ca-bundle.pem' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.deploy.PythonRunner' --primary-py-file runscript.py --arg 'script_2018-01-21-13-02-26.py' --arg '--JOB_NAME' --arg 'test_trunc' --arg '--JOB_ID' --arg 'j_e5de928b745ac948b9e0299b9c1983bea68d9383a53d12d9c9738efd77128f04' --arg '--JOB_RUN_ID' --arg 'jr_13d318641534349c645a5622c62d56d0804929911f5c326a19cc872be957a9a4' --arg '--job-bookmark-option' --arg 'job-bookmark-disable' --arg '--TempDir' --arg 's3://###CLIPPED###/jobs/test_trunc/scripts' --properties-file {{PWD}}/__spark_conf__/__spark_conf__.properties 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
18/01/21 13:02:41 DEBUG Client: ===============================================================================
18/01/21 13:02:41 INFO SecurityManager: Changing view acls to: root
18/01/21 13:02:41 INFO SecurityManager: Changing modify acls to: root
18/01/21 13:02:41 INFO SecurityManager: Changing view acls groups to:
18/01/21 13:02:41 INFO SecurityManager: Changing modify acls groups to:
18/01/21 13:02:41 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
18/01/21 13:02:41 DEBUG Client: spark.yarn.maxAppAttempts is not set. Cluster's default value will be used.
18/01/21 13:02:41 INFO Client: Submitting application application_1516530473599_0010 to ResourceManager
18/01/21 13:02:41 INFO YarnClientImpl: Submitted application application_1516530473599_0010
18/01/21 13:02:42 INFO Client: Application report for application_1516530473599_0010 (state: ACCEPTED)
applicationid is application_1516530473599_0010, yarnRMDNS is ip-10-0-1-48.us-west-2.compute.internal
Application info reporting is enabled.
----------Recording application Id and Yarn RM DNS for cancellation-----------------
18/01/21 13:02:50 DEBUG Client:
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: default
	 start time: 1516539761299
	 final status: UNDEFINED
	 tracking URL: http://ip-10-0-1-48.us-west-2.compute.internal:20888/proxy/application_1516530473599_0010/
	 user: root
18/01/21 13:02:51 INFO Client: Application report for application_1516530473599_0010 (state: FAILED)
18/01/21 13:02:51 DEBUG Client:
	 client token: N/A
	 diagnostics: Application application_1516530473599_0010 failed 2 times due to AM Container for appattempt_1516530473599_0010_000002 exited with exitCode: 1
For more detailed output, check application tracking page: http://169.254.76.1:8088/cluster/app/application_1516530473599_0010 Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1516530473599_0010_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
	at org.apache.hadoop.util.Shell.run(Shell.java:479)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: default
	 start time: 1516539761299
	 final status: FAILED
	 tracking URL: http://169.254.76.1:8088/cluster/app/application_1516530473599_0010
	 user: root
Exception in thread "main" org.apache.spark.SparkException: Application application_1516530473599_0010 finished with failed status
	at org.apache.spark.deploy.yarn.Client.run(Client.scala:1167)
	at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1213)
	at org.apache.spark.deploy.yarn.Client.main(Client.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
18/01/21 13:02:51 INFO ShutdownHookManager: Shutdown hook called
18/01/21 13:02:51 INFO ShutdownHookManager: Deleting directory /tmp/spark-e753e2e5-a1f0-4c6f-a6df-52614abda117
Container: container_1516530473599_0010_02_000001 on ip-10-0-1-230.us-west-2.compute.internal_8041
====================================================================================================
LogType:stderr
Log Upload Time:Sun Jan 21 13:02:52 +0000 2018
LogLength:3934
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/mnt/yarn/usercache/root/filecache/22/glue-assembly.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/mnt/yarn/usercache/root/filecache/30/__spark_libs__4940228112749105289.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/01/21 13:02:48 INFO SignalUtils: Registered signal handler for TERM
18/01/21 13:02:48 INFO SignalUtils: Registered signal handler for HUP
18/01/21 13:02:48 INFO SignalUtils: Registered signal handler for INT
18/01/21 13:02:49 INFO ApplicationMaster: Preparing Local resources
18/01/21 13:02:49 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1516530473599_0010_000002
18/01/21 13:02:49 INFO SecurityManager: Changing view acls to: yarn,root
18/01/21 13:02:49 INFO SecurityManager: Changing modify acls to: yarn,root
18/01/21 13:02:49 INFO SecurityManager: Changing view acls groups to:
18/01/21 13:02:49 INFO SecurityManager: Changing modify acls groups to:
18/01/21 13:02:49 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, root); groups with view permissions: Set(); users with modify permissions: Set(yarn, root); groups with modify permissions: Set()
18/01/21 13:02:49 INFO ApplicationMaster: Starting the user application in a separate Thread
18/01/21 13:02:49 INFO ApplicationMaster: Waiting for spark context initialization...
18/01/21 13:02:50 ERROR ApplicationMaster: User application exited with status 1
18/01/21 13:02:50 INFO ApplicationMaster: Final app status: FAILED, exitCode: 1, (reason: User application exited with status 1)
18/01/21 13:02:50 ERROR ApplicationMaster: Uncaught exception: org.apache.spark.SparkException: Exception thrown in awaitResult:
	at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:194)
	at org.apache.spark.deploy.yarn.ApplicationMaster.runDriver(ApplicationMaster.scala:401)
	at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:254)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$main$1.apply$mcV$sp(ApplicationMaster.scala:766)
	at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:67)
	at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:66)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:66)
	at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:764)
	at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
Caused by: org.apache.spark.SparkUserAppException: User application exited with 1
	at org.apache.spark.deploy.PythonRunner$.main(PythonRunner.scala:96)
	at org.apache.spark.deploy.PythonRunner.main(PythonRunner.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:637)
18/01/21 13:02:50 INFO ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: User application exited with status 1)
18/01/21 13:02:50 INFO ApplicationMaster: Deleting staging directory hdfs://ip-10-0-1-48.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1516530473599_0010
18/01/21 13:02:50 INFO ShutdownHookManager: Shutdown hook called
End of LogType:stderr
LogType:stdout
Log Upload Time:Sun Jan 21 13:02:52 +0000 2018
LogLength:206
Log Contents:
Traceback (most recent call last):
  File "script_2018-01-21-13-02-26.py", line 8, in <module>
    import pyRserve
ImportError: No module named pyRserve
End of LogType:stdout
Container: container_1516530473599_0010_01_000001 on ip-10-0-1-56.us-west-2.compute.internal_8041
===================================================================================================
LogType:stderr
Log Upload Time:Sun Jan 21 13:02:52 +0000 2018
LogLength:3618
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/mnt/yarn/usercache/root/filecache/62/glue-assembly.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/mnt/yarn/usercache/root/filecache/70/__spark_libs__4940228112749105289.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/01/21 13:02:43 INFO SignalUtils: Registered signal handler for TERM
18/01/21 13:02:43 INFO SignalUtils: Registered signal handler for HUP
18/01/21 13:02:43 INFO SignalUtils: Registered signal handler for INT
18/01/21 13:02:44 INFO ApplicationMaster: Preparing Local resources
18/01/21 13:02:44 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1516530473599_0010_000001
18/01/21 13:02:44 INFO SecurityManager: Changing view acls to: yarn,root
18/01/21 13:02:44 INFO SecurityManager: Changing modify acls to: yarn,root
18/01/21 13:02:44 INFO SecurityManager: Changing view acls groups to:
18/01/21 13:02:44 INFO SecurityManager: Changing modify acls groups to:
18/01/21 13:02:44 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, root); groups with view permissions: Set(); users with modify permissions: Set(yarn, root); groups with modify permissions: Set()
18/01/21 13:02:44 INFO ApplicationMaster: Starting the user application in a separate Thread
18/01/21 13:02:44 INFO ApplicationMaster: Waiting for spark context initialization...
18/01/21 13:02:45 ERROR ApplicationMaster: User application exited with status 1
18/01/21 13:02:45 INFO ApplicationMaster: Final app status: FAILED, exitCode: 1, (reason: User application exited with status 1)
18/01/21 13:02:45 ERROR ApplicationMaster: Uncaught exception: org.apache.spark.SparkException: Exception thrown in awaitResult:
	at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:194)
	at org.apache.spark.deploy.yarn.ApplicationMaster.runDriver(ApplicationMaster.scala:401)
	at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:254)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$main$1.apply$mcV$sp(ApplicationMaster.scala:766)
	at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:67)
	at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:66)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:66)
	at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:764)
	at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
Caused by: org.apache.spark.SparkUserAppException: User application exited with 1
	at org.apache.spark.deploy.PythonRunner$.main(PythonRunner.scala:96)
	at org.apache.spark.deploy.PythonRunner.main(PythonRunner.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:637)
18/01/21 13:02:45 INFO ShutdownHookManager: Shutdown hook called
End of LogType:stderr
LogType:stdout
Log Upload Time:Sun Jan 21 13:02:52 +0000 2018
LogLength:206
Log Contents:
Traceback (most recent call last):
  File "script_2018-01-21-13-02-26.py", line 8, in <module>
    import pyRserve
ImportError: No module named pyRserve
End of LogType:stdout
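Both container attempts end on the same `ImportError`, so the failure is deterministic rather than environmental. Assuming the cause is archive layout (a guess consistent with, but not proven by, this log), the remedy would be to rebuild pyRserve.zip so the package directory sits at the archive root and re-upload it to the clipped `--extra-py-files` S3 location. A hedged repackaging sketch; the `staging/` tree here is a stand-in for what `pip install pyRserve -t staging` would produce:

```python
import os
import zipfile

def package_to_zip(pkg_dir, zip_path):
    """Zip a package directory so it sits at the archive root --
    the layout an --extra-py-files dependency needs."""
    parent = os.path.dirname(os.path.abspath(pkg_dir))
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(pkg_dir):
            for fname in files:
                full = os.path.join(root, fname)
                # Archive names are relative to the package's parent,
                # so "pyRserve/__init__.py" ends up at the zip root.
                zf.write(full, os.path.relpath(full, parent))

# Stand-in package tree (a real run would zip the pip-installed library).
os.makedirs("staging/pyRserve", exist_ok=True)
open("staging/pyRserve/__init__.py", "w").close()

package_to_zip("staging/pyRserve", "pyRserve.zip")
print(zipfile.ZipFile("pyRserve.zip").namelist())  # ['pyRserve/__init__.py']
```

After re-uploading the rebuilt zip to the job's `--extra-py-files` S3 path, the `import pyRserve` at line 8 of the script should resolve via zipimport.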