- /usr/local/hadoop/libexec/hadoop-functions.sh: line 2399: HADOOP_ORG.APACHE.SQOOP.SQOOP_USER: invalid variable name
- /usr/local/hadoop/libexec/hadoop-functions.sh: line 2364: HADOOP_ORG.APACHE.SQOOP.SQOOP_USER: invalid variable name
- /usr/local/hadoop/libexec/hadoop-functions.sh: line 2459: HADOOP_ORG.APACHE.SQOOP.SQOOP_OPTS: invalid variable name
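Note: the three "invalid variable name" messages above are shell noise from hadoop-functions.sh rather than Sqoop failures. The script appears to build per-subcommand environment variable names from the driver class (org.apache.sqoop.Sqoop), and bash rejects identifiers that contain dots. A likely minimal reproduction, with the name construction assumed rather than quoted from hadoop-functions.sh:

    # Indirect expansion of a name containing dots produces the same bash error.
    name="HADOOP_ORG.APACHE.SQOOP.SQOOP_USER"
    echo "${!name}"
    # -> bash: HADOOP_ORG.APACHE.SQOOP.SQOOP_USER: invalid variable name

The import continues normally after these messages, so they can usually be ignored.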
- 2019-05-07 05:21:13,209 INFO [main] sqoop.Sqoop: Running Sqoop version: 1.4.7
- Enter password:
- 2019-05-07 05:21:18,817 INFO [main] manager.MySQLManager: Preparing to use a MySQL streaming resultset.
- 2019-05-07 05:21:18,860 INFO [main] tool.CodeGenTool: Beginning code generation
- SLF4J: Class path contains multiple SLF4J bindings.
- SLF4J: Found binding in [jar:file:/usr/local/hbase/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
- SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
- SLF4J: Found binding in [jar:file:/usr/local/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
- SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
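The SLF4J multiple-bindings warning is cosmetic here, but if the noise matters, the usual remedy is to leave exactly one slf4j-log4j12 binding on the classpath. A sketch using the paths reported above (an assumption about this particular layout, worth re-testing, not a required step):

    # Keep Hadoop's binding and move HBase's duplicate out of the way.
    mv /usr/local/hbase/lib/slf4j-log4j12-1.7.25.jar \
       /usr/local/hbase/lib/slf4j-log4j12-1.7.25.jar.bak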
- Tue May 07 05:21:19 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
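This Connector/J warning repeats for every JDBC connection the job opens. As the message itself says, it goes away once the SSL choice is made explicit in the connection string, for example (a sketch; host, database, and credentials are placeholders for whatever the original command used):

    # Either disable SSL explicitly, or use useSSL=true with a configured truststore.
    sqoop import \
      --connect "jdbc:mysql://localhost:3306/testdb?useSSL=false" \
      --username root -P \
      --table employee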
- 2019-05-07 05:21:20,539 INFO [main] manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
- 2019-05-07 05:21:20,666 INFO [main] manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
- 2019-05-07 05:21:20,724 INFO [main] orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/local/hadoop
- Note: /tmp/sqoop-root/compile/152d233f78e17bb1d8e0f57be9cc041f/employee.java uses or overrides a deprecated API.
- Note: Recompile with -Xlint:deprecation for details.
- 2019-05-07 05:21:28,574 INFO [main] orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/152d233f78e17bb1d8e0f57be9cc041f/employee.jar
- 2019-05-07 05:21:28,612 WARN [main] manager.MySQLManager: It looks like you are importing from mysql.
- 2019-05-07 05:21:28,612 WARN [main] manager.MySQLManager: This transfer can be faster! Use the --direct
- 2019-05-07 05:21:28,613 WARN [main] manager.MySQLManager: option to exercise a MySQL-specific fast path.
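As the warning suggests, the MySQL fast path is enabled by adding --direct, which streams data through mysqldump instead of generic JDBC fetches. A sketch, assuming mysqldump is installed on the nodes that run the map tasks (a requirement of --direct) and reusing the placeholder connection string from above:

    sqoop import \
      --connect "jdbc:mysql://localhost:3306/testdb?useSSL=false" \
      --username root -P \
      --table employee \
      --direct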
- 2019-05-07 05:21:28,617 INFO [main] manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
- 2019-05-07 05:21:28,631 INFO [main] mapreduce.ImportJobBase: Beginning import of employee
- 2019-05-07 05:21:29,337 INFO [main] Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
- 2019-05-07 05:21:31,595 INFO [main] Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
- 2019-05-07 05:21:32,050 INFO [main] client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
- 2019-05-07 05:21:34,542 WARN [DataStreamer for file /tmp/hadoop-yarn/staging/root/.staging/job_1557223071171_0009/libjars/parquet-avro-1.6.0.jar] hdfs.DFSClient: Caught exception
- java.lang.InterruptedException
- at java.lang.Object.wait(Native Method)
- at java.lang.Thread.join(Thread.java:1252)
- at java.lang.Thread.join(Thread.java:1326)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:716)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:476)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:652)
- 2019-05-07 05:21:34,608 WARN [DataStreamer for file /tmp/hadoop-yarn/staging/root/.staging/job_1557223071171_0009/libjars/libthrift-0.9.3.jar block BP-2014112863-127.0.1.1-1556278796917:blk_1073742257_1433] hdfs.DFSClient: Caught exception
- java.lang.InterruptedException
- at java.lang.Object.wait(Native Method)
- at java.lang.Thread.join(Thread.java:1252)
- at java.lang.Thread.join(Thread.java:1326)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:716)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeInternal(DFSOutputStream.java:684)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:680)
- 2019-05-07 05:21:34,680 WARN [DataStreamer for file /tmp/hadoop-yarn/staging/root/.staging/job_1557223071171_0009/libjars/commons-logging-1.1.1.jar block BP-2014112863-127.0.1.1-1556278796917:blk_1073742258_1434] hdfs.DFSClient: Caught exception
- java.lang.InterruptedException
- at java.lang.Object.wait(Native Method)
- at java.lang.Thread.join(Thread.java:1252)
- at java.lang.Thread.join(Thread.java:1326)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:716)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeInternal(DFSOutputStream.java:684)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:680)
- 2019-05-07 05:21:34,928 WARN [DataStreamer for file /tmp/hadoop-yarn/staging/root/.staging/job_1557223071171_0009/libjars/mysql-connector-java-5.1.47.jar] hdfs.DFSClient: Caught exception
- java.lang.InterruptedException
- at java.lang.Object.wait(Native Method)
- at java.lang.Thread.join(Thread.java:1252)
- at java.lang.Thread.join(Thread.java:1326)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:716)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:476)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:652)
- 2019-05-07 05:21:35,005 WARN [DataStreamer for file /tmp/hadoop-yarn/staging/root/.staging/job_1557223071171_0009/libjars/jackson-databind-2.3.1.jar block BP-2014112863-127.0.1.1-1556278796917:blk_1073742263_1439] hdfs.DFSClient: Caught exception
- java.lang.InterruptedException
- at java.lang.Object.wait(Native Method)
- at java.lang.Thread.join(Thread.java:1252)
- at java.lang.Thread.join(Thread.java:1326)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:716)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeInternal(DFSOutputStream.java:684)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:680)
- 2019-05-07 05:21:35,074 WARN [DataStreamer for file /tmp/hadoop-yarn/staging/root/.staging/job_1557223071171_0009/libjars/parquet-column-1.6.0.jar block BP-2014112863-127.0.1.1-1556278796917:blk_1073742264_1440] hdfs.DFSClient: Caught exception
- java.lang.InterruptedException
- at java.lang.Object.wait(Native Method)
- at java.lang.Thread.join(Thread.java:1252)
- at java.lang.Thread.join(Thread.java:1326)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:716)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeInternal(DFSOutputStream.java:684)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:680)
- 2019-05-07 05:21:35,413 WARN [DataStreamer for file /tmp/hadoop-yarn/staging/root/.staging/job_1557223071171_0009/libjars/kite-data-hive-1.1.0.jar] hdfs.DFSClient: Caught exception
- java.lang.InterruptedException
- at java.lang.Object.wait(Native Method)
- at java.lang.Thread.join(Thread.java:1252)
- at java.lang.Thread.join(Thread.java:1326)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:716)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:476)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:652)
- 2019-05-07 05:21:35,567 WARN [DataStreamer for file /tmp/hadoop-yarn/staging/root/.staging/job_1557223071171_0009/libjars/opencsv-2.3.jar] hdfs.DFSClient: Caught exception
- java.lang.InterruptedException
- at java.lang.Object.wait(Native Method)
- at java.lang.Thread.join(Thread.java:1252)
- at java.lang.Thread.join(Thread.java:1326)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:716)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:476)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:652)
- 2019-05-07 05:21:36,027 WARN [DataStreamer for file /tmp/hadoop-yarn/staging/root/.staging/job_1557223071171_0009/libjars/parquet-encoding-1.6.0.jar] hdfs.DFSClient: Caught exception
- java.lang.InterruptedException
- at java.lang.Object.wait(Native Method)
- at java.lang.Thread.join(Thread.java:1252)
- at java.lang.Thread.join(Thread.java:1326)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:716)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:476)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:652)
- 2019-05-07 05:21:36,267 WARN [DataStreamer for file /tmp/hadoop-yarn/staging/root/.staging/job_1557223071171_0009/libjars/slf4j-api-1.6.1.jar] hdfs.DFSClient: Caught exception
- java.lang.InterruptedException
- at java.lang.Object.wait(Native Method)
- at java.lang.Thread.join(Thread.java:1252)
- at java.lang.Thread.join(Thread.java:1326)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:716)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:476)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:652)
- 2019-05-07 05:21:36,350 WARN [DataStreamer for file /tmp/hadoop-yarn/staging/root/.staging/job_1557223071171_0009/libjars/ant-contrib-1.0b3.jar] hdfs.DFSClient: Caught exception
- java.lang.InterruptedException
- at java.lang.Object.wait(Native Method)
- at java.lang.Thread.join(Thread.java:1252)
- at java.lang.Thread.join(Thread.java:1326)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:716)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:476)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:652)
- 2019-05-07 05:21:36,490 WARN [DataStreamer for file /tmp/hadoop-yarn/staging/root/.staging/job_1557223071171_0009/libjars/snappy-java-1.1.1.6.jar] hdfs.DFSClient: Caught exception
- java.lang.InterruptedException
- at java.lang.Object.wait(Native Method)
- at java.lang.Thread.join(Thread.java:1252)
- at java.lang.Thread.join(Thread.java:1326)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:716)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:476)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:652)
- Tue May 07 05:21:36 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- 2019-05-07 05:21:36,974 INFO [main] db.DBInputFormat: Using read commited transaction isolation
- 2019-05-07 05:21:36,975 INFO [main] db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`id`), MAX(`id`) FROM `employee`
- 2019-05-07 05:21:36,995 INFO [main] db.IntegerSplitter: Split size: 1; Num splits: 4 from: 1201 to: 1205
- 2019-05-07 05:21:37,151 WARN [DataStreamer for file /tmp/hadoop-yarn/staging/root/.staging/job_1557223071171_0009/job.splitmetainfo block BP-2014112863-127.0.1.1-1556278796917:blk_1073742292_1468] hdfs.DFSClient: Caught exception
- java.lang.InterruptedException
- at java.lang.Object.wait(Native Method)
- at java.lang.Thread.join(Thread.java:1252)
- at java.lang.Thread.join(Thread.java:1326)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:716)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeInternal(DFSOutputStream.java:684)
- at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:680)
- 2019-05-07 05:21:37,152 INFO [main] mapreduce.JobSubmitter: number of splits:5
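With ids 1201 to 1205 and only 4 rows actually retrieved (see the counters below), five splits are mostly scheduling overhead. For a table this small, a single mapper is a reasonable tweak (sketch, same placeholder connection string as above):

    # -m / --num-mappers limits the import to one map task.
    sqoop import \
      --connect "jdbc:mysql://localhost:3306/testdb?useSSL=false" \
      --username root -P \
      --table employee \
      -m 1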
- 2019-05-07 05:21:37,450 INFO [main] mapreduce.JobSubmitter: Submitting tokens for job: job_1557223071171_0009
- 2019-05-07 05:21:38,537 INFO [main] impl.YarnClientImpl: Submitted application application_1557223071171_0009
- 2019-05-07 05:21:38,682 INFO [main] mapreduce.Job: The url to track the job: http://ubuntu:8088/proxy/application_1557223071171_0009/
- 2019-05-07 05:21:38,683 INFO [main] mapreduce.Job: Running job: job_1557223071171_0009
- 2019-05-07 05:21:58,166 INFO [main] mapreduce.Job: Job job_1557223071171_0009 running in uber mode : false
- 2019-05-07 05:21:58,169 INFO [main] mapreduce.Job: map 0% reduce 0%
- 2019-05-07 05:22:57,871 INFO [main] mapreduce.Job: map 20% reduce 0%
- 2019-05-07 05:22:58,922 INFO [main] mapreduce.Job: map 40% reduce 0%
- 2019-05-07 05:22:59,930 INFO [main] mapreduce.Job: map 80% reduce 0%
- 2019-05-07 05:23:00,967 INFO [main] mapreduce.Job: map 100% reduce 0%
- 2019-05-07 05:23:01,059 INFO [main] mapreduce.Job: Job job_1557223071171_0009 completed successfully
- 2019-05-07 05:23:01,466 WARN [main] counters.FileSystemCounterGroup: HDFS_BYTES_READ_EC is not a recognized counter.
- 2019-05-07 05:23:01,545 WARN [main] counters.FrameworkCounterGroup: MAP_PHYSICAL_MEMORY_BYTES_MAX is not a recognized counter.
- 2019-05-07 05:23:01,545 WARN [main] counters.FrameworkCounterGroup: MAP_VIRTUAL_MEMORY_BYTES_MAX is not a recognized counter.
- 2019-05-07 05:23:01,635 INFO [main] mapreduce.Job: Counters: [...]
- HDFS: Number of bytes read=521
- HDFS: Number of bytes written=50
- HDFS: Number of read operations=20
- HDFS: Number of large read operations=0
- HDFS: Number of write operations=10
- Job Counters
- Killed map tasks=1
- Launched map tasks=5
- Other local map tasks=5
- Total time spent by all maps in occupied slots (ms)=282829
- Total time spent by all reduces in occupied slots (ms)=0
- Total time spent by all map tasks (ms)=282829
- Total vcore-milliseconds taken by all map tasks=282829
- Total megabyte-milliseconds taken by all map tasks=289616896
- Map-Reduce Framework
- Map input records=4
- Map output records=4
- Input split bytes=521
- Spilled Records=0
- Failed Shuffles=0
- Merged Map outputs=0
- GC time elapsed (ms)=4574
- CPU time spent (ms)=11550
- Physical memory (bytes) snapshot=655110144
- Virtual memory (bytes) snapshot=9562136576
- Total committed heap usage (bytes)=385679360
- File Input Format Counters
- Bytes Read=0
- File Output Format Counters
- Bytes Written=50
- 2019-05-07 05:23:01,644 WARN [main] counters.FileSystemCounterGroup: HDFS_BYTES_READ_EC is not a recognized counter.
- 2019-05-07 05:23:01,645 WARN [main] counters.FrameworkCounterGroup: MAP_PHYSICAL_MEMORY_BYTES_MAX is not a recognized counter.
- 2019-05-07 05:23:01,645 WARN [main] counters.FrameworkCounterGroup: MAP_VIRTUAL_MEMORY_BYTES_MAX is not a recognized counter.
- 2019-05-07 05:23:01,646 INFO [main] mapreduce.ImportJobBase: Transferred 50 bytes in 90.0107 seconds (0.5555 bytes/sec)
- 2019-05-07 05:23:01,688 WARN [main] counters.FileSystemCounterGroup: HDFS_BYTES_READ_EC is not a recognized counter.
- 2019-05-07 05:23:01,690 WARN [main] counters.FrameworkCounterGroup: MAP_PHYSICAL_MEMORY_BYTES_MAX is not a recognized counter.
- 2019-05-07 05:23:01,692 WARN [main] counters.FrameworkCounterGroup: MAP_VIRTUAL_MEMORY_BYTES_MAX is not a recognized counter.
- 2019-05-07 05:23:01,693 INFO [main] mapreduce.ImportJobBase: Retrieved 4 records.
- 2019-05-07 05:23:01,693 INFO [main] mapreduce.ImportJobBase: Publishing Hive/Hcat import job data to Listeners for table employee
- Tue May 07 05:23:01 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- 2019-05-07 05:23:02,011 INFO [main] manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
- 2019-05-07 05:23:02,029 INFO [main] hive.HiveImport: Loading uploaded data into Hive
- 2019-05-07 05:23:02,158 INFO [main] conf.HiveConf: Found configuration file file:/usr/local/hive/conf/hive-site.xml
- 2019-05-07 05:23:10,859 main ERROR Could not register mbeans java.security.AccessControlException: access denied ("javax.management.MBeanTrustPermission" "register")
- at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
- at java.lang.SecurityManager.checkPermission(SecurityManager.java:585)
- at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.checkMBeanTrustPermission(DefaultMBeanServerInterceptor.java:1848)
- at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:322)
- at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
- at org.apache.logging.log4j.core.jmx.Server.register(Server.java:389)
- at org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:167)
- at org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:140)
- at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:556)
- at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:261)
- at org.apache.logging.log4j.core.async.AsyncLoggerContext.start(AsyncLoggerContext.java:87)
- at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:240)
- at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:158)
- at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:131)
- at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:101)
- at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:188)
- at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jDefault(LogUtils.java:173)
- at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:106)
- at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:98)
- at org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:81)
- at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:699)
- at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:498)
- at org.apache.sqoop.hive.HiveImport.executeScript(HiveImport.java:331)
- at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:241)
- at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:537)
- at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:628)
- at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
- at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
- at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
- at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
- at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
- at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
- Hive Session ID = bb53ebb3-6291-458c-a869-398e88041cb5
- 2019-05-07 05:23:11,274 INFO [main] SessionState: Hive Session ID = bb53ebb3-6291-458c-a869-398e88041cb5
- Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-3.1.0.jar!/hive-log4j2.properties Async: true
- 2019-05-07 05:23:11,695 INFO [main] SessionState:
- Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-3.1.0.jar!/hive-log4j2.properties Async: true
- 2019-05-07 05:23:11,807 INFO [main] session.SessionState: Created HDFS directory: /tmp/hive/root/bb53ebb3-6291-458c-a869-398e88041cb5
- 2019-05-07 05:23:12,174 INFO [main] session.SessionState: Created local directory: /tmp/hive/bb53ebb3-6291-458c-a869-398e88041cb5
- 2019-05-07 05:23:12,178 INFO [main] session.SessionState: Created HDFS directory: /tmp/hive/root/bb53ebb3-6291-458c-a869-398e88041cb5/_tmp_space.db
- 2019-05-07 05:23:12,199 INFO [main] conf.HiveConf: Using the default value passed in for log id: bb53ebb3-6291-458c-a869-398e88041cb5
- 2019-05-07 05:23:12,210 INFO [main] session.SessionState: Updating thread name to bb53ebb3-6291-458c-a869-398e88041cb5 main
- 2019-05-07 05:23:18,253 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
- 2019-05-07 05:23:18,380 WARN [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
- 2019-05-07 05:23:18,407 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.ObjectStore: ObjectStore, initialize called
- 2019-05-07 05:23:18,436 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] conf.MetastoreConf: Found configuration file file:/usr/local/hive/conf/hive-site.xml
- 2019-05-07 05:23:18,458 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] conf.MetastoreConf: Unable to find config file hivemetastore-site.xml
- 2019-05-07 05:23:18,458 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] conf.MetastoreConf: Found configuration file null
- 2019-05-07 05:23:18,467 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] conf.MetastoreConf: Unable to find config file metastore-site.xml
- 2019-05-07 05:23:18,467 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] conf.MetastoreConf: Found configuration file null
- 2019-05-07 05:23:20,362 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
- 2019-05-07 05:23:22,067 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] hikari.HikariDataSource: HikariPool-1 - Starting...
- Tue May 07 05:23:22 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- 2019-05-07 05:23:22,408 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] hikari.HikariDataSource: HikariPool-1 - Start completed.
- Tue May 07 05:23:22 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- Tue May 07 05:23:22 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- Tue May 07 05:23:22 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- Tue May 07 05:23:22 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- Tue May 07 05:23:22 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- Tue May 07 05:23:22 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- Tue May 07 05:23:22 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- Tue May 07 05:23:23 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- Tue May 07 05:23:23 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- 2019-05-07 05:23:23,320 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] hikari.HikariDataSource: HikariPool-2 - Starting...
- Tue May 07 05:23:23 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- 2019-05-07 05:23:23,339 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] hikari.HikariDataSource: HikariPool-2 - Start completed.
- Tue May 07 05:23:23 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- Tue May 07 05:23:23 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- Tue May 07 05:23:23 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- Tue May 07 05:23:23 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- Tue May 07 05:23:23 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- Tue May 07 05:23:23 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- Tue May 07 05:23:23 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- Tue May 07 05:23:23 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- Tue May 07 05:23:24 PDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
- 2019-05-07 05:23:24,304 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
- 2019-05-07 05:23:25,017 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
- 2019-05-07 05:23:25,022 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.ObjectStore: Initialized ObjectStore
- 2019-05-07 05:23:26,382 WARN [bb53ebb3-6291-458c-a869-398e88041cb5 main] DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
- 2019-05-07 05:23:26,391 WARN [bb53ebb3-6291-458c-a869-398e88041cb5 main] DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
- 2019-05-07 05:23:26,391 WARN [bb53ebb3-6291-458c-a869-398e88041cb5 main] DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
- 2019-05-07 05:23:26,392 WARN [bb53ebb3-6291-458c-a869-398e88041cb5 main] DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
- 2019-05-07 05:23:26,392 WARN [bb53ebb3-6291-458c-a869-398e88041cb5 main] DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
- 2019-05-07 05:23:26,392 WARN [bb53ebb3-6291-458c-a869-398e88041cb5 main] DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
- 2019-05-07 05:23:32,683 WARN [bb53ebb3-6291-458c-a869-398e88041cb5 main] DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
- 2019-05-07 05:23:32,684 WARN [bb53ebb3-6291-458c-a869-398e88041cb5 main] DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
- 2019-05-07 05:23:32,690 WARN [bb53ebb3-6291-458c-a869-398e88041cb5 main] DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
- 2019-05-07 05:23:32,690 WARN [bb53ebb3-6291-458c-a869-398e88041cb5 main] DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
- 2019-05-07 05:23:32,691 WARN [bb53ebb3-6291-458c-a869-398e88041cb5 main] DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
- 2019-05-07 05:23:32,693 WARN [bb53ebb3-6291-458c-a869-398e88041cb5 main] DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
- 2019-05-07 05:23:43,459 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.HiveMetaStore: Added admin role in metastore
- 2019-05-07 05:23:43,481 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.HiveMetaStore: Added public role in metastore
- 2019-05-07 05:23:43,621 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.HiveMetaStore: No user is added in admin role, since config is empty
- 2019-05-07 05:23:46,015 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=root (auth:SIMPLE) retries=1 delay=1 lifetime=0
- 2019-05-07 05:23:46,135 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.HiveMetaStore: 0: get_all_functions
- 2019-05-07 05:23:46,155 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_all_functions
- 2019-05-07 05:23:46,306 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] conf.HiveConf: Using the default value passed in for log id: bb53ebb3-6291-458c-a869-398e88041cb5
- Hive Session ID = 6ce48e99-40f8-4e9c-a80d-c2574e1113b2
- 2019-05-07 05:23:46,320 INFO [pool-9-thread-1] SessionState: Hive Session ID = 6ce48e99-40f8-4e9c-a80d-c2574e1113b2
- 2019-05-07 05:23:46,379 INFO [pool-9-thread-1] session.SessionState: Created HDFS directory: /tmp/hive/root/6ce48e99-40f8-4e9c-a80d-c2574e1113b2
- 2019-05-07 05:23:46,383 INFO [pool-9-thread-1] session.SessionState: Created local directory: /tmp/hive/6ce48e99-40f8-4e9c-a80d-c2574e1113b2
- 2019-05-07 05:23:46,390 INFO [pool-9-thread-1] session.SessionState: Created HDFS directory: /tmp/hive/root/6ce48e99-40f8-4e9c-a80d-c2574e1113b2/_tmp_space.db
- 2019-05-07 05:23:46,404 INFO [pool-9-thread-1] metastore.HiveMetaStore: 1: get_databases: @hive#
- 2019-05-07 05:23:46,405 INFO [pool-9-thread-1] HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_databases: @hive#
- 2019-05-07 05:23:46,411 INFO [pool-9-thread-1] metastore.HiveMetaStore: 1: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
- 2019-05-07 05:23:46,423 INFO [pool-9-thread-1] metastore.ObjectStore: ObjectStore, initialize called
- 2019-05-07 05:23:46,495 INFO [pool-9-thread-1] metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
- 2019-05-07 05:23:46,501 INFO [pool-9-thread-1] metastore.ObjectStore: Initialized ObjectStore
- 2019-05-07 05:23:46,598 INFO [pool-9-thread-1] metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW
- 2019-05-07 05:23:46,600 INFO [pool-9-thread-1] HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW
- 2019-05-07 05:23:46,688 INFO [pool-9-thread-1] metastore.HiveMetaStore: 1: get_multi_table : db=default tbls=
- 2019-05-07 05:23:46,688 INFO [pool-9-thread-1] HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
- 2019-05-07 05:23:46,694 INFO [pool-9-thread-1] metastore.HiveMetaStore: 1: get_tables_by_type: db=@hive#hive_db pat=.*,type=MATERIALIZED_VIEW
- 2019-05-07 05:23:46,694 INFO [pool-9-thread-1] HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#hive_db pat=.*,type=MATERIALIZED_VIEW
- 2019-05-07 05:23:46,722 INFO [pool-9-thread-1] metastore.HiveMetaStore: 1: get_multi_table : db=hive_db tbls=
- 2019-05-07 05:23:46,722 INFO [pool-9-thread-1] HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=hive_db tbls=
- 2019-05-07 05:23:46,723 INFO [pool-9-thread-1] metadata.HiveMaterializedViewsRegistry: Materialized views registry has been initialized
- 2019-05-07 05:23:47,207 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] ql.Driver: Compiling command(queryId=root_20190507052346_f16b38f5-ae5f-4a87-80c9-c37c079b35e8): CREATE TABLE `hive_db`.`employee_sqoop` ( `id` INT, `name` STRING) COMMENT 'Imported by sqoop on 2019/05/07 05:23:02' ROW FORMAT DELIMITED FIELDS TERMINATED BY '\054' LINES TERMINATED BY '\012' STORED AS TEXTFILE
- 2019-05-07 05:23:51,448 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] ql.Driver: Concurrency mode is disabled, not creating a lock manager
- 2019-05-07 05:23:51,487 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] parse.CalcitePlanner: Starting Semantic Analysis
- 2019-05-07 05:23:51,597 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] sqlstd.SQLStdHiveAccessController: Created SQLStdHiveAccessController for session context : HiveAuthzSessionContext [sessionString=bb53ebb3-6291-458c-a869-398e88041cb5, clientType=HIVECLI]
- 2019-05-07 05:23:51,607 WARN [bb53ebb3-6291-458c-a869-398e88041cb5 main] session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
- 2019-05-07 05:23:51,611 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.HiveMetaStoreClient: Mestastore configuration metastore.filter.hook changed from org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl to org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook
- 2019-05-07 05:23:51,621 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.HiveMetaStore: 0: Cleaning up thread local RawStore...
- 2019-05-07 05:23:51,622 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=Cleaning up thread local RawStore...
- 2019-05-07 05:23:51,630 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.HiveMetaStore: 0: Done cleaning up thread local RawStore
- 2019-05-07 05:23:51,634 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=Done cleaning up thread local RawStore
- 2019-05-07 05:23:51,640 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
- 2019-05-07 05:23:51,641 WARN [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
- 2019-05-07 05:23:51,642 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.ObjectStore: ObjectStore, initialize called
- 2019-05-07 05:23:51,705 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
- 2019-05-07 05:23:51,706 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.ObjectStore: Initialized ObjectStore
- 2019-05-07 05:23:51,714 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=root (auth:SIMPLE) retries=1 delay=1 lifetime=0
- 2019-05-07 05:23:51,861 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] parse.CalcitePlanner: Creating table hive_db.employee_sqoop position=13
- 2019-05-07 05:23:51,980 WARN [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
- 2019-05-07 05:23:51,980 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.ObjectStore: ObjectStore, initialize called
- 2019-05-07 05:23:52,005 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
- 2019-05-07 05:23:52,005 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.ObjectStore: Initialized ObjectStore
- 2019-05-07 05:23:52,023 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=root (auth:SIMPLE) retries=1 delay=1 lifetime=0
- 2019-05-07 05:23:52,037 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.HiveMetaStore: 0: get_database: @hive#hive_db
- 2019-05-07 05:23:52,038 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_database: @hive#hive_db
- 2019-05-07 05:23:52,218 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] ql.Driver: Semantic Analysis Completed (retrial = false)
- 2019-05-07 05:23:52,310 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
- 2019-05-07 05:23:52,350 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] ql.Driver: Completed compiling command(queryId=root_20190507052346_f16b38f5-ae5f-4a87-80c9-c37c079b35e8); Time taken: 5.335 seconds
- 2019-05-07 05:23:52,353 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] reexec.ReExecDriver: Execution #1 of query
- 2019-05-07 05:23:52,359 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] ql.Driver: Concurrency mode is disabled, not creating a lock manager
- 2019-05-07 05:23:52,364 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] ql.Driver: Executing command(queryId=root_20190507052346_f16b38f5-ae5f-4a87-80c9-c37c079b35e8): CREATE TABLE `hive_db`.`employee_sqoop` ( `id` INT, `name` STRING) COMMENT 'Imported by sqoop on 2019/05/07 05:23:02' ROW FORMAT DELIMITED FIELDS TERMINATED BY '\054' LINES TERMINATED BY '\012' STORED AS TEXTFILE
- 2019-05-07 05:23:52,452 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] ql.Driver: Starting task [Stage-0:DDL] in serial mode
- 2019-05-07 05:23:52,455 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.HiveMetaStoreClient: Mestastore configuration metastore.filter.hook changed from org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook to org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl
- 2019-05-07 05:23:52,455 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.HiveMetaStore: 0: Cleaning up thread local RawStore...
- 2019-05-07 05:23:52,455 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=Cleaning up thread local RawStore...
- 2019-05-07 05:23:52,455 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] metastore.HiveMetaStore: 0: Done cleaning up thread local RawStore
- 2019-05-07 05:23:52,456 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=Done cleaning up thread local RawStore
- 2019-05-07 05:23:52,789 ERROR [bb53ebb3-6291-458c-a869-398e88041cb5 main] exec.DDLTask: Failed
- java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.ObjectMapper.readerFor(Ljava/lang/Class;)Lcom/fasterxml/jackson/databind/ObjectReader;
- at org.apache.hadoop.hive.common.StatsSetupConst$ColumnStatsAccurate.<clinit>(StatsSetupConst.java:166)
- at org.apache.hadoop.hive.common.StatsSetupConst.parseStatsAcc(StatsSetupConst.java:314)
- at org.apache.hadoop.hive.common.StatsSetupConst.setBasicStatsState(StatsSetupConst.java:231)
- at org.apache.hadoop.hive.common.StatsSetupConst.setStatsStateForCreateTable(StatsSetupConst.java:306)
- at org.apache.hadoop.hive.ql.plan.CreateTableDesc.toTable(CreateTableDesc.java:868)
- at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4913)
- at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:428)
- at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)
- at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
- at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664)
- at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335)
- at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011)
- at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709)
- at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703)
- at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
- at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:218)
- at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
- at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
- at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
- at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335)
- at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:471)
- at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:487)
- at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:793)
- at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
- at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:498)
- at org.apache.sqoop.hive.HiveImport.executeScript(HiveImport.java:331)
- at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:241)
- at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:537)
- at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:628)
- at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
- at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
- at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
- at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
- at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
- at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
- 2019-05-07 05:23:52,799 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] reexec.ReOptimizePlugin: ReOptimization: retryPossible: false
- FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. com.fasterxml.jackson.databind.ObjectMapper.readerFor(Ljava/lang/Class;)Lcom/fasterxml/jackson/databind/ObjectReader;
- 2019-05-07 05:23:52,802 ERROR [bb53ebb3-6291-458c-a869-398e88041cb5 main] ql.Driver: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. com.fasterxml.jackson.databind.ObjectMapper.readerFor(Ljava/lang/Class;)Lcom/fasterxml/jackson/databind/ObjectReader;
- 2019-05-07 05:23:52,802 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] ql.Driver: Completed executing command(queryId=root_20190507052346_f16b38f5-ae5f-4a87-80c9-c37c079b35e8); Time taken: 0.443 seconds
- 2019-05-07 05:23:52,802 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] ql.Driver: Concurrency mode is disabled, not creating a lock manager
- 2019-05-07 05:23:52,819 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] conf.HiveConf: Using the default value passed in for log id: bb53ebb3-6291-458c-a869-398e88041cb5
- 2019-05-07 05:23:52,819 INFO [bb53ebb3-6291-458c-a869-398e88041cb5 main] session.SessionState: Resetting thread name to main
- 2019-05-07 05:23:52,821 INFO [main] conf.HiveConf: Using the default value passed in for log id: bb53ebb3-6291-458c-a869-398e88041cb5
- 2019-05-07 05:23:52,858 INFO [main] session.SessionState: Deleted directory: /tmp/hive/root/bb53ebb3-6291-458c-a869-398e88041cb5 on fs with scheme hdfs
- 2019-05-07 05:23:52,861 INFO [main] session.SessionState: Deleted directory: /tmp/hive/bb53ebb3-6291-458c-a869-398e88041cb5 on fs with scheme file
- 2019-05-07 05:23:52,928 ERROR [main] tool.ImportTool: Import failed: java.io.IOException: Hive CliDriver exited with status=1
- at org.apache.sqoop.hive.HiveImport.executeScript(HiveImport.java:355)
- at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:241)
- at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:537)
- at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:628)
- at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
- at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
- at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
- at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
- at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
- at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
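The MapReduce import itself succeeded; the failure is in the Hive load step. The DDLTask stack trace above shows Hive 3.1 calling ObjectMapper.readerFor(Class), a method that does not exist in the jackson-databind-2.3.1.jar shipped with the Sqoop job (it appears in the libjars uploads near the top of the log). A commonly suggested remedy, offered here as an assumption to verify rather than a guaranteed fix, is to replace Sqoop's bundled Jackson jars with the newer ones Hive already carries:

    # Sketch: back up Sqoop's Jackson 2.3.x jars and copy Hive's newer ones in.
    # SQOOP_HOME and HIVE_HOME are assumed to be /usr/local/sqoop and /usr/local/hive.
    mkdir -p "$SQOOP_HOME/lib/jackson-bak"
    mv "$SQOOP_HOME"/lib/jackson-*.jar "$SQOOP_HOME/lib/jackson-bak/"
    cp "$HIVE_HOME"/lib/jackson-*.jar "$SQOOP_HOME/lib/"

When re-running the import afterwards, note that the MapReduce stage already wrote its output to HDFS, so the leftover target directory may need to be removed (or --delete-target-dir added) before the retry.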