a guest
May 20th, 2012
[INFO] Scanning for projects...
[WARNING]
[WARNING] Some problems were encountered while building the effective model for org.anahata:anahata:jar:1.0-SNAPSHOT
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-jar-plugin is missing. @ line 125, column 15
[WARNING]
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING]
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
[WARNING]
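The warning above means the POM at line 125 declares `maven-jar-plugin` without a `<version>`. A minimal sketch of the fix, pinning an explicit version in the POM's `<build>` section (the version number `2.4` here is an assumption; pin whichever release you actually use):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jar-plugin</artifactId>
      <!-- illustrative version; any explicitly pinned release removes the warning -->
      <version>2.4</version>
    </plugin>
  </plugins>
</build>
```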
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building anahata 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-resources-plugin:2.4.3:resources (default-resources) @ anahata ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/apurv/Desktop/anahad/src/main/resources
[INFO]
[INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @ anahata ---
[INFO] Compiling 32 source files to /home/apurv/Desktop/anahad/target/classes
[INFO]
[INFO] --- maven-resources-plugin:2.4.3:testResources (default-testResources) @ anahata ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/apurv/Desktop/anahad/src/test/resources
[INFO]
[INFO] --- maven-compiler-plugin:2.3.2:testCompile (default-testCompile) @ anahata ---
[INFO] Compiling 3 source files to /home/apurv/Desktop/anahad/target/test-classes
[INFO]
[INFO] --- maven-surefire-plugin:2.7.2:test (default-test) @ anahata ---
[INFO] Surefire report directory: /home/apurv/Desktop/anahad/target/surefire-reports

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.anahata.play.hadoop.ShowFileStatusTest
12/05/20 13:44:02 INFO util.GSet: VM type = 32-bit
12/05/20 13:44:02 INFO util.GSet: 2% max memory = 17.7425 MB
12/05/20 13:44:02 INFO util.GSet: capacity = 2^22 = 4194304 entries
12/05/20 13:44:02 INFO util.GSet: recommended=4194304, actual=4194304
12/05/20 13:44:02 INFO namenode.FSNamesystem: fsOwner=apurv
12/05/20 13:44:02 INFO namenode.FSNamesystem: supergroup=supergroup
12/05/20 13:44:02 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/05/20 13:44:02 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/05/20 13:44:02 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/05/20 13:44:02 INFO namenode.NameNode: Caching file names occuring more than 10 times
12/05/20 13:44:02 INFO common.Storage: Image file of size 111 saved in 0 seconds.
12/05/20 13:44:03 INFO common.Storage: Storage directory /tmp/dfs/name1 has been successfully formatted.
12/05/20 13:44:03 INFO common.Storage: Image file of size 111 saved in 0 seconds.
12/05/20 13:44:03 INFO common.Storage: Storage directory /tmp/dfs/name2 has been successfully formatted.
12/05/20 13:44:03 WARN impl.MetricsSystemImpl: Metrics system not started: Cannot locate configuration: tried hadoop-metrics2-namenode.properties, hadoop-metrics2.properties
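The "Metrics system not started" warning is harmless in a unit test: it only means no `hadoop-metrics2.properties` was found on the classpath, so no metrics sinks start. If you want to silence it, a minimal sketch of a `src/test/resources/hadoop-metrics2.properties` (the sink, filename, and period values here are illustrative assumptions, not from this build):

```properties
# Illustrative metrics2 config; values are assumptions, tune as needed.
*.period=60
namenode.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
namenode.sink.file.filename=namenode-metrics.out
```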
12/05/20 13:44:03 INFO util.GSet: VM type = 32-bit
12/05/20 13:44:03 INFO util.GSet: 2% max memory = 17.7425 MB
12/05/20 13:44:03 INFO util.GSet: capacity = 2^22 = 4194304 entries
12/05/20 13:44:03 INFO util.GSet: recommended=4194304, actual=4194304
12/05/20 13:44:03 INFO namenode.FSNamesystem: fsOwner=apurv
12/05/20 13:44:03 INFO namenode.FSNamesystem: supergroup=supergroup
12/05/20 13:44:03 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/05/20 13:44:03 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/05/20 13:44:03 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/05/20 13:44:03 INFO namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
12/05/20 13:44:03 INFO namenode.NameNode: Caching file names occuring more than 10 times
12/05/20 13:44:03 INFO common.Storage: Number of files = 1
12/05/20 13:44:03 INFO common.Storage: Number of files under construction = 0
12/05/20 13:44:03 INFO common.Storage: Image file of size 111 loaded in 0 seconds.
12/05/20 13:44:03 INFO common.Storage: Edits file /tmp/dfs/name1/current/edits of size 4 edits # 0 loaded in 0 seconds.
12/05/20 13:44:03 INFO common.Storage: Image file of size 111 saved in 0 seconds.
12/05/20 13:44:03 INFO common.Storage: Image file of size 111 saved in 0 seconds.
12/05/20 13:44:04 INFO common.Storage: Image file of size 111 saved in 0 seconds.
12/05/20 13:44:04 INFO common.Storage: Image file of size 111 saved in 0 seconds.
12/05/20 13:44:04 INFO namenode.NameCache: initialized with 0 entries 0 lookups
12/05/20 13:44:04 INFO namenode.FSNamesystem: Finished loading FSImage in 913 msecs
12/05/20 13:44:04 INFO namenode.FSNamesystem: Total number of blocks = 0
12/05/20 13:44:04 INFO namenode.FSNamesystem: Number of invalid blocks = 0
12/05/20 13:44:04 INFO namenode.FSNamesystem: Number of under-replicated blocks = 0
12/05/20 13:44:04 INFO namenode.FSNamesystem: Number of over-replicated blocks = 0
12/05/20 13:44:04 INFO hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 12 msec
12/05/20 13:44:04 INFO hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
12/05/20 13:44:04 INFO hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
12/05/20 13:44:04 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
12/05/20 13:44:04 INFO util.HostsFileReader: Refreshing hosts (include/exclude) list
12/05/20 13:44:04 INFO namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
12/05/20 13:44:04 INFO namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
12/05/20 13:44:04 INFO namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
12/05/20 13:44:04 INFO namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
12/05/20 13:44:04 INFO ipc.Server: Starting SocketReader
12/05/20 13:44:04 INFO namenode.NameNode: Namenode up at: localhost/127.0.0.1:59798
12/05/20 13:44:04 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
12/05/20 13:44:04 INFO http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
12/05/20 13:44:04 INFO http.HttpServer: dfs.webhdfs.enabled = false
12/05/20 13:44:04 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
12/05/20 13:44:04 INFO http.HttpServer: listener.getLocalPort() returned 48665 webServer.getConnectors()[0].getLocalPort() returned 48665
12/05/20 13:44:04 INFO http.HttpServer: Jetty bound to port 48665
12/05/20 13:44:04 INFO mortbay.log: jetty-6.1.14
12/05/20 13:44:04 INFO mortbay.log: Extract jar:file:/home/apurv/.m2/repository/org/apache/hadoop/hadoop-core/1.0.0/hadoop-core-1.0.0.jar!/webapps/hdfs to /tmp/Jetty_localhost_48665_hdfs____.1ih59h/webapp
12/05/20 13:44:05 INFO mortbay.log: Started SelectChannelConnector@localhost:48665
12/05/20 13:44:05 INFO namenode.NameNode: Web-server up at: localhost:48665
12/05/20 13:44:05 INFO ipc.Server: IPC Server Responder: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server listener on 59798: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server handler 0 on 59798: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server handler 1 on 59798: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server handler 2 on 59798: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server handler 3 on 59798: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server handler 4 on 59798: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server handler 5 on 59798: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server handler 6 on 59798: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server handler 7 on 59798: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server handler 8 on 59798: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server handler 9 on 59798: starting
Starting DataNode 0 with dfs.data.dir: /tmp/dfs/data/data1,/tmp/dfs/data/data2
12/05/20 13:44:05 WARN impl.MetricsSystemImpl: Metrics system not started: Cannot locate configuration: tried hadoop-metrics2-datanode.properties, hadoop-metrics2.properties
12/05/20 13:44:05 WARN util.MBeans: Hadoop:service=DataNode,name=MetricsSystem,sub=Control
javax.management.InstanceAlreadyExistsException: MXBean already registered with name Hadoop:service=NameNode,name=MetricsSystem,sub=Control
    at com.sun.jmx.mbeanserver.MXBeanLookup.addReference(MXBeanLookup.java:120)
    at com.sun.jmx.mbeanserver.MXBeanSupport.register(MXBeanSupport.java:143)
    at com.sun.jmx.mbeanserver.MBeanSupport.preRegister2(MBeanSupport.java:183)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:941)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482)
    at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:56)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.initSystemMBean(MetricsSystemImpl.java:500)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:140)
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:40)
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:50)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1491)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1467)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:417)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:280)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:124)
    at org.anahata.play.hadoop.ShowFileStatusTest.setUp(ShowFileStatusTest.java:57)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
    at org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
    at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:35)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:115)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:97)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.maven.surefire.booter.ProviderFactory$ClassLoaderProxy.invoke(ProviderFactory.java:103)
    at $Proxy0.invoke(Unknown Source)
    at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:150)
    at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcess(SurefireStarter.java:91)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:69)
12/05/20 13:44:05 INFO common.Storage: Storage directory /tmp/dfs/data/data1 is not formatted.
12/05/20 13:44:05 INFO common.Storage: Formatting ...
12/05/20 13:44:05 INFO common.Storage: Storage directory /tmp/dfs/data/data2 is not formatted.
12/05/20 13:44:05 INFO common.Storage: Formatting ...
12/05/20 13:44:05 INFO datanode.DataNode: Registered FSDatasetStatusMBean
12/05/20 13:44:05 INFO datanode.DataNode: Opened info server at 34276
12/05/20 13:44:05 INFO datanode.DataNode: Balancing bandwith is 1048576 bytes/s
12/05/20 13:44:05 INFO http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
12/05/20 13:44:05 INFO datanode.DataNode: dfs.webhdfs.enabled = false
12/05/20 13:44:05 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
12/05/20 13:44:05 INFO http.HttpServer: listener.getLocalPort() returned 48133 webServer.getConnectors()[0].getLocalPort() returned 48133
12/05/20 13:44:05 INFO http.HttpServer: Jetty bound to port 48133
12/05/20 13:44:05 INFO mortbay.log: jetty-6.1.14
12/05/20 13:44:05 INFO mortbay.log: Extract jar:file:/home/apurv/.m2/repository/org/apache/hadoop/hadoop-core/1.0.0/hadoop-core-1.0.0.jar!/webapps/datanode to /tmp/Jetty_localhost_48133_datanode____1wzd10/webapp
12/05/20 13:44:05 INFO mortbay.log: Started SelectChannelConnector@localhost:48133
12/05/20 13:44:05 INFO datanode.DataNode: dnRegistration = DatanodeRegistration(127.0.0.1:34276, storageID=, infoPort=48133, ipcPort=56739)
12/05/20 13:44:05 INFO hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:34276 storage DS-1332539920-127.0.1.1-34276-1337501645976
12/05/20 13:44:05 INFO net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:34276
12/05/20 13:44:05 INFO ipc.Server: Starting SocketReader
12/05/20 13:44:06 INFO datanode.DataNode: New storage id DS-1332539920-127.0.1.1-34276-1337501645976 is assigned to data-node 127.0.0.1:34276
12/05/20 13:44:06 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:34276, storageID=DS-1332539920-127.0.1.1-34276-1337501645976, infoPort=48133, ipcPort=56739)In DataNode.run, data = FSDataset{dirpath='/tmp/dfs/data/data1/current,/tmp/dfs/data/data2/current'}
12/05/20 13:44:06 INFO ipc.Server: IPC Server Responder: starting
12/05/20 13:44:06 INFO ipc.Server: IPC Server listener on 56739: starting
12/05/20 13:44:06 INFO ipc.Server: IPC Server handler 0 on 56739: starting
12/05/20 13:44:06 INFO ipc.Server: IPC Server handler 1 on 56739: starting
12/05/20 13:44:06 INFO datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
12/05/20 13:44:06 INFO ipc.Server: IPC Server handler 2 on 56739: starting
12/05/20 13:44:06 INFO hdfs.StateChange: *BLOCK* NameSystem.processReport: from 127.0.0.1:34276, blocks: 0, processing time: 1 msecs
12/05/20 13:44:06 INFO datanode.DataNode: BlockReport of 0 blocks took 0 msec to generate and 23 msecs for RPC and NN processing
12/05/20 13:44:06 INFO datanode.DataNode: Starting Periodic block scanner.
12/05/20 13:44:06 ERROR security.UserGroupInformation: PriviledgedActionException as:apurv cause:java.io.IOException: java.lang.NoClassDefFoundError: javax/ws/rs/core/StreamingOutput
12/05/20 13:44:06 INFO ipc.Server: IPC Server handler 8 on 59798, call create(/dir/file, rwxr-xr-x, DFSClient_1676392726, true, true, 1, 67108864) from 127.0.0.1:47240: error: java.io.IOException: java.lang.NoClassDefFoundError: javax/ws/rs/core/StreamingOutput
java.io.IOException: java.lang.NoClassDefFoundError: javax/ws/rs/core/StreamingOutput
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getClientMachine(NameNode.java:589)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:619)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
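This is the failure that actually breaks the test: the NameNode's `create` call reaches code in hadoop-core 1.0.0 that references the JAX-RS class `javax.ws.rs.core.StreamingOutput`, and no JSR-311 API jar is on the test classpath. One plausible fix, assuming a Maven build, is to add the JSR-311 API as a test dependency (coordinates below exist on Maven Central; the choice of this artifact over pulling in jersey-core is a judgment call, verify against your dependency tree with `mvn dependency:tree`):

```xml
<dependency>
  <groupId>javax.ws.rs</groupId>
  <artifactId>jsr311-api</artifactId>
  <version>1.1.1</version>
  <scope>test</scope>
</dependency>
```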
Shutting down the Mini HDFS Cluster
Shutting down DataNode 0
12/05/20 13:44:06 INFO ipc.Server: Stopping server on 56739
12/05/20 13:44:06 INFO ipc.Server: IPC Server handler 0 on 56739: exiting
12/05/20 13:44:06 INFO ipc.Server: IPC Server handler 1 on 56739: exiting
12/05/20 13:44:06 INFO ipc.Server: IPC Server handler 2 on 56739: exiting
12/05/20 13:44:06 INFO ipc.Server: Stopping IPC Server listener on 56739
12/05/20 13:44:06 INFO ipc.Server: Stopping IPC Server Responder
12/05/20 13:44:06 INFO metrics.RpcInstrumentation: shut down
12/05/20 13:44:06 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:34276, storageID=DS-1332539920-127.0.1.1-34276-1337501645976, infoPort=48133, ipcPort=56739):DataXceiveServer:java.nio.channels.AsynchronousCloseException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:131)
    at java.lang.Thread.run(Thread.java:662)

12/05/20 13:44:06 INFO datanode.DataNode: Exiting DataXceiveServer
12/05/20 13:44:06 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
12/05/20 13:44:06 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
12/05/20 13:44:06 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:34276, storageID=DS-1332539920-127.0.1.1-34276-1337501645976, infoPort=48133, ipcPort=56739):Finishing DataNode in: FSDataset{dirpath='/tmp/dfs/data/data1/current,/tmp/dfs/data/data2/current'}
12/05/20 13:44:06 WARN util.MBeans: Hadoop:service=DataNode,name=DataNodeInfo
javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=DataNodeInfo
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1094)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:415)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:403)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:506)
    at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.unRegisterMXBean(DataNode.java:513)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:726)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1442)
    at java.lang.Thread.run(Thread.java:662)
12/05/20 13:44:06 INFO ipc.Server: Stopping server on 56739
12/05/20 13:44:06 INFO metrics.RpcInstrumentation: shut down
12/05/20 13:44:06 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
12/05/20 13:44:06 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
12/05/20 13:44:06 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
12/05/20 13:44:06 WARN util.MBeans: Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId2044648328
javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId2044648328
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1094)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:415)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:403)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:506)
    at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71)
    at org.apache.hadoop.hdfs.server.datanode.FSDataset.shutdown(FSDataset.java:1934)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:788)
    at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:566)
    at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:550)
    at org.anahata.play.hadoop.ShowFileStatusTest.tearDown(ShowFileStatusTest.java:71)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
    at org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
    at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:35)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:115)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:97)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.maven.surefire.booter.ProviderFactory$ClassLoaderProxy.invoke(ProviderFactory.java:103)
    at $Proxy0.invoke(Unknown Source)
    at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:150)
    at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcess(SurefireStarter.java:91)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:69)
12/05/20 13:44:06 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.