2013-10-23 19:44:02.219720: Beginning Progressive Cactus Alignment

Got message from job at time: 1382528642.58 : Starting preprocessor phase target at 1382528642.57 seconds
Got message from job at time: 1382528656.02 : Blocking on ktserver <kyoto_tycoon database_dir="/home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/Anc14/Anc14/Anc14_DB" in_memory="1" port="1984" snapshot="0" />
with killPath /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/gTD15/tmp_3aXMfbyAFP/tmp_dOGDhjudGD_kill.txt
Got message from job at time: 1382528694.84 : Starting caf phase target with index 0 at 1382528656.14 seconds (recursing = 1)
Got message from job at time: 1382528694.84 : Pinch graph component with 1915 nodes and 2983 edges is being split up by breaking 826 edges to reduce size to less than 489 max, but found 0 pointless edges
Got message from job at time: 1382528694.84 : Attaching the sequence to the cactus root 5120029826366796271, header SE007 with length 1698318 and 830977 total bases aligned and 0 bases aligned to other chromosome threads
Got message from job at time: 1382528696.62 : Starting bar phase target with index 0 at 1382528694.84 seconds (recursing = 1)
Got message from job at time: 1382529508.49 : Starting avg phase target with index 0 at 1382529508.48 seconds (recursing = 0)
Got message from job at time: 1382529508.49 : Starting reference phase target with index 0 at 1382529508.48 seconds (recursing = 1)
Got message from job at time: 1382529518.59 : Blocking on ktserver <kyoto_tycoon database_dir="/home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/Anc14/Anc14/Anc14_DB_tempSecondaryDatabaseDir_0.851997632972" in_memory="1" port="2084" snapshot="0" />
with killPath /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/gTD1/tmp_U9dtRI47PR/tmp_K7Bdb9V47I_kill.txt
Got message from job at time: 1382529559.93 : Launching ktserver <kyoto_tycoon database_dir="/home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/Anc14/Anc14/Anc14_DB_tempSecondaryDatabaseDir_0.851997632972" in_memory="1" port="2084" snapshot="0" />
with killPath /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/gTD1/tmp_U9dtRI47PR/tmp_K7Bdb9V47I_kill.txt
Got message from job at time: 1382529560.62 : Killing ktserver <kyoto_tycoon database_dir="/home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/Anc14/Anc14/Anc14_DB_tempSecondaryDatabaseDir_0.851997632972" host="iminkin-VirtualBox" in_memory="1" port="2084" snapshot="0" />
with killPath /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/gTD1/tmp_U9dtRI47PR/tmp_K7Bdb9V47I_kill.txt
Got message from job at time: 1382529560.62 : Report for iminkin-VirtualBox:2084:
cnt_get: 28605
cnt_get_misses: 0
cnt_misc: 0
cnt_remove: 28604
cnt_remove_misses: 0
cnt_script: 0
cnt_set: 28605
cnt_set_misses: 0
conf_kc_features: (atomic)(zlib)
conf_kc_version: 1.2.76 (16.13)
conf_kt_features: (epoll)
conf_kt_version: 0.9.56 (2.19)
conf_os_name: Linux
db_0: count=1 size=269021116 path=:
db_total_count: 1
db_total_size: 269021116
serv_conn_count: 1
serv_current_time: 1382529556.100855
serv_proc_id: 18557
serv_running_term: 47.510640
serv_task_count: 0
serv_thread_count: 64
sys_mem_cached: 33427456
sys_mem_free: 1179172864
sys_mem_peak: 1446133760
sys_mem_rss: 69787648
sys_mem_size: 1379700736
sys_mem_total: 1714315264
sys_ru_stime: 0.128000
sys_ru_utime: 0.188000

Contents of /home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/Anc14/Anc14/Anc14_DB_tempSecondaryDatabaseDir_0.851997632972:
total 4.0K
-rw-rw-r-- 1 iminkin iminkin 457 Oct 23 19:58 ktout.log

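The <kyoto_tycoon ... /> elements in the "Blocking on ktserver" and "Launching ktserver" messages above are well-formed XML, so the port and database directory a job is waiting on can be pulled out of a log line mechanically. A minimal sketch using only the Python standard library (nothing from Progressive Cactus itself; the sample line is copied from the log above):

import re
import xml.etree.ElementTree as ET

# Sample log line, copied verbatim from the run above.
line = ('Got message from job at time: 1382529518.59 : Blocking on ktserver '
        '<kyoto_tycoon database_dir="/home/iminkin/Program/progressiveCactus/work6/temp123/'
        'progressiveAlignment/Anc14/Anc14/Anc14_DB_tempSecondaryDatabaseDir_0.851997632972" '
        'in_memory="1" port="2084" snapshot="0" />')

# The embedded element is valid XML, so it can be isolated and parsed directly.
match = re.search(r'<kyoto_tycoon[^>]*/>', line)
if match:
    elem = ET.fromstring(match.group(0))
    print("port %s, database_dir %s" % (elem.get("port"), elem.get("database_dir")))
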
Got message from job at time: 1382529565.84 : Starting reference phase target with index 0 at 1382529560.63 seconds (recursing = 1)
Got message from job at time: 1382529565.84 : Starting Reference Extract Phase
Got message from job at time: 1382529565.84 : Starting check phase target with index 0 at 1382529565.83 seconds (recursing = 0)
Got message from job at time: 1382529575.87 : Blocking on ktserver <kyoto_tycoon database_dir="/home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/Anc14/Anc14/Anc14_DB_tempSecondaryDatabaseDir_0.851997632972" in_memory="1" port="2084" snapshot="0" />
with killPath /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/gTD1/tmp_U9dtRI47PR/tmp_TalSPAGl6l_kill.txt
Got message from job at time: 1382529585.93 : Launching ktserver <kyoto_tycoon database_dir="/home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/Anc14/Anc14/Anc14_DB_tempSecondaryDatabaseDir_0.851997632972" in_memory="1" port="2084" snapshot="0" />
with killPath /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/gTD1/tmp_U9dtRI47PR/tmp_TalSPAGl6l_kill.txt
Got message from job at time: 1382529586.17 : Killing ktserver <kyoto_tycoon database_dir="/home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/Anc14/Anc14/Anc14_DB_tempSecondaryDatabaseDir_0.851997632972" host="iminkin-VirtualBox" in_memory="1" port="2084" snapshot="0" />
with killPath /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/gTD1/tmp_U9dtRI47PR/tmp_TalSPAGl6l_kill.txt
Got message from job at time: 1382529586.17 : Report for iminkin-VirtualBox:2084:
cnt_get: 130576
cnt_get_misses: 0
cnt_misc: 0
cnt_remove: 130572
cnt_remove_misses: 0
cnt_script: 0
cnt_set: 130576
cnt_set_misses: 0
conf_kc_features: (atomic)(zlib)
conf_kc_version: 1.2.76 (16.13)
conf_kt_features: (epoll)
conf_kt_version: 0.9.56 (2.19)
conf_os_name: Linux
db_0: count=4 size=271119180 path=:
db_total_count: 4
db_total_size: 271119180
serv_conn_count: 1
serv_current_time: 1382529583.019693
serv_proc_id: 18959
serv_running_term: 17.165976
serv_task_count: 0
serv_thread_count: 64
sys_mem_cached: 49475584
sys_mem_free: 1016328192
sys_mem_peak: 1446133760
sys_mem_rss: 207855616
sys_mem_size: 1379729408
sys_mem_total: 1714315264
sys_ru_stime: 0.176000
sys_ru_utime: 0.448000

Contents of /home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/Anc14/Anc14/Anc14_DB_tempSecondaryDatabaseDir_0.851997632972:
total 4.0K
-rw-rw-r-- 1 iminkin iminkin 457 Oct 23 19:59 ktout.log


The job seems to have left a log file, indicating failure: /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/t0/job
Reporting file: /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/t0/log.txt
log.txt: Traceback (most recent call last):
log.txt: File "/home/iminkin/Program/progressiveCactus/submodules/jobTree/src/jobTreeSlave.py", line 223, in main
log.txt: defaultMemory=defaultMemory, defaultCpu=defaultCpu, depth=depth)
log.txt: File "/home/iminkin/Program/progressiveCactus/submodules/jobTree/scriptTree/stack.py", line 153, in execute
log.txt: self.target.run()
log.txt: File "/home/iminkin/Program/progressiveCactus/submodules/cactus/pipeline/ktserverJobTree.py", line 134, in run
log.txt: killPingInterval=self.runTimestep)
log.txt: File "/home/iminkin/Program/progressiveCactus/submodules/cactus/pipeline/ktserverControl.py", line 129, in runKtserver
log.txt: raise e
log.txt: RuntimeError: Ktserver already found running with log /home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/Anc14/Anc14/Anc14_DB_tempSecondaryDatabaseDir_0.851997632972/ktout.log
log.txt: Exiting the slave because of a failed job on host iminkin-VirtualBox
log.txt: Due to failure we are reducing the remaining retry count of job /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/t0/job to 0
log.txt: We have set the default memory of the failed job to 4294967296 bytes
Job: /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/t0/job is completely failed
The job seems to have left a log file, indicating failure: /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/t1/job
Reporting file: /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/t1/log.txt
log.txt: Traceback (most recent call last):
log.txt: File "/home/iminkin/Program/progressiveCactus/submodules/jobTree/src/jobTreeSlave.py", line 223, in main
log.txt: defaultMemory=defaultMemory, defaultCpu=defaultCpu, depth=depth)
log.txt: File "/home/iminkin/Program/progressiveCactus/submodules/jobTree/scriptTree/stack.py", line 153, in execute
log.txt: self.target.run()
log.txt: File "/home/iminkin/Program/progressiveCactus/submodules/cactus/pipeline/ktserverJobTree.py", line 167, in run
log.txt: self.blockTimeout, self.blockTimestep)
log.txt: File "/home/iminkin/Program/progressiveCactus/submodules/cactus/pipeline/ktserverControl.py", line 223, in blockUntilKtserverIsRunnning
log.txt: killSwitchPath):
log.txt: File "/home/iminkin/Program/progressiveCactus/submodules/cactus/pipeline/ktserverControl.py", line 284, in __isKtServerRunning
log.txt: killSwitchPath)
log.txt: File "/home/iminkin/Program/progressiveCactus/submodules/cactus/pipeline/ktserverControl.py", line 204, in __readStatusFromSwitchFile
log.txt: raise RuntimeError("Ktserver polling detected fatal error")
log.txt: RuntimeError: Ktserver polling detected fatal error
log.txt: Exiting the slave because of a failed job on host iminkin-VirtualBox
log.txt: Due to failure we are reducing the remaining retry count of job /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/t1/job to 0
log.txt: We have set the default memory of the failed job to 4294967296 bytes
Job: /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/t1/job is completely failed

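Both failures above come down to the same leftover state: runKtserver raised "Ktserver already found running" because a ktout.log from an earlier attempt was still present in the temporary database directory, and the companion job blocking on that server then appears to have read a fatal status from its kill-switch file ("Ktserver polling detected fatal error"). A minimal cleanup sketch to run before retrying, under the assumption that no ktserver from another run is still legitimately using this directory; the pkill pattern and the deletion of ktout.log are illustrative, not a documented Progressive Cactus recovery procedure:

import os
import subprocess

# Directory and log file names are taken verbatim from the error messages above.
db_dir = ("/home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/"
          "Anc14/Anc14/Anc14_DB_tempSecondaryDatabaseDir_0.851997632972")
stale_log = os.path.join(db_dir, "ktout.log")

# Assumption: any surviving ktserver process belongs to this aborted run and may be stopped.
subprocess.call(["pkill", "-f", "ktserver"])

# Remove the stale log so the next launch attempt does not refuse to start.
if os.path.exists(stale_log):
    os.remove(stale_log)
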
**********************************************************************************
**********************************************************************************
** ALERT **
**********************************************************************************
**********************************************************************************
The only jobs that I have detected running for at least the past 4200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
* wait a bit. Maybe it will resume
* look for fatal errors in ./work6/temp123/cactus.log
* jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
* check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
* if not it's probably time to abort.
Note that you can (and probably should) kill any trailing ktserver jobs by running
rm -rf ./work6/temp123/jobTree
They will eventually timeout on their own but it could take days.

  157. **********************************************************************************
  158. **********************************************************************************
  159. ** ALERT **
  160. **********************************************************************************
  161. **********************************************************************************
  162. The only jobs that I have detected running for at least the past 4800s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  163. * wait a bit. Maybe it will resume
  164. * look for fatal errors in ./work6/temp123/cactus.log
  165. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  166. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  167. * if not it's probably time to abort.
  168. Note that you can (and probably should) kill any trailing ktserver jobs by running
  169. rm -rf ./work6/temp123/jobTree
  170. They will eventually timeout on their own but it could take days.
  171.  
  172.  
  173.  
  174. **********************************************************************************
  175. **********************************************************************************
  176. ** ALERT **
  177. **********************************************************************************
  178. **********************************************************************************
  179. The only jobs that I have detected running for at least the past 5400s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  180. * wait a bit. Maybe it will resume
  181. * look for fatal errors in ./work6/temp123/cactus.log
  182. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  183. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  184. * if not it's probably time to abort.
  185. Note that you can (and probably should) kill any trailing ktserver jobs by running
  186. rm -rf ./work6/temp123/jobTree
  187. They will eventually timeout on their own but it could take days.
  188.  
  189.  
  190.  
  191. **********************************************************************************
  192. **********************************************************************************
  193. ** ALERT **
  194. **********************************************************************************
  195. **********************************************************************************
  196. The only jobs that I have detected running for at least the past 6000s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  197. * wait a bit. Maybe it will resume
  198. * look for fatal errors in ./work6/temp123/cactus.log
  199. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  200. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  201. * if not it's probably time to abort.
  202. Note that you can (and probably should) kill any trailing ktserver jobs by running
  203. rm -rf ./work6/temp123/jobTree
  204. They will eventually timeout on their own but it could take days.
  205.  
  206.  
  207.  
  208. **********************************************************************************
  209. **********************************************************************************
  210. ** ALERT **
  211. **********************************************************************************
  212. **********************************************************************************
  213. The only jobs that I have detected running for at least the past 6600s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  214. * wait a bit. Maybe it will resume
  215. * look for fatal errors in ./work6/temp123/cactus.log
  216. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  217. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  218. * if not it's probably time to abort.
  219. Note that you can (and probably should) kill any trailing ktserver jobs by running
  220. rm -rf ./work6/temp123/jobTree
  221. They will eventually timeout on their own but it could take days.
  222.  
  223.  
  224.  
  225. **********************************************************************************
  226. **********************************************************************************
  227. ** ALERT **
  228. **********************************************************************************
  229. **********************************************************************************
  230. The only jobs that I have detected running for at least the past 7200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  231. * wait a bit. Maybe it will resume
  232. * look for fatal errors in ./work6/temp123/cactus.log
  233. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  234. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  235. * if not it's probably time to abort.
  236. Note that you can (and probably should) kill any trailing ktserver jobs by running
  237. rm -rf ./work6/temp123/jobTree
  238. They will eventually timeout on their own but it could take days.
  239.  
  240.  
  241.  
  242. **********************************************************************************
  243. **********************************************************************************
  244. ** ALERT **
  245. **********************************************************************************
  246. **********************************************************************************
  247. The only jobs that I have detected running for at least the past 7800s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  248. * wait a bit. Maybe it will resume
  249. * look for fatal errors in ./work6/temp123/cactus.log
  250. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  251. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  252. * if not it's probably time to abort.
  253. Note that you can (and probably should) kill any trailing ktserver jobs by running
  254. rm -rf ./work6/temp123/jobTree
  255. They will eventually timeout on their own but it could take days.
  256.  
  257.  
  258.  
  259. **********************************************************************************
  260. **********************************************************************************
  261. ** ALERT **
  262. **********************************************************************************
  263. **********************************************************************************
  264. The only jobs that I have detected running for at least the past 8400s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  265. * wait a bit. Maybe it will resume
  266. * look for fatal errors in ./work6/temp123/cactus.log
  267. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  268. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  269. * if not it's probably time to abort.
  270. Note that you can (and probably should) kill any trailing ktserver jobs by running
  271. rm -rf ./work6/temp123/jobTree
  272. They will eventually timeout on their own but it could take days.
  273.  
  274.  
  275.  
  276. **********************************************************************************
  277. **********************************************************************************
  278. ** ALERT **
  279. **********************************************************************************
  280. **********************************************************************************
  281. The only jobs that I have detected running for at least the past 9000s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  282. * wait a bit. Maybe it will resume
  283. * look for fatal errors in ./work6/temp123/cactus.log
  284. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  285. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  286. * if not it's probably time to abort.
  287. Note that you can (and probably should) kill any trailing ktserver jobs by running
  288. rm -rf ./work6/temp123/jobTree
  289. They will eventually timeout on their own but it could take days.
  290.  
  291.  
  292.  
  293. **********************************************************************************
  294. **********************************************************************************
  295. ** ALERT **
  296. **********************************************************************************
  297. **********************************************************************************
  298. The only jobs that I have detected running for at least the past 9600s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  299. * wait a bit. Maybe it will resume
  300. * look for fatal errors in ./work6/temp123/cactus.log
  301. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  302. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  303. * if not it's probably time to abort.
  304. Note that you can (and probably should) kill any trailing ktserver jobs by running
  305. rm -rf ./work6/temp123/jobTree
  306. They will eventually timeout on their own but it could take days.
  307.  
  308.  
  309.  
  310. **********************************************************************************
  311. **********************************************************************************
  312. ** ALERT **
  313. **********************************************************************************
  314. **********************************************************************************
  315. The only jobs that I have detected running for at least the past 10200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  316. * wait a bit. Maybe it will resume
  317. * look for fatal errors in ./work6/temp123/cactus.log
  318. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  319. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  320. * if not it's probably time to abort.
  321. Note that you can (and probably should) kill any trailing ktserver jobs by running
  322. rm -rf ./work6/temp123/jobTree
  323. They will eventually timeout on their own but it could take days.
  324.  
  325.  
  326.  
  327. **********************************************************************************
  328. **********************************************************************************
  329. ** ALERT **
  330. **********************************************************************************
  331. **********************************************************************************
  332. The only jobs that I have detected running for at least the past 10800s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  333. * wait a bit. Maybe it will resume
  334. * look for fatal errors in ./work6/temp123/cactus.log
  335. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  336. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  337. * if not it's probably time to abort.
  338. Note that you can (and probably should) kill any trailing ktserver jobs by running
  339. rm -rf ./work6/temp123/jobTree
  340. They will eventually timeout on their own but it could take days.
  341.  
  342.  
  343.  
  344. **********************************************************************************
  345. **********************************************************************************
  346. ** ALERT **
  347. **********************************************************************************
  348. **********************************************************************************
  349. The only jobs that I have detected running for at least the past 11400s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  350. * wait a bit. Maybe it will resume
  351. * look for fatal errors in ./work6/temp123/cactus.log
  352. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  353. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  354. * if not it's probably time to abort.
  355. Note that you can (and probably should) kill any trailing ktserver jobs by running
  356. rm -rf ./work6/temp123/jobTree
  357. They will eventually timeout on their own but it could take days.
  358.  
  359.  
  360.  
  361. **********************************************************************************
  362. **********************************************************************************
  363. ** ALERT **
  364. **********************************************************************************
  365. **********************************************************************************
  366. The only jobs that I have detected running for at least the past 12000s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  367. * wait a bit. Maybe it will resume
  368. * look for fatal errors in ./work6/temp123/cactus.log
  369. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  370. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  371. * if not it's probably time to abort.
  372. Note that you can (and probably should) kill any trailing ktserver jobs by running
  373. rm -rf ./work6/temp123/jobTree
  374. They will eventually timeout on their own but it could take days.
  375.  
  376.  
  377.  
  378. **********************************************************************************
  379. **********************************************************************************
  380. ** ALERT **
  381. **********************************************************************************
  382. **********************************************************************************
  383. The only jobs that I have detected running for at least the past 12600s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  384. * wait a bit. Maybe it will resume
  385. * look for fatal errors in ./work6/temp123/cactus.log
  386. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  387. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  388. * if not it's probably time to abort.
  389. Note that you can (and probably should) kill any trailing ktserver jobs by running
  390. rm -rf ./work6/temp123/jobTree
  391. They will eventually timeout on their own but it could take days.
  392.  
  393.  
  394.  
  395. **********************************************************************************
  396. **********************************************************************************
  397. ** ALERT **
  398. **********************************************************************************
  399. **********************************************************************************
  400. The only jobs that I have detected running for at least the past 13200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  401. * wait a bit. Maybe it will resume
  402. * look for fatal errors in ./work6/temp123/cactus.log
  403. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  404. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  405. * if not it's probably time to abort.
  406. Note that you can (and probably should) kill any trailing ktserver jobs by running
  407. rm -rf ./work6/temp123/jobTree
  408. They will eventually timeout on their own but it could take days.
  409.  
  410.  
  411.  
  412. **********************************************************************************
  413. **********************************************************************************
  414. ** ALERT **
  415. **********************************************************************************
  416. **********************************************************************************
  417. The only jobs that I have detected running for at least the past 13800s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  418. * wait a bit. Maybe it will resume
  419. * look for fatal errors in ./work6/temp123/cactus.log
  420. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  421. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  422. * if not it's probably time to abort.
  423. Note that you can (and probably should) kill any trailing ktserver jobs by running
  424. rm -rf ./work6/temp123/jobTree
  425. They will eventually timeout on their own but it could take days.
  426.  
  427.  
  428.  
  429. **********************************************************************************
  430. **********************************************************************************
  431. ** ALERT **
  432. **********************************************************************************
  433. **********************************************************************************
  434. The only jobs that I have detected running for at least the past 14400s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  435. * wait a bit. Maybe it will resume
  436. * look for fatal errors in ./work6/temp123/cactus.log
  437. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  438. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  439. * if not it's probably time to abort.
  440. Note that you can (and probably should) kill any trailing ktserver jobs by running
  441. rm -rf ./work6/temp123/jobTree
  442. They will eventually timeout on their own but it could take days.
  443.  
  444.  
  445.  
  446. **********************************************************************************
  447. **********************************************************************************
  448. ** ALERT **
  449. **********************************************************************************
  450. **********************************************************************************
  451. The only jobs that I have detected running for at least the past 15000s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  452. * wait a bit. Maybe it will resume
  453. * look for fatal errors in ./work6/temp123/cactus.log
  454. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  455. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  456. * if not it's probably time to abort.
  457. Note that you can (and probably should) kill any trailing ktserver jobs by running
  458. rm -rf ./work6/temp123/jobTree
  459. They will eventually timeout on their own but it could take days.
  460.  
  461.  
  462.  
  463. **********************************************************************************
  464. **********************************************************************************
  465. ** ALERT **
  466. **********************************************************************************
  467. **********************************************************************************
  468. The only jobs that I have detected running for at least the past 15600s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  469. * wait a bit. Maybe it will resume
  470. * look for fatal errors in ./work6/temp123/cactus.log
  471. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  472. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  473. * if not it's probably time to abort.
  474. Note that you can (and probably should) kill any trailing ktserver jobs by running
  475. rm -rf ./work6/temp123/jobTree
  476. They will eventually timeout on their own but it could take days.
  477.  
  478.  
  479.  
  480. **********************************************************************************
  481. **********************************************************************************
  482. ** ALERT **
  483. **********************************************************************************
  484. **********************************************************************************
  485. The only jobs that I have detected running for at least the past 16200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  486. * wait a bit. Maybe it will resume
  487. * look for fatal errors in ./work6/temp123/cactus.log
  488. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  489. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  490. * if not it's probably time to abort.
  491. Note that you can (and probably should) kill any trailing ktserver jobs by running
  492. rm -rf ./work6/temp123/jobTree
  493. They will eventually timeout on their own but it could take days.
  494.  
  495.  
  496.  
  497. **********************************************************************************
  498. **********************************************************************************
  499. ** ALERT **
  500. **********************************************************************************
  501. **********************************************************************************
  502. The only jobs that I have detected running for at least the past 16800s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  503. * wait a bit. Maybe it will resume
  504. * look for fatal errors in ./work6/temp123/cactus.log
  505. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  506. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  507. * if not it's probably time to abort.
  508. Note that you can (and probably should) kill any trailing ktserver jobs by running
  509. rm -rf ./work6/temp123/jobTree
  510. They will eventually timeout on their own but it could take days.
  511.  
  512.  
  513.  
  514. **********************************************************************************
  515. **********************************************************************************
  516. ** ALERT **
  517. **********************************************************************************
  518. **********************************************************************************
  519. The only jobs that I have detected running for at least the past 17400s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  520. * wait a bit. Maybe it will resume
  521. * look for fatal errors in ./work6/temp123/cactus.log
  522. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  523. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  524. * if not it's probably time to abort.
  525. Note that you can (and probably should) kill any trailing ktserver jobs by running
  526. rm -rf ./work6/temp123/jobTree
  527. They will eventually timeout on their own but it could take days.
  528.  
  529.  
  530.  
  531. **********************************************************************************
  532. **********************************************************************************
  533. ** ALERT **
  534. **********************************************************************************
  535. **********************************************************************************
  536. The only jobs that I have detected running for at least the past 18000s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  537. * wait a bit. Maybe it will resume
  538. * look for fatal errors in ./work6/temp123/cactus.log
  539. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  540. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  541. * if not it's probably time to abort.
  542. Note that you can (and probably should) kill any trailing ktserver jobs by running
  543. rm -rf ./work6/temp123/jobTree
  544. They will eventually timeout on their own but it could take days.
  545.  
  546.  
  547.  
  548. **********************************************************************************
  549. **********************************************************************************
  550. ** ALERT **
  551. **********************************************************************************
  552. **********************************************************************************
  553. The only jobs that I have detected running for at least the past 18600s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  554. * wait a bit. Maybe it will resume
  555. * look for fatal errors in ./work6/temp123/cactus.log
  556. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  557. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  558. * if not it's probably time to abort.
  559. Note that you can (and probably should) kill any trailing ktserver jobs by running
  560. rm -rf ./work6/temp123/jobTree
  561. They will eventually timeout on their own but it could take days.
  562.  
  563.  
  564.  
  565. **********************************************************************************
  566. **********************************************************************************
  567. ** ALERT **
  568. **********************************************************************************
  569. **********************************************************************************
  570. The only jobs that I have detected running for at least the past 19200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  571. * wait a bit. Maybe it will resume
  572. * look for fatal errors in ./work6/temp123/cactus.log
  573. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  574. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  575. * if not it's probably time to abort.
  576. Note that you can (and probably should) kill any trailing ktserver jobs by running
  577. rm -rf ./work6/temp123/jobTree
  578. They will eventually timeout on their own but it could take days.
  579.  
  580.  
  581.  
  582. **********************************************************************************
  583. **********************************************************************************
  584. ** ALERT **
  585. **********************************************************************************
  586. **********************************************************************************
  587. The only jobs that I have detected running for at least the past 19800s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  588. * wait a bit. Maybe it will resume
  589. * look for fatal errors in ./work6/temp123/cactus.log
  590. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  591. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  592. * if not it's probably time to abort.
  593. Note that you can (and probably should) kill any trailing ktserver jobs by running
  594. rm -rf ./work6/temp123/jobTree
  595. They will eventually timeout on their own but it could take days.
  596.  
  597.  
  598.  
  599. **********************************************************************************
  600. **********************************************************************************
  601. ** ALERT **
  602. **********************************************************************************
  603. **********************************************************************************
  604. The only jobs that I have detected running for at least the past 20400s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  605. * wait a bit. Maybe it will resume
  606. * look for fatal errors in ./work6/temp123/cactus.log
  607. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  608. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  609. * if not it's probably time to abort.
  610. Note that you can (and probably should) kill any trailing ktserver jobs by running
  611. rm -rf ./work6/temp123/jobTree
  612. They will eventually timeout on their own but it could take days.
  613.  
  614.  
  615.  
  616. **********************************************************************************
  617. **********************************************************************************
  618. ** ALERT **
  619. **********************************************************************************
  620. **********************************************************************************
  621. The only jobs that I have detected running for at least the past 21000s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  622. * wait a bit. Maybe it will resume
  623. * look for fatal errors in ./work6/temp123/cactus.log
  624. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  625. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  626. * if not it's probably time to abort.
  627. Note that you can (and probably should) kill any trailing ktserver jobs by running
  628. rm -rf ./work6/temp123/jobTree
  629. They will eventually timeout on their own but it could take days.
  630.  
  631.  
  632.  
  633. **********************************************************************************
  634. **********************************************************************************
  635. ** ALERT **
  636. **********************************************************************************
  637. **********************************************************************************
  638. The only jobs that I have detected running for at least the past 21600s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  639. * wait a bit. Maybe it will resume
  640. * look for fatal errors in ./work6/temp123/cactus.log
  641. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  642. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  643. * if not it's probably time to abort.
  644. Note that you can (and probably should) kill any trailing ktserver jobs by running
  645. rm -rf ./work6/temp123/jobTree
  646. They will eventually timeout on their own but it could take days.
  647.  
  648.  
  649.  
  650. **********************************************************************************
  651. **********************************************************************************
  652. ** ALERT **
  653. **********************************************************************************
  654. **********************************************************************************
  655. The only jobs that I have detected running for at least the past 22200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  656. * wait a bit. Maybe it will resume
  657. * look for fatal errors in ./work6/temp123/cactus.log
  658. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  659. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  660. * if not it's probably time to abort.
  661. Note that you can (and probably should) kill any trailing ktserver jobs by running
  662. rm -rf ./work6/temp123/jobTree
  663. They will eventually timeout on their own but it could take days.
  664.  
  665.  
  666.  
  667. **********************************************************************************
  668. **********************************************************************************
  669. ** ALERT **
  670. **********************************************************************************
  671. **********************************************************************************
  672. The only jobs that I have detected running for at least the past 22800s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  673. * wait a bit. Maybe it will resume
  674. * look for fatal errors in ./work6/temp123/cactus.log
  675. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  676. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  677. * if not it's probably time to abort.
  678. Note that you can (and probably should) kill any trailing ktserver jobs by running
  679. rm -rf ./work6/temp123/jobTree
  680. They will eventually timeout on their own but it could take days.
  681.  
  682.  
  683.  
  684. **********************************************************************************
  685. **********************************************************************************
  686. ** ALERT **
  687. **********************************************************************************
  688. **********************************************************************************
  689. The only jobs that I have detected running for at least the past 23400s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  690. * wait a bit. Maybe it will resume
  691. * look for fatal errors in ./work6/temp123/cactus.log
  692. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  693. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  694. * if not it's probably time to abort.
  695. Note that you can (and probably should) kill any trailing ktserver jobs by running
  696. rm -rf ./work6/temp123/jobTree
  697. They will eventually timeout on their own but it could take days.
  698.  
  699.  
  700.  
  701. **********************************************************************************
  702. **********************************************************************************
  703. ** ALERT **
  704. **********************************************************************************
  705. **********************************************************************************
  706. The only jobs that I have detected running for at least the past 24000s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  707. * wait a bit. Maybe it will resume
  708. * look for fatal errors in ./work6/temp123/cactus.log
  709. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  710. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  711. * if not it's probably time to abort.
  712. Note that you can (and probably should) kill any trailing ktserver jobs by running
  713. rm -rf ./work6/temp123/jobTree
  714. They will eventually timeout on their own but it could take days.
  715.  
  716.  
  717.  
  718. **********************************************************************************
  719. **********************************************************************************
  720. ** ALERT **
  721. **********************************************************************************
  722. **********************************************************************************
  723. The only jobs that I have detected running for at least the past 24600s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  724. * wait a bit. Maybe it will resume
  725. * look for fatal errors in ./work6/temp123/cactus.log
  726. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  727. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  728. * if not it's probably time to abort.
  729. Note that you can (and probably should) kill any trailing ktserver jobs by running
  730. rm -rf ./work6/temp123/jobTree
  731. They will eventually timeout on their own but it could take days.
  732.  
  733.  
  734.  
  735. **********************************************************************************
  736. **********************************************************************************
  737. ** ALERT **
  738. **********************************************************************************
  739. **********************************************************************************
  740. The only jobs that I have detected running for at least the past 25200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  741. * wait a bit. Maybe it will resume
  742. * look for fatal errors in ./work6/temp123/cactus.log
  743. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  744. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  745. * if not it's probably time to abort.
  746. Note that you can (and probably should) kill any trailing ktserver jobs by running
  747. rm -rf ./work6/temp123/jobTree
  748. They will eventually timeout on their own but it could take days.

[The alert above repeats with identical text every 600 seconds, only the reported idle time changing: 25800s, 26400s, ..., 55200s (50 repetitions in all). The next repetition, at 55800s, follows below.]

  1602. **********************************************************************************
  1603. **********************************************************************************
  1604. ** ALERT **
  1605. **********************************************************************************
  1606. **********************************************************************************
  1607. The only jobs that I have detected running for at least the past 55800s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  1608. * wait a bit. Maybe it will resume
  1609. * look for fatal errors in ./work6/temp123/cactus.log
  1610. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  1611. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  1612. * if not it's probably time to abort.
  1613. Note that you can (and probably should) kill any trailing ktserver jobs by running
  1614. rm -rf ./work6/temp123/jobTree
  1615. They will eventually timeout on their own but it could take days.
  1616.  
  1617.  
  1618.  
  1619. **********************************************************************************
  1620. **********************************************************************************
  1621. ** ALERT **
  1622. **********************************************************************************
  1623. **********************************************************************************
  1624. The only jobs that I have detected running for at least the past 56400s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  1625. * wait a bit. Maybe it will resume
  1626. * look for fatal errors in ./work6/temp123/cactus.log
  1627. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  1628. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  1629. * if not it's probably time to abort.
  1630. Note that you can (and probably should) kill any trailing ktserver jobs by running
  1631. rm -rf ./work6/temp123/jobTree
  1632. They will eventually timeout on their own but it could take days.
  1633.  
  1634.  
  1635.  
  1636. **********************************************************************************
  1637. **********************************************************************************
  1638. ** ALERT **
  1639. **********************************************************************************
  1640. **********************************************************************************
  1641. The only jobs that I have detected running for at least the past 57000s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  1642. * wait a bit. Maybe it will resume
  1643. * look for fatal errors in ./work6/temp123/cactus.log
  1644. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  1645. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  1646. * if not it's probably time to abort.
  1647. Note that you can (and probably should) kill any trailing ktserver jobs by running
  1648. rm -rf ./work6/temp123/jobTree
  1649. They will eventually timeout on their own but it could take days.
  1650.  
  1651.  
  1652.  
  1653. **********************************************************************************
  1654. **********************************************************************************
  1655. ** ALERT **
  1656. **********************************************************************************
  1657. **********************************************************************************
  1658. The only jobs that I have detected running for at least the past 57600s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  1659. * wait a bit. Maybe it will resume
  1660. * look for fatal errors in ./work6/temp123/cactus.log
  1661. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  1662. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  1663. * if not it's probably time to abort.
  1664. Note that you can (and probably should) kill any trailing ktserver jobs by running
  1665. rm -rf ./work6/temp123/jobTree
  1666. They will eventually timeout on their own but it could take days.
  1667.  
  1668.  
  1669.  
  1670. **********************************************************************************
  1671. **********************************************************************************
  1672. ** ALERT **
  1673. **********************************************************************************
  1674. **********************************************************************************
  1675. The only jobs that I have detected running for at least the past 58200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  1676. * wait a bit. Maybe it will resume
  1677. * look for fatal errors in ./work6/temp123/cactus.log
  1678. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  1679. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  1680. * if not it's probably time to abort.
  1681. Note that you can (and probably should) kill any trailing ktserver jobs by running
  1682. rm -rf ./work6/temp123/jobTree
  1683. They will eventually timeout on their own but it could take days.
  1684.  
  1685.  
  1686.  
  1687. **********************************************************************************
  1688. **********************************************************************************
  1689. ** ALERT **
  1690. **********************************************************************************
  1691. **********************************************************************************
  1692. The only jobs that I have detected running for at least the past 58800s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  1693. * wait a bit. Maybe it will resume
  1694. * look for fatal errors in ./work6/temp123/cactus.log
  1695. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  1696. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  1697. * if not it's probably time to abort.
  1698. Note that you can (and probably should) kill any trailing ktserver jobs by running
  1699. rm -rf ./work6/temp123/jobTree
  1700. They will eventually timeout on their own but it could take days.
  1701.  
  1702.  
  1703.  
  1704. **********************************************************************************
  1705. **********************************************************************************
  1706. ** ALERT **
  1707. **********************************************************************************
  1708. **********************************************************************************
  1709. The only jobs that I have detected running for at least the past 59400s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  1710. * wait a bit. Maybe it will resume
  1711. * look for fatal errors in ./work6/temp123/cactus.log
  1712. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  1713. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  1714. * if not it's probably time to abort.
  1715. Note that you can (and probably should) kill any trailing ktserver jobs by running
  1716. rm -rf ./work6/temp123/jobTree
  1717. They will eventually timeout on their own but it could take days.
  1718.  
  1719.  
  1720.  
  1721. **********************************************************************************
  1722. **********************************************************************************
  1723. ** ALERT **
  1724. **********************************************************************************
  1725. **********************************************************************************
  1726. The only jobs that I have detected running for at least the past 60000s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  1727. * wait a bit. Maybe it will resume
  1728. * look for fatal errors in ./work6/temp123/cactus.log
  1729. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  1730. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  1731. * if not it's probably time to abort.
  1732. Note that you can (and probably should) kill any trailing ktserver jobs by running
  1733. rm -rf ./work6/temp123/jobTree
  1734. They will eventually timeout on their own but it could take days.
  1735.  
  1736.  
  1737.  
  1738. **********************************************************************************
  1739. **********************************************************************************
  1740. ** ALERT **
  1741. **********************************************************************************
  1742. **********************************************************************************
  1743. The only jobs that I have detected running for at least the past 60600s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  1744. * wait a bit. Maybe it will resume
  1745. * look for fatal errors in ./work6/temp123/cactus.log
  1746. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  1747. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  1748. * if not it's probably time to abort.
  1749. Note that you can (and probably should) kill any trailing ktserver jobs by running
  1750. rm -rf ./work6/temp123/jobTree
  1751. They will eventually timeout on their own but it could take days.
  1752.  
  1753.  
  1754.  
  1755. **********************************************************************************
  1756. **********************************************************************************
  1757. ** ALERT **
  1758. **********************************************************************************
  1759. **********************************************************************************
  1760. The only jobs that I have detected running for at least the past 61200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  1761. * wait a bit. Maybe it will resume
  1762. * look for fatal errors in ./work6/temp123/cactus.log
  1763. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  1764. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  1765. * if not it's probably time to abort.
  1766. Note that you can (and probably should) kill any trailing ktserver jobs by running
  1767. rm -rf ./work6/temp123/jobTree
  1768. They will eventually timeout on their own but it could take days.
  1769.  
  1770.  
  1771.  
  1772. **********************************************************************************
  1773. **********************************************************************************
  1774. ** ALERT **
  1775. **********************************************************************************
  1776. **********************************************************************************
  1777. The only jobs that I have detected running for at least the past 61800s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  1778. * wait a bit. Maybe it will resume
  1779. * look for fatal errors in ./work6/temp123/cactus.log
  1780. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  1781. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  1782. * if not it's probably time to abort.
  1783. Note that you can (and probably should) kill any trailing ktserver jobs by running
  1784. rm -rf ./work6/temp123/jobTree
  1785. They will eventually timeout on their own but it could take days.
  1786.  
  1787.  
  1788.  
  1789. **********************************************************************************
  1790. **********************************************************************************
  1791. ** ALERT **
  1792. **********************************************************************************
  1793. **********************************************************************************
  1794. The only jobs that I have detected running for at least the past 62400s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  1795. * wait a bit. Maybe it will resume
  1796. * look for fatal errors in ./work6/temp123/cactus.log
  1797. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  1798. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  1799. * if not it's probably time to abort.
  1800. Note that you can (and probably should) kill any trailing ktserver jobs by running
  1801. rm -rf ./work6/temp123/jobTree
  1802. They will eventually timeout on their own but it could take days.
  1803.  
  1804.  
  1805.  
  1806. **********************************************************************************
  1807. **********************************************************************************
  1808. ** ALERT **
  1809. **********************************************************************************
  1810. **********************************************************************************
  1811. The only jobs that I have detected running for at least the past 63000s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  1812. * wait a bit. Maybe it will resume
  1813. * look for fatal errors in ./work6/temp123/cactus.log
  1814. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  1815. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  1816. * if not it's probably time to abort.
  1817. Note that you can (and probably should) kill any trailing ktserver jobs by running
  1818. rm -rf ./work6/temp123/jobTree
  1819. They will eventually timeout on their own but it could take days.
  1820.  
  1821.  
  1822.  
  1823. **********************************************************************************
  1824. **********************************************************************************
  1825. ** ALERT **
  1826. **********************************************************************************
  1827. **********************************************************************************
  1828. The only jobs that I have detected running for at least the past 63600s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  1829. * wait a bit. Maybe it will resume
  1830. * look for fatal errors in ./work6/temp123/cactus.log
  1831. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  1832. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  1833. * if not it's probably time to abort.
  1834. Note that you can (and probably should) kill any trailing ktserver jobs by running
  1835. rm -rf ./work6/temp123/jobTree
  1836. They will eventually timeout on their own but it could take days.
  1837.  
  1838.  
  1839.  
  1840. **********************************************************************************
  1841. **********************************************************************************
  1842. ** ALERT **
  1843. **********************************************************************************
  1844. **********************************************************************************
  1845. The only jobs that I have detected running for at least the past 64200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
  1846. * wait a bit. Maybe it will resume
  1847. * look for fatal errors in ./work6/temp123/cactus.log
  1848. * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
  1849. * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
  1850. * if not it's probably time to abort.
  1851. Note that you can (and probably should) kill any trailing ktserver jobs by running
  1852. rm -rf ./work6/temp123/jobTree
  1853. They will eventually timeout on their own but it could take days.
  1854.  
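The checks the alert recommends can be scripted. Below is a minimal sketch in Python, assuming the work directory is the ./work6/temp123 shown above and that the jobTreeStatus tool bundled with jobTree is on the PATH; the "Fatal"/"Error" string match is only a rough filter for the log scan, and the directory removal (the equivalent of the alert's rm -rf ./work6/temp123/jobTree) is left commented out because it is the irreversible abort step.

# Sketch of the alert's suggested checks (assumes jobTreeStatus is on PATH and
# that WORK points at the work directory named in the alert text).
import os
import shutil          # used only by the commented-out step 3 below
import subprocess

WORK = "./work6/temp123"

# 1. Look for fatal errors in the Progressive Cactus log.
with open(os.path.join(WORK, "cactus.log")) as log:
    for line in log:
        if "Fatal" in line or "Error" in line:   # rough filter, not exhaustive
            print(line.rstrip())

# 2. Ask jobTree which jobs it still considers running or failed.
subprocess.call(["jobTreeStatus",
                 "--jobTree", os.path.join(WORK, "jobTree"),
                 "--verbose"])

# 3. Only once you have decided to abort: removing the jobTree directory is what
#    makes the trailing ktserver jobs exit (same effect as the alert's
#    rm -rf ./work6/temp123/jobTree).
# shutil.rmtree(os.path.join(WORK, "jobTree"))
 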
Traceback (most recent call last):
  File "/home/iminkin/Program/progressiveCactus/submodules/cactus/bin/cactus_progressive.py", line 220, in <module>
    main()
  File "/home/iminkin/Program/progressiveCactus/submodules/cactus/progressive/cactus_progressive.py", line 216, in main
    Stack(baseTarget).startJobTree(options)
  File "/home/iminkin/Program/progressiveCactus/submodules/jobTree/scriptTree/stack.py", line 95, in startJobTree
    return mainLoop(config, batchSystem)
  File "/home/iminkin/Program/progressiveCactus/submodules/jobTree/src/master.py", line 440, in mainLoop
    updatedJob = batchSystem.getUpdatedJob(10) #Asks the batch system what jobs have been completed.
  File "/home/iminkin/Program/progressiveCactus/submodules/jobTree/batchSystems/singleMachine.py", line 121, in getUpdatedJob
    i = self.getFromQueueSafely(self.outputQueue, maxWait)
  File "/home/iminkin/Program/progressiveCactus/submodules/jobTree/batchSystems/abstractBatchSystem.py", line 98, in getFromQueueSafely
    return queue.get(timeout=maxWait)
  File "/usr/lib/python2.7/multiprocessing/queues.py", line 131, in get
    if timeout < 0 or not self._poll(timeout):
KeyboardInterrupt
 
[The three jobTree worker processes (Process-2, Process-3 and Process-4) printed their own tracebacks interleaved line-by-line with the one above. Each follows the same path: _bootstrap (/usr/lib/python2.7/multiprocessing/process.py, line 258) -> run (line 114) -> worker (/home/iminkin/Program/progressiveCactus/submodules/jobTree/batchSystems/singleMachine.py, line 45, at "args = inputQueue.get()"). Two of them were waiting in self._rlock.acquire() (/usr/lib/python2.7/multiprocessing/queues.py, line 115, in get) and one in res = self._recv() (line 117) when the interrupt arrived; all three ended with KeyboardInterrupt.]
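 
This traceback shape is what Ctrl-C (SIGINT) looks like when a jobTree master and its single-machine workers are all blocked on multiprocessing queues: the signal reaches every process in the foreground process group, each one raises KeyboardInterrupt out of the blocking queue call it was sitting in, and the tracebacks interleave on stderr. A minimal, self-contained sketch of that situation (illustrative only, not jobTree code; the queue setup just mirrors the frames above):

# Minimal reproduction of the traceback pattern above (illustrative, not jobTree
# code): a parent polls an output queue with a timeout while three workers block
# on an input queue. Pressing Ctrl-C sends SIGINT to the whole process group, so
# every process raises KeyboardInterrupt from inside multiprocessing's queue code
# and their tracebacks interleave on stderr.
import multiprocessing

try:
    from Queue import Empty      # Python 2, the interpreter used in the log
except ImportError:
    from queue import Empty      # Python 3

def worker(inputQueue):
    # Mirrors the worker loop in singleMachine.py: block until work arrives.
    while True:
        args = inputQueue.get()              # KeyboardInterrupt surfaces here
        if args is None:
            return

if __name__ == "__main__":
    inputQueue = multiprocessing.Queue()
    outputQueue = multiprocessing.Queue()
    workers = [multiprocessing.Process(target=worker, args=(inputQueue,))
               for _ in range(3)]
    for p in workers:
        p.start()
    try:
        # Mirrors the master loop: poll for finished jobs every 10 seconds.
        while True:
            try:
                outputQueue.get(timeout=10)  # KeyboardInterrupt surfaces here
            except Empty:
                pass                         # nothing finished yet; keep waiting
    except KeyboardInterrupt:
        for p in workers:                    # clean up instead of leaving orphans
            p.terminate()
            p.join()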