- 2013-10-23 19:44:02.219720: Beginning Progressive Cactus Alignment
- Got message from job at time: 1382528642.58 : Starting preprocessor phase target at 1382528642.57 seconds
- Got message from job at time: 1382528656.02 : Blocking on ktserver <kyoto_tycoon database_dir="/home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/Anc14/Anc14/Anc14_DB" in_memory="1" port="1984" snapshot="0" />
- with killPath /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/gTD15/tmp_3aXMfbyAFP/tmp_dOGDhjudGD_kill.txt
- Got message from job at time: 1382528694.84 : Starting caf phase target with index 0 at 1382528656.14 seconds (recursing = 1)
- Got message from job at time: 1382528694.84 : Pinch graph component with 1915 nodes and 2983 edges is being split up by breaking 826 edges to reduce size to less than 489 max, but found 0 pointless edges
- Got message from job at time: 1382528694.84 : Attaching the sequence to the cactus root 5120029826366796271, header SE007 with length 1698318 and 830977 total bases aligned and 0 bases aligned to other chromosome threads
- Got message from job at time: 1382528696.62 : Starting bar phase target with index 0 at 1382528694.84 seconds (recursing = 1)
- Got message from job at time: 1382529508.49 : Starting avg phase target with index 0 at 1382529508.48 seconds (recursing = 0)
- Got message from job at time: 1382529508.49 : Starting reference phase target with index 0 at 1382529508.48 seconds (recursing = 1)
- Got message from job at time: 1382529518.59 : Blocking on ktserver <kyoto_tycoon database_dir="/home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/Anc14/Anc14/Anc14_DB_tempSecondaryDatabaseDir_0.851997632972" in_memory="1" port="2084" snapshot="0" />
- with killPath /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/gTD1/tmp_U9dtRI47PR/tmp_K7Bdb9V47I_kill.txt
- Got message from job at time: 1382529559.93 : Launching ktserver <kyoto_tycoon database_dir="/home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/Anc14/Anc14/Anc14_DB_tempSecondaryDatabaseDir_0.851997632972" in_memory="1" port="2084" snapshot="0" />
- with killPath /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/gTD1/tmp_U9dtRI47PR/tmp_K7Bdb9V47I_kill.txt
- Got message from job at time: 1382529560.62 : Killing ktserver <kyoto_tycoon database_dir="/home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/Anc14/Anc14/Anc14_DB_tempSecondaryDatabaseDir_0.851997632972" host="iminkin-VirtualBox" in_memory="1" port="2084" snapshot="0" />
- with killPath /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/gTD1/tmp_U9dtRI47PR/tmp_K7Bdb9V47I_kill.txt
- Got message from job at time: 1382529560.62 : Report for iminkin-VirtualBox:2084:
- cnt_get: 28605
- cnt_get_misses: 0
- cnt_misc: 0
- cnt_remove: 28604
- cnt_remove_misses: 0
- cnt_script: 0
- cnt_set: 28605
- cnt_set_misses: 0
- conf_kc_features: (atomic)(zlib)
- conf_kc_version: 1.2.76 (16.13)
- conf_kt_features: (epoll)
- conf_kt_version: 0.9.56 (2.19)
- conf_os_name: Linux
- db_0: count=1 size=269021116 path=:
- db_total_count: 1
- db_total_size: 269021116
- serv_conn_count: 1
- serv_current_time: 1382529556.100855
- serv_proc_id: 18557
- serv_running_term: 47.510640
- serv_task_count: 0
- serv_thread_count: 64
- sys_mem_cached: 33427456
- sys_mem_free: 1179172864
- sys_mem_peak: 1446133760
- sys_mem_rss: 69787648
- sys_mem_size: 1379700736
- sys_mem_total: 1714315264
- sys_ru_stime: 0.128000
- sys_ru_utime: 0.188000
- Contents of /home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/Anc14/Anc14/Anc14_DB_tempSecondaryDatabaseDir_0.851997632972:
- total 4.0K
- -rw-rw-r-- 1 iminkin iminkin 457 Oct 23 19:58 ktout.log
- Got message from job at time: 1382529565.84 : Starting reference phase target with index 0 at 1382529560.63 seconds (recursing = 1)
- Got message from job at time: 1382529565.84 : Starting Reference Extract Phase
- Got message from job at time: 1382529565.84 : Starting check phase target with index 0 at 1382529565.83 seconds (recursing = 0)
- Got message from job at time: 1382529575.87 : Blocking on ktserver <kyoto_tycoon database_dir="/home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/Anc14/Anc14/Anc14_DB_tempSecondaryDatabaseDir_0.851997632972" in_memory="1" port="2084" snapshot="0" />
- with killPath /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/gTD1/tmp_U9dtRI47PR/tmp_TalSPAGl6l_kill.txt
- Got message from job at time: 1382529585.93 : Launching ktserver <kyoto_tycoon database_dir="/home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/Anc14/Anc14/Anc14_DB_tempSecondaryDatabaseDir_0.851997632972" in_memory="1" port="2084" snapshot="0" />
- with killPath /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/gTD1/tmp_U9dtRI47PR/tmp_TalSPAGl6l_kill.txt
- Got message from job at time: 1382529586.17 : Killing ktserver <kyoto_tycoon database_dir="/home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/Anc14/Anc14/Anc14_DB_tempSecondaryDatabaseDir_0.851997632972" host="iminkin-VirtualBox" in_memory="1" port="2084" snapshot="0" />
- with killPath /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/gTD1/tmp_U9dtRI47PR/tmp_TalSPAGl6l_kill.txt
- Got message from job at time: 1382529586.17 : Report for iminkin-VirtualBox:2084:
- cnt_get: 130576
- cnt_get_misses: 0
- cnt_misc: 0
- cnt_remove: 130572
- cnt_remove_misses: 0
- cnt_script: 0
- cnt_set: 130576
- cnt_set_misses: 0
- conf_kc_features: (atomic)(zlib)
- conf_kc_version: 1.2.76 (16.13)
- conf_kt_features: (epoll)
- conf_kt_version: 0.9.56 (2.19)
- conf_os_name: Linux
- db_0: count=4 size=271119180 path=:
- db_total_count: 4
- db_total_size: 271119180
- serv_conn_count: 1
- serv_current_time: 1382529583.019693
- serv_proc_id: 18959
- serv_running_term: 17.165976
- serv_task_count: 0
- serv_thread_count: 64
- sys_mem_cached: 49475584
- sys_mem_free: 1016328192
- sys_mem_peak: 1446133760
- sys_mem_rss: 207855616
- sys_mem_size: 1379729408
- sys_mem_total: 1714315264
- sys_ru_stime: 0.176000
- sys_ru_utime: 0.448000
- Contents of /home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/Anc14/Anc14/Anc14_DB_tempSecondaryDatabaseDir_0.851997632972:
- total 4.0K
- -rw-rw-r-- 1 iminkin iminkin 457 Oct 23 19:59 ktout.log
- The job seems to have left a log file, indicating failure: /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/t0/job
- Reporting file: /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/t0/log.txt
- log.txt: Traceback (most recent call last):
- log.txt: File "/home/iminkin/Program/progressiveCactus/submodules/jobTree/src/jobTreeSlave.py", line 223, in main
- log.txt: defaultMemory=defaultMemory, defaultCpu=defaultCpu, depth=depth)
- log.txt: File "/home/iminkin/Program/progressiveCactus/submodules/jobTree/scriptTree/stack.py", line 153, in execute
- log.txt: self.target.run()
- log.txt: File "/home/iminkin/Program/progressiveCactus/submodules/cactus/pipeline/ktserverJobTree.py", line 134, in run
- log.txt: killPingInterval=self.runTimestep)
- log.txt: File "/home/iminkin/Program/progressiveCactus/submodules/cactus/pipeline/ktserverControl.py", line 129, in runKtserver
- log.txt: raise e
- log.txt: RuntimeError: Ktserver already found running with log /home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/Anc14/Anc14/Anc14_DB_tempSecondaryDatabaseDir_0.851997632972/ktout.log
- log.txt: Exiting the slave because of a failed job on host iminkin-VirtualBox
- log.txt: Due to failure we are reducing the remaining retry count of job /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/t0/job to 0
- log.txt: We have set the default memory of the failed job to 4294967296 bytes
- Job: /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/t0/job is completely failed
- The job seems to have left a log file, indicating failure: /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/t1/job
- Reporting file: /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/t1/log.txt
- log.txt: Traceback (most recent call last):
- log.txt: File "/home/iminkin/Program/progressiveCactus/submodules/jobTree/src/jobTreeSlave.py", line 223, in main
- log.txt: defaultMemory=defaultMemory, defaultCpu=defaultCpu, depth=depth)
- log.txt: File "/home/iminkin/Program/progressiveCactus/submodules/jobTree/scriptTree/stack.py", line 153, in execute
- log.txt: self.target.run()
- log.txt: File "/home/iminkin/Program/progressiveCactus/submodules/cactus/pipeline/ktserverJobTree.py", line 167, in run
- log.txt: self.blockTimeout, self.blockTimestep)
- log.txt: File "/home/iminkin/Program/progressiveCactus/submodules/cactus/pipeline/ktserverControl.py", line 223, in blockUntilKtserverIsRunnning
- log.txt: killSwitchPath):
- log.txt: File "/home/iminkin/Program/progressiveCactus/submodules/cactus/pipeline/ktserverControl.py", line 284, in __isKtServerRunning
- log.txt: killSwitchPath)
- log.txt: File "/home/iminkin/Program/progressiveCactus/submodules/cactus/pipeline/ktserverControl.py", line 204, in __readStatusFromSwitchFile
- log.txt: raise RuntimeError("Ktserver polling detected fatal error")
- log.txt: RuntimeError: Ktserver polling detected fatal error
- log.txt: Exiting the slave because of a failed job on host iminkin-VirtualBox
- log.txt: Due to failure we are reducing the remaining retry count of job /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/t1/job to 0
- log.txt: We have set the default memory of the failed job to 4294967296 bytes
- Job: /home/iminkin/Program/progressiveCactus/work6/temp123/jobTree/jobs/t1/t1/job is completely failed
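The two failure reports above point at the same stale state: the first job refuses to start because a ktserver appears to be running already (its ktout.log is still present in the temporary secondary database directory), and the second job's polling of that server then reports a fatal error. Below is a minimal diagnostic sketch, not part of Progressive Cactus, for checking that stale state by hand before retrying; the directory and port are copied from the log above and should be adjusted for other runs.

#!/usr/bin/env python
# Hedged sketch: check for a leftover ktserver log and for anything still
# listening on the port Progressive Cactus wants to reuse. The path and port
# are taken from the log above; they are illustrative, not canonical.
import os
import socket

DB_DIR = ("/home/iminkin/Program/progressiveCactus/work6/temp123/progressiveAlignment/"
          "Anc14/Anc14/Anc14_DB_tempSecondaryDatabaseDir_0.851997632972")
PORT = 2084

stale_log = os.path.join(DB_DIR, "ktout.log")
if os.path.exists(stale_log):
    print("Stale ktserver log found: %s" % stale_log)

# A successful connection means some server (possibly orphaned) still holds the port.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(2)
try:
    sock.connect(("localhost", PORT))
    print("Something is still listening on port %d" % PORT)
except socket.error:
    print("Port %d is free" % PORT)
finally:
    sock.close()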
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 4200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 4800s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 5400s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 6000s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 6600s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 7200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 7800s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 8400s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 9000s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 9600s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 10200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 10800s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 11400s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 12000s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 12600s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 13200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 13800s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 14400s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 15000s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 15600s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 16200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 16800s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 17400s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 18000s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 18600s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 19200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 19800s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 20400s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 21000s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 21600s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 22200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 22800s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 23400s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 24000s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 24600s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 25200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 25800s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 26400s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 27000s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 27600s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- [55 further copies of the same ALERT block omitted: the identical warning repeated every 600 seconds as the reported idle time grew from 28200s to 60600s, each time still reporting 1 running ktserver and 2 failed jobs]
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 61200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 61800s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 62400s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 63000s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 63600s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
- **********************************************************************************
- **********************************************************************************
- ** ALERT **
- **********************************************************************************
- **********************************************************************************
- The only jobs that I have detected running for at least the past 64200s are 1 ktservers. Furthermore, there appears to have been 2 failed jobs. It is likely that Progressive Cactus is in a deadlock state and will not finish until the servers or your batch system time out. Suggestions:
- * wait a bit. Maybe it will resume
- * look for fatal errors in ./work6/temp123/cactus.log
- * jobTreeStatus --jobTree ./work6/temp123/jobTree --verbose
- * check your resource manager to see if any more jobs are queued. maybe your cluster is just busy...
- * if not it's probably time to abort.
- Note that you can (and probably should) kill any trailing ktserver jobs by running
- rm -rf ./work6/temp123/jobTree
- They will eventually timeout on their own but it could take days.
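The alert's checklist amounts to a small triage routine: look for fatal errors in cactus.log, ask jobTree what state its jobs are in, and, only if the run is truly wedged, delete the jobTree directory, which (per the alert) also kills the trailing ktserver. Below is a minimal Python sketch of that triage, assuming the paths from this run and that jobTreeStatus (installed with progressiveCactus) is on the PATH; it is an illustration of the suggestions above, not part of the Progressive Cactus code.

    import os
    import shutil
    import subprocess

    # Paths taken from the log above; adjust for your own run.
    WORK_DIR = "./work6/temp123"
    CACTUS_LOG = os.path.join(WORK_DIR, "cactus.log")
    JOBTREE_DIR = os.path.join(WORK_DIR, "jobTree")

    def scan_log_for_errors(path=CACTUS_LOG):
        """Print lines of cactus.log that mention errors (the 'look for fatal errors' suggestion)."""
        with open(path) as log:
            for line in log:
                if "error" in line.lower() or "fatal" in line.lower():
                    print(line.rstrip())

    def show_jobtree_status(job_tree=JOBTREE_DIR):
        """Run the command the alert suggests (assumes jobTreeStatus is on the PATH)."""
        subprocess.call(["jobTreeStatus", "--jobTree", job_tree, "--verbose"])

    def abort_run(job_tree=JOBTREE_DIR):
        """Equivalent of `rm -rf ./work6/temp123/jobTree`; per the alert, this also
        stops trailing ktserver jobs instead of waiting days for them to time out."""
        shutil.rmtree(job_tree, ignore_errors=True)

    if __name__ == "__main__":
        scan_log_for_errors()
        show_jobtree_status()
        # Uncomment only once you are sure the run is deadlocked:
        # abort_run()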
- Traceback (most recent call last):
- File "/home/iminkin/Program/progressiveCactus/submodules/cactus/bin/cactus_progressive.py", line 220, in <module>
- main()
- File "/home/iminkin/Program/progressiveCactus/submodules/cactus/progressive/cactus_progressive.py", line 216, in main
- Stack(baseTarget).startJobTree(options)
- File "/home/iminkin/Program/progressiveCactus/submodules/jobTree/scriptTree/stack.py", line 95, in startJobTree
- return mainLoop(config, batchSystem)
- File "/home/iminkin/Program/progressiveCactus/submodules/jobTree/src/master.py", line 440, in mainLoop
- updatedJob = batchSystem.getUpdatedJob(10) #Asks the batch system what jobs have been completed.
- File "/home/iminkin/Program/progressiveCactus/submodules/jobTree/batchSystems/singleMachine.py", line 121, in getUpdatedJob
- i = self.getFromQueueSafely(self.outputQueue, maxWait)
- File "/home/iminkin/Program/progressiveCactus/submodules/jobTree/batchSystems/abstractBatchSystem.py", line 98, in getFromQueueSafely
- return queue.get(timeout=maxWait)
- File "/usr/lib/python2.7/multiprocessing/queues.py", line 131, in get
- if timeout < 0 or not self._poll(timeout):
- KeyboardInterrupt
- Process Process-2:
- Traceback (most recent call last):
- File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
- self.run()
- File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
- self._target(*self._args, **self._kwargs)
- File "/home/iminkin/Program/progressiveCactus/submodules/jobTree/batchSystems/singleMachine.py", line 45, in worker
- args = inputQueue.get()
- File "/usr/lib/python2.7/multiprocessing/queues.py", line 115, in get
- self._rlock.acquire()
- KeyboardInterrupt
- Process Process-3:
- Traceback (most recent call last):
- File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
- self.run()
- File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
- self._target(*self._args, **self._kwargs)
- File "/home/iminkin/Program/progressiveCactus/submodules/jobTree/batchSystems/singleMachine.py", line 45, in worker
- args = inputQueue.get()
- File "/usr/lib/python2.7/multiprocessing/queues.py", line 117, in get
- res = self._recv()
- KeyboardInterrupt
- Process Process-4:
- Traceback (most recent call last):
- File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
- self.run()
- File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
- self._target(*self._args, **self._kwargs)
- File "/home/iminkin/Program/progressiveCactus/submodules/jobTree/batchSystems/singleMachine.py", line 45, in worker
- args = inputQueue.get()
- File "/usr/lib/python2.7/multiprocessing/queues.py", line 115, in get
- self._rlock.acquire()
- KeyboardInterrupt
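The tracebacks above show where each process was parked when Ctrl-C arrived: the master was inside getFromQueueSafely, a Queue.get with a timeout, while each of the three workers was blocked indefinitely on inputQueue.get() waiting for work that never came. The sketch below is not the jobTree source, only a minimal reconstruction of that blocking-queue pattern under assumed names (worker, get_from_queue_safely); pressing Ctrl-C while it runs yields the same kind of interleaved KeyboardInterrupt traces, because the signal reaches every process in the group while all of them are blocked inside multiprocessing/queues.py.

    import multiprocessing
    try:
        import Queue as queue  # Python 2, as in the log above
    except ImportError:
        import queue           # Python 3

    def worker(input_queue):
        # Like the singleMachine.py worker loop: block forever waiting for work,
        # so a Ctrl-C lands inside Queue.get(), as in the worker tracebacks above.
        while True:
            args = input_queue.get()      # blocks in multiprocessing/queues.py get()
            if args is None:              # hypothetical shutdown sentinel
                break
            print("would run job: %r" % (args,))

    def get_from_queue_safely(output_queue, max_wait):
        # Analogue of getFromQueueSafely(): poll with a timeout so the master
        # wakes up periodically instead of blocking forever.
        try:
            return output_queue.get(timeout=max_wait)
        except queue.Empty:
            return None

    if __name__ == "__main__":
        input_queue = multiprocessing.Queue()
        output_queue = multiprocessing.Queue()
        procs = [multiprocessing.Process(target=worker, args=(input_queue,))
                 for _ in range(3)]
        for p in procs:
            p.start()
        # Master loop step: ask for finished jobs, ten seconds at a time.
        finished = get_from_queue_safely(output_queue, max_wait=10)
        print("updated job: %r" % (finished,))
        for _ in procs:
            input_queue.put(None)         # tell the workers to exit
        for p in procs:
            p.join()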