- [root@cephcluster2 ~]# sudo systemctl status ceph\*.service ceph\*.target
- ● ceph-radosgw.target - ceph target allowing to start/stop all ceph-radosgw@.service instances at once
- Loaded: loaded (/usr/lib/systemd/system/ceph-radosgw.target; enabled; vendor preset: enabled)
- Active: active since Tue 2017-01-10 00:40:15 AEDT; 25min ago
- ● ceph-mon@cephcluster2.service - Ceph cluster monitor daemon
- Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; disabled; vendor preset: disabled)
- Active: active (running) since Tue 2017-01-10 00:51:05 AEDT; 14min ago
- Main PID: 7098 (ceph-mon)
- CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@cephcluster2.service
- └─7098 /usr/bin/ceph-mon -f --cluster ceph --id cephcluster2 --setuser ceph --setgroup ceph
- Jan 10 00:57:10 cephcluster2.local ceph-mon[7098]: 2017-01-10 00:57:10.655812 7f7228fec700 -1 mon.cephcluster2@0(leader).mds e2 Missing health data for MDS 4254
- Jan 10 00:58:10 cephcluster2.local ceph-mon[7098]: 2017-01-10 00:58:10.656169 7f7228fec700 -1 mon.cephcluster2@0(leader).mds e2 Missing health data for MDS 4254
- Jan 10 00:59:10 cephcluster2.local ceph-mon[7098]: 2017-01-10 00:59:10.656557 7f7228fec700 -1 mon.cephcluster2@0(leader).mds e2 Missing health data for MDS 4254
- Jan 10 01:00:00 cephcluster2.local ceph-mon[7098]: 2017-01-10 01:00:00.000308 7f7228fec700 -1 mon.cephcluster2@0(leader).mds e2 Missing health data for MDS 4254
- Jan 10 01:00:10 cephcluster2.local ceph-mon[7098]: 2017-01-10 01:00:10.656893 7f7228fec700 -1 mon.cephcluster2@0(leader).mds e2 Missing health data for MDS 4254
- Jan 10 01:01:10 cephcluster2.local ceph-mon[7098]: 2017-01-10 01:01:10.657309 7f7228fec700 -1 mon.cephcluster2@0(leader).mds e2 Missing health data for MDS 4254
- Jan 10 01:02:10 cephcluster2.local ceph-mon[7098]: 2017-01-10 01:02:10.657660 7f7228fec700 -1 mon.cephcluster2@0(leader).mds e2 Missing health data for MDS 4254
- Jan 10 01:03:10 cephcluster2.local ceph-mon[7098]: 2017-01-10 01:03:10.657998 7f7228fec700 -1 mon.cephcluster2@0(leader).mds e2 Missing health data for MDS 4254
- Jan 10 01:04:10 cephcluster2.local ceph-mon[7098]: 2017-01-10 01:04:10.658387 7f7228fec700 -1 mon.cephcluster2@0(leader).mds e2 Missing health data for MDS 4254
- Jan 10 01:05:10 cephcluster2.local ceph-mon[7098]: 2017-01-10 01:05:10.658747 7f7228fec700 -1 mon.cephcluster2@0(leader).mds e2 Missing health data for MDS 4254
- ● ceph-mds.target - ceph target allowing to start/stop all ceph-mds@.service instances at once
- Loaded: loaded (/usr/lib/systemd/system/ceph-mds.target; enabled; vendor preset: enabled)
- Active: active since Tue 2017-01-10 00:40:15 AEDT; 25min ago
- ● ceph-disk@dev-nvme3n1p1.service - Ceph disk activation: /dev/nvme3n1p1
- Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
- Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:17 AEDT; 23min ago
- Main PID: 1846 (code=exited, status=124)
- Jan 10 00:40:17 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme3n1p1...
- Jan 10 00:42:17 cephcluster2.local systemd[1]: ceph-disk@dev-nvme3n1p1.service: main process exited, code=exited, status=124/n/a
- Jan 10 00:42:17 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme3n1p1.
- Jan 10 00:42:17 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme3n1p1.service entered failed state.
- Jan 10 00:42:17 cephcluster2.local systemd[1]: ceph-disk@dev-nvme3n1p1.service failed.
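Most of the failed ceph-disk units in this paste exited with status=124. That is the exit code GNU `timeout` returns when it kills a command that overran its deadline; the ceph-disk activation units of this Ceph generation wrap the trigger in a `timeout`/`flock` pair (an assumption based on the stock Jewel-era unit file, not visible in this paste), so these activations were most likely killed after the deadline while serialised behind the lock. Where the 124 comes from can be shown directly:

```shell
# status=124 is the exit code GNU `timeout` uses when it kills an
# overrunning command; here `sleep 5` is killed after one second.
timeout 1 sleep 5
echo "exit: $?"
```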
- ● ceph-disk@dev-nvme5n1p1.service - Ceph disk activation: /dev/nvme5n1p1
- Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
- Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
- Main PID: 1824 (code=exited, status=124)
- Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme5n1p1...
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme5n1p1.service: main process exited, code=exited, status=124/n/a
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme5n1p1.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme5n1p1.service entered failed state.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme5n1p1.service failed.
- ● ceph-osd@17.service - Ceph object storage daemon
- Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
- Active: failed (Result: start-limit) since Tue 2017-01-10 00:40:35 AEDT; 24min ago
- Main PID: 5016 (code=exited, status=1/FAILURE)
- Jan 10 00:40:35 cephcluster2.local systemd[1]: Unit ceph-osd@17.service entered failed state.
- Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@17.service failed.
- Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@17.service holdoff time over, scheduling restart.
- Jan 10 00:40:35 cephcluster2.local systemd[1]: start request repeated too quickly for ceph-osd@17.service
- Jan 10 00:40:35 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
- Jan 10 00:40:35 cephcluster2.local systemd[1]: Unit ceph-osd@17.service entered failed state.
- Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@17.service failed.
- Jan 10 00:42:04 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
- Jan 10 00:42:17 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
- Jan 10 00:48:43 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
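The repeated `Unknown lvalue 'TasksMax'` warnings are a version mismatch, not a Ceph failure: the packaged ceph-osd@.service sets `TasksMax=`, but the running systemd (CentOS/RHEL 7 shipped systemd 219, which predates that directive) does not recognise it and simply ignores the line. The warning is cosmetic; upgrading systemd is the clean fix, though it can also be silenced by commenting the directive out (a package update will restore it). A sketch on a scratch copy of a unit file, so it is safe to run anywhere:

```shell
# Scratch copy for demonstration; on the real host the file is
# /usr/lib/systemd/system/ceph-osd@.service, followed by `systemctl daemon-reload`.
printf '[Service]\nTasksMax=infinity\n' > /tmp/ceph-osd-demo.service
sed -i 's/^TasksMax=/#TasksMax=/' /tmp/ceph-osd-demo.service
grep 'TasksMax' /tmp/ceph-osd-demo.service
```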
- ● ceph-disk@dev-nvme5n1p2.service - Ceph disk activation: /dev/nvme5n1p2
- Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
- Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
- Main PID: 1827 (code=exited, status=124)
- Jan 10 00:42:16 cephcluster2.local sh[1827]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/nvme5n1p2', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x2883d70>, log_stdo...
- Jan 10 00:42:16 cephcluster2.local sh[1827]: command: Running command: /usr/sbin/init --version
- Jan 10 00:42:16 cephcluster2.local sh[1827]: command: Running command: /usr/sbin/blkid -o udev -p /dev/nvme5n1p2
- Jan 10 00:42:16 cephcluster2.local sh[1827]: command: Running command: /usr/sbin/blkid -o udev -p /dev/nvme5n1p2
- Jan 10 00:42:16 cephcluster2.local sh[1827]: main_trigger: trigger /dev/nvme5n1p2 parttype 45b0969e-9b03-4f30-b4c6-b4b80ceff106 uuid 058b145b-2bb1-424c-89fb-34603fdfc9da
- Jan 10 00:42:16 cephcluster2.local sh[1827]: command: Running command: /usr/sbin/ceph-disk --verbose activate-journal /dev/nvme5n1p2
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme5n1p2.service: main process exited, code=exited, status=124/n/a
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme5n1p2.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme5n1p2.service entered failed state.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme5n1p2.service failed.
- ● ceph-disk@dev-nvme4n1p1.service - Ceph disk activation: /dev/nvme4n1p1
- Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
- Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
- Main PID: 1660 (code=exited, status=124)
- Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme4n1p1...
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme4n1p1.service: main process exited, code=exited, status=124/n/a
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme4n1p1.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme4n1p1.service entered failed state.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme4n1p1.service failed.
- ● ceph-disk@dev-nvme3n1p2.service - Ceph disk activation: /dev/nvme3n1p2
- Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
- Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:17 AEDT; 23min ago
- Main PID: 1845 (code=exited, status=124)
- Jan 10 00:40:17 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme3n1p2...
- Jan 10 00:42:17 cephcluster2.local systemd[1]: ceph-disk@dev-nvme3n1p2.service: main process exited, code=exited, status=124/n/a
- Jan 10 00:42:17 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme3n1p2.
- Jan 10 00:42:17 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme3n1p2.service entered failed state.
- Jan 10 00:42:17 cephcluster2.local systemd[1]: ceph-disk@dev-nvme3n1p2.service failed.
- ● ceph-disk@dev-nvme1n1p2.service - Ceph disk activation: /dev/nvme1n1p2
- Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
- Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
- Main PID: 1773 (code=exited, status=124)
- Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme1n1p2...
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme1n1p2.service: main process exited, code=exited, status=124/n/a
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme1n1p2.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme1n1p2.service entered failed state.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme1n1p2.service failed.
- ● ceph-disk@dev-nvme7n1p2.service - Ceph disk activation: /dev/nvme7n1p2
- Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
- Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
- Main PID: 1578 (code=exited, status=124)
- Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme7n1p2...
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme7n1p2.service: main process exited, code=exited, status=124/n/a
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme7n1p2.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme7n1p2.service entered failed state.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme7n1p2.service failed.
- ● ceph-osd@18.service - Ceph object storage daemon
- Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
- Active: failed (Result: start-limit) since Tue 2017-01-10 00:46:48 AEDT; 18min ago
- Main PID: 5021 (code=exited, status=1/FAILURE)
- Jan 10 00:46:48 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
- Jan 10 00:46:48 cephcluster2.local systemd[1]: Unit ceph-osd@18.service entered failed state.
- Jan 10 00:46:48 cephcluster2.local systemd[1]: ceph-osd@18.service failed.
- Jan 10 00:46:48 cephcluster2.local systemd[1]: ceph-osd@18.service holdoff time over, scheduling restart.
- Jan 10 00:46:48 cephcluster2.local systemd[1]: start request repeated too quickly for ceph-osd@18.service
- Jan 10 00:46:48 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
- Jan 10 00:46:48 cephcluster2.local systemd[1]: Unit ceph-osd@18.service entered failed state.
- Jan 10 00:46:48 cephcluster2.local systemd[1]: ceph-osd@18.service failed.
- Jan 10 00:48:43 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
- ● ceph-disk@dev-nvme6n1p2.service - Ceph disk activation: /dev/nvme6n1p2
- Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
- Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
- Main PID: 1705 (code=exited, status=124)
- Jan 10 00:42:04 cephcluster2.local sh[1705]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/nvme6n1p2', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x11f9d70>, log_stdo...
- Jan 10 00:42:04 cephcluster2.local sh[1705]: command: Running command: /usr/sbin/init --version
- Jan 10 00:42:04 cephcluster2.local sh[1705]: command: Running command: /usr/sbin/blkid -o udev -p /dev/nvme6n1p2
- Jan 10 00:42:04 cephcluster2.local sh[1705]: command: Running command: /usr/sbin/blkid -o udev -p /dev/nvme6n1p2
- Jan 10 00:42:04 cephcluster2.local sh[1705]: main_trigger: trigger /dev/nvme6n1p2 parttype 45b0969e-9b03-4f30-b4c6-b4b80ceff106 uuid 0eb4e43f-b515-46aa-9675-13ef16752d43
- Jan 10 00:42:04 cephcluster2.local sh[1705]: command: Running command: /usr/sbin/ceph-disk --verbose activate-journal /dev/nvme6n1p2
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme6n1p2.service: main process exited, code=exited, status=124/n/a
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme6n1p2.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme6n1p2.service entered failed state.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme6n1p2.service failed.
- ● ceph-osd@11.service - Ceph object storage daemon
- Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
- Active: failed (Result: start-limit) since Tue 2017-01-10 00:48:19 AEDT; 16min ago
- Jan 10 00:48:19 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
- Jan 10 00:48:19 cephcluster2.local systemd[1]: Unit ceph-osd@11.service entered failed state.
- Jan 10 00:48:19 cephcluster2.local systemd[1]: ceph-osd@11.service failed.
- Jan 10 00:48:19 cephcluster2.local systemd[1]: ceph-osd@11.service holdoff time over, scheduling restart.
- Jan 10 00:48:19 cephcluster2.local systemd[1]: start request repeated too quickly for ceph-osd@11.service
- Jan 10 00:48:19 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
- Jan 10 00:48:19 cephcluster2.local systemd[1]: Unit ceph-osd@11.service entered failed state.
- Jan 10 00:48:19 cephcluster2.local systemd[1]: ceph-osd@11.service failed.
- Jan 10 00:48:43 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
- ● ceph-disk@dev-nvme0n1p2.service - Ceph disk activation: /dev/nvme0n1p2
- Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
- Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
- Main PID: 1756 (code=exited, status=124)
- Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme0n1p2...
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme0n1p2.service: main process exited, code=exited, status=124/n/a
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme0n1p2.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme0n1p2.service entered failed state.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme0n1p2.service failed.
- ● ceph-osd@10.service - Ceph object storage daemon
- Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
- Active: failed (Result: start-limit) since Tue 2017-01-10 00:52:09 AEDT; 13min ago
- Process: 7617 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
- Process: 7575 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
- Main PID: 7617 (code=exited, status=1/FAILURE)
- Jan 10 00:52:08 cephcluster2.local systemd[1]: Unit ceph-osd@10.service entered failed state.
- Jan 10 00:52:08 cephcluster2.local systemd[1]: ceph-osd@10.service failed.
- Jan 10 00:52:09 cephcluster2.local systemd[1]: ceph-osd@10.service holdoff time over, scheduling restart.
- Jan 10 00:52:09 cephcluster2.local systemd[1]: start request repeated too quickly for ceph-osd@10.service
- Jan 10 00:52:09 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
- Jan 10 00:52:09 cephcluster2.local systemd[1]: Unit ceph-osd@10.service entered failed state.
- Jan 10 00:52:09 cephcluster2.local systemd[1]: ceph-osd@10.service failed.
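`start request repeated too quickly` is systemd's start-rate limit: ceph-osd@10 kept crashing (status=1/FAILURE from the ExecStart shown above) and was respawned until it tripped the start-limit, after which systemd refuses further starts, even manual ones. Once the underlying OSD error is fixed, the failed state has to be cleared before the unit will start again. A minimal sketch, assuming root on the affected host (cannot be exercised outside a systemd host; OSD ids taken from this paste):

```shell
# Clear the start-limit/failed state for one OSD and retry it:
systemctl reset-failed ceph-osd@10.service
systemctl start ceph-osd@10.service

# Or clear every OSD instance at once and restart them via the target:
systemctl reset-failed 'ceph-osd@*.service'
systemctl start ceph-osd.target
```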
- ● ceph-osd@12.service - Ceph object storage daemon
- Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
- Active: failed (Result: start-limit) since Tue 2017-01-10 00:40:35 AEDT; 24min ago
- Main PID: 5058 (code=exited, status=1/FAILURE)
- Jan 10 00:40:35 cephcluster2.local systemd[1]: Unit ceph-osd@12.service entered failed state.
- Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@12.service failed.
- Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@12.service holdoff time over, scheduling restart.
- Jan 10 00:40:35 cephcluster2.local systemd[1]: start request repeated too quickly for ceph-osd@12.service
- Jan 10 00:40:35 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
- Jan 10 00:40:35 cephcluster2.local systemd[1]: Unit ceph-osd@12.service entered failed state.
- Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@12.service failed.
- Jan 10 00:42:04 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
- Jan 10 00:42:17 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
- Jan 10 00:48:43 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
- ● ceph.target - ceph target allowing to start/stop all ceph*@.service instances at once
- Loaded: loaded (/usr/lib/systemd/system/ceph.target; enabled; vendor preset: enabled)
- Active: active since Tue 2017-01-10 00:42:04 AEDT; 23min ago
- Jan 10 00:42:04 cephcluster2.local systemd[1]: Reached target ceph target allowing to start/stop all ceph*@.service instances at once.
- Jan 10 00:42:04 cephcluster2.local systemd[1]: Starting ceph target allowing to start/stop all ceph*@.service instances at once.
- ● ceph-disk@dev-nvme0n1p1.service - Ceph disk activation: /dev/nvme0n1p1
- Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
- Active: failed (Result: exit-code) since Tue 2017-01-10 00:50:13 AEDT; 15min ago
- Main PID: 6838 (code=exited, status=1/FAILURE)
- Jan 10 00:50:13 cephcluster2.local sh[6838]: main(sys.argv[1:])
- Jan 10 00:50:13 cephcluster2.local sh[6838]: File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4960, in main
- Jan 10 00:50:13 cephcluster2.local sh[6838]: args.func(args)
- Jan 10 00:50:13 cephcluster2.local sh[6838]: File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4397, in main_trigger
- Jan 10 00:50:13 cephcluster2.local sh[6838]: raise Error('return code ' + str(ret))
- Jan 10 00:50:13 cephcluster2.local sh[6838]: ceph_disk.main.Error: Error: return code 1
- Jan 10 00:50:13 cephcluster2.local systemd[1]: ceph-disk@dev-nvme0n1p1.service: main process exited, code=exited, status=1/FAILURE
- Jan 10 00:50:13 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme0n1p1.
- Jan 10 00:50:13 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme0n1p1.service entered failed state.
- Jan 10 00:50:13 cephcluster2.local systemd[1]: ceph-disk@dev-nvme0n1p1.service failed.
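Unlike the status=124 timeouts, ceph-disk@dev-nvme0n1p1 (and ceph-disk@dev-nvme2n1p1 below) died with status=1: the Python trigger in ceph_disk/main.py ran the activation sub-command, got a non-zero return, and raised `Error('return code ' + str(ret))`, which is the `ceph_disk.main.Error: Error: return code 1` in the journal. The journal truncates the part that matters, so a reasonable next step is to pull the full log and rerun the trigger by hand. The second command below is an assumed invocation modelled on the stock ceph-disk@.service ExecStart, not taken from this paste; check the unit file before running it:

```shell
# Full, untruncated journal for the failed activation unit:
journalctl -u ceph-disk@dev-nvme0n1p1.service --no-pager

# Rerun the trigger manually to surface the underlying error
# (assumed invocation; verify against the ExecStart of ceph-disk@.service):
/usr/sbin/ceph-disk --verbose trigger --sync /dev/nvme0n1p1
```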
- ● ceph-mon.target - ceph target allowing to start/stop all ceph-mon@.service instances at once
- Loaded: loaded (/usr/lib/systemd/system/ceph-mon.target; enabled; vendor preset: enabled)
- Active: active since Tue 2017-01-10 00:40:15 AEDT; 25min ago
- ● ceph-disk@dev-nvme2n1p1.service - Ceph disk activation: /dev/nvme2n1p1
- Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
- Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:04 AEDT; 23min ago
- Main PID: 1553 (code=exited, status=1/FAILURE)
- Jan 10 00:42:04 cephcluster2.local sh[1553]: main(sys.argv[1:])
- Jan 10 00:42:04 cephcluster2.local sh[1553]: File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4960, in main
- Jan 10 00:42:04 cephcluster2.local sh[1553]: args.func(args)
- Jan 10 00:42:04 cephcluster2.local sh[1553]: File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4397, in main_trigger
- Jan 10 00:42:04 cephcluster2.local sh[1553]: raise Error('return code ' + str(ret))
- Jan 10 00:42:04 cephcluster2.local sh[1553]: ceph_disk.main.Error: Error: return code 1
- Jan 10 00:42:04 cephcluster2.local systemd[1]: ceph-disk@dev-nvme2n1p1.service: main process exited, code=exited, status=1/FAILURE
- Jan 10 00:42:04 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme2n1p1.
- Jan 10 00:42:04 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme2n1p1.service entered failed state.
- Jan 10 00:42:04 cephcluster2.local systemd[1]: ceph-disk@dev-nvme2n1p1.service failed.
- ● ceph-disk@dev-nvme8n1p2.service - Ceph disk activation: /dev/nvme8n1p2
- Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
- Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
- Main PID: 1818 (code=exited, status=124)
- Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme8n1p2...
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme8n1p2.service: main process exited, code=exited, status=124/n/a
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme8n1p2.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme8n1p2.service entered failed state.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme8n1p2.service failed.
- ● ceph-osd@14.service - Ceph object storage daemon
- Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
- Active: failed (Result: start-limit) since Tue 2017-01-10 00:40:35 AEDT; 24min ago
- Main PID: 5030 (code=exited, status=1/FAILURE)
- Jan 10 00:40:35 cephcluster2.local systemd[1]: Unit ceph-osd@14.service entered failed state.
- Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@14.service failed.
- Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@14.service holdoff time over, scheduling restart.
- Jan 10 00:40:35 cephcluster2.local systemd[1]: start request repeated too quickly for ceph-osd@14.service
- Jan 10 00:40:35 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
- Jan 10 00:40:35 cephcluster2.local systemd[1]: Unit ceph-osd@14.service entered failed state.
- Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@14.service failed.
- Jan 10 00:42:04 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
- Jan 10 00:42:17 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
- Jan 10 00:48:43 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
- ● ceph-disk@dev-nvme4n1p2.service - Ceph disk activation: /dev/nvme4n1p2
- Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
- Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
- Main PID: 1661 (code=exited, status=124)
- Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme4n1p2...
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme4n1p2.service: main process exited, code=exited, status=124/n/a
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme4n1p2.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme4n1p2.service entered failed state.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme4n1p2.service failed.
- ● ceph-osd.target - ceph target allowing to start/stop all ceph-osd@.service instances at once
- Loaded: loaded (/usr/lib/systemd/system/ceph-osd.target; enabled; vendor preset: enabled)
- Active: active since Tue 2017-01-10 00:42:04 AEDT; 23min ago
- Jan 10 00:42:04 cephcluster2.local systemd[1]: Reached target ceph target allowing to start/stop all ceph-osd@.service instances at once.
- Jan 10 00:42:04 cephcluster2.local systemd[1]: Starting ceph target allowing to start/stop all ceph-osd@.service instances at once.
- ● ceph-disk@dev-nvme1n1p1.service - Ceph disk activation: /dev/nvme1n1p1
- Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
- Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
- Main PID: 1770 (code=exited, status=124)
- Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme1n1p1...
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme1n1p1.service: main process exited, code=exited, status=124/n/a
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme1n1p1.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme1n1p1.service entered failed state.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme1n1p1.service failed.
- ● ceph-osd@15.service - Ceph object storage daemon
- Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
- Active: failed (Result: start-limit) since Tue 2017-01-10 00:48:19 AEDT; 16min ago
- Main PID: 5008 (code=exited, status=1/FAILURE)
- Jan 10 00:48:19 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
- Jan 10 00:48:19 cephcluster2.local systemd[1]: Unit ceph-osd@15.service entered failed state.
- Jan 10 00:48:19 cephcluster2.local systemd[1]: ceph-osd@15.service failed.
- Jan 10 00:48:19 cephcluster2.local systemd[1]: ceph-osd@15.service holdoff time over, scheduling restart.
- Jan 10 00:48:19 cephcluster2.local systemd[1]: start request repeated too quickly for ceph-osd@15.service
- Jan 10 00:48:19 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
- Jan 10 00:48:19 cephcluster2.local systemd[1]: Unit ceph-osd@15.service entered failed state.
- Jan 10 00:48:19 cephcluster2.local systemd[1]: ceph-osd@15.service failed.
- Jan 10 00:48:43 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
- ● ceph-disk@dev-nvme8n1p1.service - Ceph disk activation: /dev/nvme8n1p1
- Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
- Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
- Main PID: 1819 (code=exited, status=124)
- Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme8n1p1...
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme8n1p1.service: main process exited, code=exited, status=124/n/a
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme8n1p1.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme8n1p1.service entered failed state.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme8n1p1.service failed.
- ● ceph-disk@dev-nvme9n1p1.service - Ceph disk activation: /dev/nvme9n1p1
- Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
- Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:18 AEDT; 22min ago
- Main PID: 1894 (code=exited, status=124)
- Jan 10 00:40:18 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme9n1p1...
- Jan 10 00:42:18 cephcluster2.local systemd[1]: ceph-disk@dev-nvme9n1p1.service: main process exited, code=exited, status=124/n/a
- Jan 10 00:42:18 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme9n1p1.
- Jan 10 00:42:18 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme9n1p1.service entered failed state.
- Jan 10 00:42:18 cephcluster2.local systemd[1]: ceph-disk@dev-nvme9n1p1.service failed.
- ● ceph-disk@dev-nvme9n1p2.service - Ceph disk activation: /dev/nvme9n1p2
- Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
- Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:18 AEDT; 22min ago
- Main PID: 1897 (code=exited, status=124)
- Jan 10 00:42:17 cephcluster2.local sh[1897]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/nvme9n1p2', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x202cd70>, log_stdo...
- Jan 10 00:42:17 cephcluster2.local sh[1897]: command: Running command: /usr/sbin/init --version
- Jan 10 00:42:17 cephcluster2.local sh[1897]: command: Running command: /usr/sbin/blkid -o udev -p /dev/nvme9n1p2
- Jan 10 00:42:17 cephcluster2.local sh[1897]: command: Running command: /usr/sbin/blkid -o udev -p /dev/nvme9n1p2
- Jan 10 00:42:17 cephcluster2.local sh[1897]: main_trigger: trigger /dev/nvme9n1p2 parttype 45b0969e-9b03-4f30-b4c6-b4b80ceff106 uuid 7bf65df4-b2cd-4515-8e9c-f48f98c005e8
- Jan 10 00:42:17 cephcluster2.local sh[1897]: command: Running command: /usr/sbin/ceph-disk --verbose activate-journal /dev/nvme9n1p2
- Jan 10 00:42:18 cephcluster2.local systemd[1]: ceph-disk@dev-nvme9n1p2.service: main process exited, code=exited, status=124/n/a
- Jan 10 00:42:18 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme9n1p2.
- Jan 10 00:42:18 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme9n1p2.service entered failed state.
- Jan 10 00:42:18 cephcluster2.local systemd[1]: ceph-disk@dev-nvme9n1p2.service failed.
- ● ceph-disk@dev-nvme2n1p2.service - Ceph disk activation: /dev/nvme2n1p2
- Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
- Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
- Main PID: 1554 (code=exited, status=124)
- Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme2n1p2...
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme2n1p2.service: main process exited, code=exited, status=124/n/a
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme2n1p2.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme2n1p2.service entered failed state.
- Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme2n1p2.service failed.
● ceph-osd@13.service - Ceph object storage daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Tue 2017-01-10 00:40:35 AEDT; 24min ago
 Main PID: 5024 (code=exited, status=1/FAILURE)

Jan 10 00:40:35 cephcluster2.local systemd[1]: Unit ceph-osd@13.service entered failed state.
Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@13.service failed.
Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@13.service holdoff time over, scheduling restart.
Jan 10 00:40:35 cephcluster2.local systemd[1]: start request repeated too quickly for ceph-osd@13.service
Jan 10 00:40:35 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
Jan 10 00:40:35 cephcluster2.local systemd[1]: Unit ceph-osd@13.service entered failed state.
Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@13.service failed.
Jan 10 00:42:04 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 10 00:42:17 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 10 00:48:43 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
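The repeated `Unknown lvalue 'TasksMax'` messages are a version mismatch, not a failure: the `TasksMax=` directive was only added in systemd v227, so the older systemd shipped with EL7 (v219) logs the warning and ignores that line of the unit, leaving the OSDs otherwise unaffected. If the noise matters, one possible (hedged) remediation is to copy the unit into `/etc/systemd/system/` (which overrides `/usr/lib/systemd/system/`), comment the directive out there, and run `systemctl daemon-reload`. The edit itself, demonstrated on a sample line:

```shell
# Comment out a TasksMax= directive; on the real host the sed target
# would be the override copy of ceph-osd@.service, not a sample line.
echo 'TasksMax=infinity' | sed 's/^TasksMax=/#&/'   # prints #TasksMax=infinity
```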
● ceph-disk@dev-nvme6n1p1.service - Ceph disk activation: /dev/nvme6n1p1
   Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
   Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
 Main PID: 1699 (code=exited, status=124)

Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme6n1p1...
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme6n1p1.service: main process exited, code=exited, status=124/n/a
Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme6n1p1.
Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme6n1p1.service entered failed state.
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme6n1p1.service failed.
● ceph-disk@dev-nvme7n1p1.service - Ceph disk activation: /dev/nvme7n1p1
   Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
   Active: failed (Result: exit-code) since Tue 2017-01-10 00:42:16 AEDT; 23min ago
 Main PID: 1576 (code=exited, status=124)

Jan 10 00:40:16 cephcluster2.local systemd[1]: Starting Ceph disk activation: /dev/nvme7n1p1...
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme7n1p1.service: main process exited, code=exited, status=124/n/a
Jan 10 00:42:16 cephcluster2.local systemd[1]: Failed to start Ceph disk activation: /dev/nvme7n1p1.
Jan 10 00:42:16 cephcluster2.local systemd[1]: Unit ceph-disk@dev-nvme7n1p1.service entered failed state.
Jan 10 00:42:16 cephcluster2.local systemd[1]: ceph-disk@dev-nvme7n1p1.service failed.
● ceph-osd@16.service - Ceph object storage daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Tue 2017-01-10 00:40:35 AEDT; 24min ago
 Main PID: 5013 (code=exited, status=1/FAILURE)

Jan 10 00:40:35 cephcluster2.local systemd[1]: Unit ceph-osd@16.service entered failed state.
Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@16.service failed.
Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@16.service holdoff time over, scheduling restart.
Jan 10 00:40:35 cephcluster2.local systemd[1]: start request repeated too quickly for ceph-osd@16.service
Jan 10 00:40:35 cephcluster2.local systemd[1]: Failed to start Ceph object storage daemon.
Jan 10 00:40:35 cephcluster2.local systemd[1]: Unit ceph-osd@16.service entered failed state.
Jan 10 00:40:35 cephcluster2.local systemd[1]: ceph-osd@16.service failed.
Jan 10 00:42:04 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 10 00:42:17 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 10 00:48:43 cephcluster2.local systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
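For ceph-osd@13 and ceph-osd@16, `Result: start-limit` is secondary: the primary failure is each OSD process exiting with the generic status=1/FAILURE shown above, fast enough and often enough to trip systemd's restart rate limiter ("start request repeated too quickly"). A rough recovery sketch, with the live-host commands left as comments since they need this cluster:

```shell
# Recovery on the affected host (comments only; standard systemctl
# and journalctl invocations):
#   journalctl -u ceph-osd@16 -b           # find why the OSD exited 1
#   systemctl reset-failed ceph-osd@16.service   # clear the start-limit latch
#   systemctl start ceph-osd@16.service
# status=1 itself is just the generic failure exit, carrying no detail:
sh -c 'exit 1'
echo "exit=$?"   # prints exit=1
```

Until `reset-failed` clears the latch, further `start` requests for these units will be refused even after the underlying problem is fixed.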
● ceph-osd@9.service - Ceph object storage daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2017-01-10 00:51:10 AEDT; 14min ago
  Process: 7014 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
 Main PID: 7184 (ceph-osd)
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@9.service
           └─7184 /usr/bin/ceph-osd -f --cluster ceph --id 9 --setuser ceph --setgroup ceph

Jan 10 01:05:10 cephcluster2.local ceph-osd[7184]: 2017-01-10 01:05:10.143626 7fc8eca61700 -1 osd.9 144 heartbeat_check: no reply from 0x7fc9099e7890 osd.11 ever on either front or back, first ping sent 2017-01-10...1:04:50.143624)
Jan 10 01:05:11 cephcluster2.local ceph-osd[7184]: 2017-01-10 01:05:11.143758 7fc8eca61700 -1 osd.9 144 heartbeat_check: no reply from 0x7fc9099e7890 osd.11 ever on either front or back, first ping sent 2017-01-10...1:04:51.143756)
Jan 10 01:05:12 cephcluster2.local ceph-osd[7184]: 2017-01-10 01:05:12.143891 7fc8eca61700 -1 osd.9 144 heartbeat_check: no reply from 0x7fc9099e7890 osd.11 ever on either front or back, first ping sent 2017-01-10...1:04:52.143889)
Jan 10 01:05:13 cephcluster2.local ceph-osd[7184]: 2017-01-10 01:05:13.143970 7fc8eca61700 -1 osd.9 144 heartbeat_check: no reply from 0x7fc9099e7890 osd.11 ever on either front or back, first ping sent 2017-01-10...1:04:53.143968)
Jan 10 01:05:13 cephcluster2.local ceph-osd[7184]: 2017-01-10 01:05:13.383400 7fc8d12e2700 -1 osd.9 144 heartbeat_check: no reply from 0x7fc9099e7890 osd.11 ever on either front or back, first ping sent 2017-01-10...1:04:53.383398)
Jan 10 01:05:14 cephcluster2.local ceph-osd[7184]: 2017-01-10 01:05:14.144082 7fc8eca61700 -1 osd.9 144 heartbeat_check: no reply from 0x7fc9099e7890 osd.11 ever on either front or back, first ping sent 2017-01-10...1:04:54.144077)
Jan 10 01:05:15 cephcluster2.local ceph-osd[7184]: 2017-01-10 01:05:15.083916 7fc8d12e2700 -1 osd.9 144 heartbeat_check: no reply from 0x7fc9099e7890 osd.11 ever on either front or back, first ping sent 2017-01-10...1:04:55.083915)
Jan 10 01:05:15 cephcluster2.local ceph-osd[7184]: 2017-01-10 01:05:15.144227 7fc8eca61700 -1 osd.9 144 heartbeat_check: no reply from 0x7fc9099e7890 osd.11 ever on either front or back, first ping sent 2017-01-10...1:04:55.144225)
Jan 10 01:05:16 cephcluster2.local ceph-osd[7184]: 2017-01-10 01:05:16.144377 7fc8eca61700 -1 osd.9 144 heartbeat_check: no reply from 0x7fc9099e7890 osd.11 ever on either front or back, first ping sent 2017-01-10...1:04:56.144375)
Jan 10 01:05:17 cephcluster2.local ceph-osd[7184]: 2017-01-10 01:05:17.144510 7fc8eca61700 -1 osd.9 144 heartbeat_check: no reply from 0x7fc9099e7890 osd.11 ever on either front or back, first ping sent 2017-01-10...1:04:57.144508)

Hint: Some lines were ellipsized, use -l to show in full.
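The `heartbeat_check` lines mean osd.9 never got a heartbeat reply from osd.11 on either the front (public) or back (cluster) network, which usually means the peer daemon is down or its traffic is blocked. On the live cluster one would locate osd.11's host with `ceph osd tree`, then check reachability on both networks and that the OSD port range (6800-7300/tcp by default) is open through any firewall. Pulling the unresponsive peer's id out of a captured journal line:

```shell
# Extract the peer OSD id named in a heartbeat_check log line.
line='heartbeat_check: no reply from 0x7fc9099e7890 osd.11 ever on either front or back'
echo "$line" | grep -o 'osd\.[0-9]*'   # prints osd.11
```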