TASK [openstack.osa.glusterfs : Create gluster peers] **************************
fatal: [infra01_repo_container-a051b328]: FAILED! => {"changed": false, "msg": "peer probe: failed: Probe returned with Transport endpoint is not connected\n", "rc": 1}
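[Editor's note] "Transport endpoint is not connected" from `peer probe` usually means glusterd on the probed host is not reachable on its management port (24007/tcp). The checks below are a hypothetical starting point, not part of the playbook; the container hostname is copied from the log and will differ per deployment. Only the last line is directly runnable anywhere; it probes localhost as a stand-in.

```shell
# Hypothetical diagnostics, run from inside one of the repo containers:
#
#   systemctl status glusterd                        # is the daemon up on the target?
#   gluster peer status                              # current view of the peer list
#   nc -zv infra01-repo-container-a051b328 24007     # is glusterd's port reachable?
#
# A bare TCP reachability probe can also be done with bash's /dev/tcp
# (127.0.0.1 used here as a placeholder target):
timeout 2 bash -c 'exec 3<>/dev/tcp/127.0.0.1/24007' 2>/dev/null \
  && echo "port open" || echo "port closed or filtered"
```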
TASK [openstack.osa.glusterfs : Ensure glusterfs backing directory exists] *****
changed: [infra03_repo_container-bc88ba18]
changed: [infra02_repo_container-733e8f33]
...
TASK [openstack.osa.glusterfs : Reset brick for a replaced node] ***************
skipping: [infra02_repo_container-733e8f33] => (item=gluster volume reset-brick gfs-repo infra02-repo-container-733e8f33:/gluster/bricks/1 start)
skipping: [infra02_repo_container-733e8f33] => (item=gluster volume reset-brick gfs-repo infra02-repo-container-733e8f33:/gluster/bricks/1 infra02-repo-container-733e8f33:/gluster/bricks/1 commit force)
skipping: [infra03_repo_container-bc88ba18] => (item=gluster volume reset-brick gfs-repo infra03-repo-container-bc88ba18:/gluster/bricks/1 start)
skipping: [infra03_repo_container-bc88ba18] => (item=gluster volume reset-brick gfs-repo infra03-repo-container-bc88ba18:/gluster/bricks/1 infra03-repo-container-bc88ba18:/gluster/bricks/1 commit force)
TASK [openstack.osa.glusterfs : Create gluster volume] *************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NoneType: None
fatal: [infra02_repo_container-733e8f33]: FAILED! => {"changed": false, "msg": "error running gluster (/usr/sbin/gluster --mode=script peer probe infra01-repo-container-a051b328) command (rc=1): peer probe: failed: Probe returned with Transport endpoint is not connected\n"}
...
TASK [systemd_mount : Set the state of the mount] ******************************
fatal: [infra03_repo_container-bc88ba18]: FAILED! => {"changed": false, "cmd": "systemctl reload-or-restart $(systemd-escape -p --suffix=\"mount\" \"/var/www/repo\")", "delta": "0:00:00.107599", "end": "2023-01-20 00:23:59.417681", "msg": "non-zero return code", "rc": 1, "start": "2023-01-20 00:23:59.310082", "stderr": "Job failed. See \"journalctl -xe\" for details.", "stderr_lines": ["Job failed. See \"journalctl -xe\" for details."], "stdout": "", "stdout_lines": []}
TASK [systemd_mount : Set the state of the mount (fallback)] *******************
fatal: [infra03_repo_container-bc88ba18]: FAILED! => {"changed": false, "msg": "Unable to start service var-www-repo.mount: Job failed. See \"journalctl -xe\" for details.\n"}
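[Editor's note] The failing task builds the mount unit name from the path with `systemd-escape -p --suffix=mount /var/www/repo`, which yields `var-www-repo.mount` (matching the fallback error above); the failure itself is in the unit, so `journalctl -u var-www-repo.mount` on the container is the next step. The sketch below is a simplified reimplementation of that path-to-unit-name mapping only (real systemd-escape also C-escapes special characters), shown so the unit name in the error can be traced back to the path:

```shell
# Simplified version of: systemd-escape -p --suffix=mount /var/www/repo
# Drop the leading slash, turn remaining slashes into dashes, append .mount.
path=/var/www/repo
unit="$(printf '%s' "${path#/}" | tr / -).mount"
echo "$unit"   # var-www-repo.mount
```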
...
RUNNING HANDLER [galera_server : Start new cluster] ****************************
ok: [infra03_galera_container-df158e10 -> infra01_galera_container-e81bec2e(172.17.246.64)]
RUNNING HANDLER [galera_server : Restart mysql (All)] **************************
ok: [infra03_galera_container-df158e10 -> infra01_galera_container-e81bec2e(172.17.246.64)] => (item=infra01_galera_container-e81bec2e)
ok: [infra03_galera_container-df158e10 -> infra02_galera_container-ff8012b2(172.17.246.17)] => (item=infra02_galera_container-ff8012b2)
FAILED - RETRYING: [infra03_galera_container-df158e10]: Restart mysql (All) (6 retries left).
FAILED - RETRYING: [infra03_galera_container-df158e10]: Restart mysql (All) (5 retries left).
FAILED - RETRYING: [infra03_galera_container-df158e10]: Restart mysql (All) (4 retries left).
FAILED - RETRYING: [infra03_galera_container-df158e10]: Restart mysql (All) (3 retries left).
FAILED - RETRYING: [infra03_galera_container-df158e10]: Restart mysql (All) (2 retries left).
FAILED - RETRYING: [infra03_galera_container-df158e10]: Restart mysql (All) (1 retries left).
failed: [infra03_galera_container-df158e10] (item=infra03_galera_container-df158e10) => {"ansible_loop_var": "item", "attempts": 6, "changed": false, "item": "infra03_galera_container-df158e10", "msg": "Unable to start service mariadb: Job for mariadb.service failed because the control process exited with error code.\nSee \"systemctl status mariadb.service\" and \"journalctl -xeu mariadb.service\" for details.\n"}
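[Editor's note] Here infra01 bootstrapped a new cluster but mariadb on infra03 could not rejoin within six retries. The log only points at journalctl; a common hypothetical next step (not something this playbook runs) is to compare the node's Galera saved state against the bootstrap node. That state lives in grastate.dat, a small `key: value` file; the snippet parses a sample copy to show where the seqno sits:

```shell
# Hypothetical follow-up on the failing galera container:
#   journalctl -xeu mariadb.service        # the error the unit actually hit
#   cat /var/lib/mysql/grastate.dat        # seqno/UUID vs the bootstrap node
#
# grastate.dat is "key: value" text; the seqno field can be pulled out with
# grep/awk (the heredoc below is a made-up sample, not data from this log):
grep -E '^seqno:' <<'EOF' | awk '{print $2}'
# GALERA saved state
version: 2.1
uuid:    00000000-0000-0000-0000-000000000000
seqno:   -1
safe_to_bootstrap: 0
EOF
```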
RUNNING HANDLER [systemd_service : Restart changed services] *******************
skipping: [infra03_galera_container-df158e10] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'item': {'service_name': 'mariadb', 'systemd_overrides_only': True, 'systemd_overrides': {'Service': {'LimitNOFILE': 164679, 'Restart': 'on-abort', 'RestartSec': '5s', 'Slice': 'galera.slice', 'CPUAccounting': True, 'BlockIOAccounting': True, 'MemoryAccounting': False, 'TasksAccounting': True, 'TimeoutStartSec': 1800, 'PrivateDevices': 'false', 'OOMScoreAdjust': '-1000'}}}, 'ansible_loop_var': 'item'})
skipping: [infra03_galera_container-df158e10] => (item={'dest': '/etc/systemd/system/mariadbcheck@.service', 'src': '/root/.ansible/tmp/ansible-tmp-1674175159.8252733-34108-53292626652311/source', 'md5sum': 'c9197120c7bac24ce7a4dac560fe9cb2', 'checksum': 'c8f32adcc5527ff6a21ce337224d7b776b1426a8', 'changed': True, 'uid': 0, 'gid': 0, 'owner': 'root', 'group': 'root', 'mode': '0644', 'state': 'file', 'size': 649, 'invocation': {'module_args': {'src': '/root/.ansible/tmp/ansible-tmp-1674175159.8252733-34108-53292626652311/source', 'dest': '/etc/systemd/system/mariadbcheck@.service', 'mode': '0644', 'owner': 'root', 'group': 'root', '_original_basename': 'systemd-service.j2', 'follow': True, 'backup': False, 'force': True, 'unsafe_writes': False, 'content': None, 'validate': None, 'directory_mode': None, 'remote_src': None, 'local_follow': None, 'checksum': None, 'seuser': None, 'serole': None, 'selevel': None, 'setype': None, 'attributes': None}}, 'failed': False, 'item': {'service_name': 'mariadbcheck@', 'service_type': 'oneshot', 'execstarts': '-/usr/local/bin/clustercheck', 'enabled': False, 'load': False, 'standard_output': 'socket', 'sockets': [{'socket_name': 'mariadbcheck', 'enabled': True, 'state': 'restarted', 'options': {'ListenStream': '172.17.246.42:9200', 'IPAddressDeny': 'any', 'IPAddressAllow': '172.17.246.64 172.17.246.17 172.17.246.42 172.17.246.1 172.17.246.2 172.17.246.3 127.0.0.1', 'Accept': 'yes'}}]}, 'ansible_loop_var': 'item'})
RUNNING HANDLER [pki : cert changed] *******************************************
RUNNING HANDLER [pki : cert installed] *****************************************
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
compute01 : ok=40 changed=4 unreachable=0 failed=0 skipped=75 rescued=0 ignored=0
...
infra01_repo_container-a051b328 : ok=125 changed=66 unreachable=0 failed=1 skipped=37 rescued=0 ignored=0
infra02_repo_container-733e8f33 : ok=127 changed=67 unreachable=0 failed=1 skipped=38 rescued=0 ignored=0
infra03_repo_container-bc88ba18 : ok=150 changed=73 unreachable=0 failed=1 skipped=41 rescued=1 ignored=0
...
localhost : ok=35 changed=21 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0
EXIT NOTICE [Playbook execution failure] **************************************
===============================================================================
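[Editor's note] With a recap this long it helps to mechanically list just the hosts that failed. A minimal sketch, using the recap lines from this log written to a scratch file (the filename recap.txt is arbitrary):

```shell
# Save the PLAY RECAP lines from the log above to a scratch file.
cat > recap.txt <<'EOF'
compute01 : ok=40 changed=4 unreachable=0 failed=0 skipped=75 rescued=0 ignored=0
infra01_repo_container-a051b328 : ok=125 changed=66 unreachable=0 failed=1 skipped=37 rescued=0 ignored=0
infra02_repo_container-733e8f33 : ok=127 changed=67 unreachable=0 failed=1 skipped=38 rescued=0 ignored=0
infra03_repo_container-bc88ba18 : ok=150 changed=73 unreachable=0 failed=1 skipped=41 rescued=1 ignored=0
localhost : ok=35 changed=21 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0
EOF
# Print the hostname of every line with a non-zero failed= count.
awk '/failed=[1-9]/ {print $1}' recap.txt
```

This prints the three repo containers, i.e. every host a failed task ran on, which matches the gluster and systemd_mount failures earlier in the log.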