TASK [openstack.osa.glusterfs : Create gluster peers] **************************
fatal: [infra01_repo_container-a051b328]: FAILED! => {"changed": false, "msg": "peer probe: failed: Probe returned with Transport endpoint is not connected\n", "rc": 1}
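
The probe failure above usually means glusterd on infra01-repo-container-a051b328 could not be reached from the node running the probe. A minimal manual check from one of the other repo containers might look like this (hostname taken from the log; 24007 is glusterd's default management port and is an assumption, adjust if the deployment overrides it):

    systemctl status glusterd                        # is the local daemon running?
    gluster peer status                              # which peers does glusterd already know about?
    getent hosts infra01-repo-container-a051b328     # does the peer hostname resolve in the container?
    nc -zv infra01-repo-container-a051b328 24007     # is glusterd reachable on its management port?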

TASK [openstack.osa.glusterfs : Ensure glusterfs backing directory exists] *****
changed: [infra03_repo_container-bc88ba18]
changed: [infra02_repo_container-733e8f33]

...

TASK [openstack.osa.glusterfs : Reset brick for a replaced node] ***************
skipping: [infra02_repo_container-733e8f33] => (item=gluster volume reset-brick gfs-repo infra02-repo-container-733e8f33:/gluster/bricks/1 start)
skipping: [infra02_repo_container-733e8f33] => (item=gluster volume reset-brick gfs-repo infra02-repo-container-733e8f33:/gluster/bricks/1 infra02-repo-container-733e8f33:/gluster/bricks/1 commit force)
skipping: [infra03_repo_container-bc88ba18] => (item=gluster volume reset-brick gfs-repo infra03-repo-container-bc88ba18:/gluster/bricks/1 start)
skipping: [infra03_repo_container-bc88ba18] => (item=gluster volume reset-brick gfs-repo infra03-repo-container-bc88ba18:/gluster/bricks/1 infra03-repo-container-bc88ba18:/gluster/bricks/1 commit force)

TASK [openstack.osa.glusterfs : Create gluster volume] *************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NoneType: None
fatal: [infra02_repo_container-733e8f33]: FAILED! => {"changed": false, "msg": "error running gluster (/usr/sbin/gluster --mode=script peer probe infra01-repo-container-a051b328) command (rc=1): peer probe: failed: Probe returned with Transport endpoint is not connected\n"}
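
The volume-create failure is the same underlying problem surfacing again: before creating gfs-repo, the task runs a peer probe (the exact command is quoted in the error), and that probe still cannot reach infra01. Re-running it by hand from the failing container may confirm this (commands copied from, or derived from, the log above):

    /usr/sbin/gluster --mode=script peer probe infra01-repo-container-a051b328
    gluster peer status
    gluster volume info gfs-repo     # expected to fail until the peers are connected and the volume exists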

...

TASK [systemd_mount : Set the state of the mount] ******************************
fatal: [infra03_repo_container-bc88ba18]: FAILED! => {"changed": false, "cmd": "systemctl reload-or-restart $(systemd-escape -p --suffix=\"mount\" \"/var/www/repo\")", "delta": "0:00:00.107599", "end": "2023-01-20 00:23:59.417681", "msg": "non-zero return code", "rc": 1, "start": "2023-01-20 00:23:59.310082", "stderr": "Job failed. See \"journalctl -xe\" for details.", "stderr_lines": ["Job failed. See \"journalctl -xe\" for details."], "stdout": "", "stdout_lines": []}

TASK [systemd_mount : Set the state of the mount (fallback)] *******************
fatal: [infra03_repo_container-bc88ba18]: FAILED! => {"changed": false, "msg": "Unable to start service var-www-repo.mount: Job failed. See \"journalctl -xe\" for details.\n"}
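
The repo mount on infra03 presumably fails because the gfs-repo volume was never created above. The unit name and the journalctl hint come straight from the error message; a closer look on that container might be:

    systemctl status var-www-repo.mount
    journalctl -xeu var-www-repo.mount     # the actual reason the mount job failed
    gluster volume status gfs-repo         # does the volume exist and are its bricks online?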

...

RUNNING HANDLER [galera_server : Start new cluster] ****************************
ok: [infra03_galera_container-df158e10 -> infra01_galera_container-e81bec2e(172.17.246.64)]

RUNNING HANDLER [galera_server : Restart mysql (All)] **************************
ok: [infra03_galera_container-df158e10 -> infra01_galera_container-e81bec2e(172.17.246.64)] => (item=infra01_galera_container-e81bec2e)
ok: [infra03_galera_container-df158e10 -> infra02_galera_container-ff8012b2(172.17.246.17)] => (item=infra02_galera_container-ff8012b2)
FAILED - RETRYING: [infra03_galera_container-df158e10]: Restart mysql (All) (6 retries left).
FAILED - RETRYING: [infra03_galera_container-df158e10]: Restart mysql (All) (5 retries left).
FAILED - RETRYING: [infra03_galera_container-df158e10]: Restart mysql (All) (4 retries left).
FAILED - RETRYING: [infra03_galera_container-df158e10]: Restart mysql (All) (3 retries left).
FAILED - RETRYING: [infra03_galera_container-df158e10]: Restart mysql (All) (2 retries left).
FAILED - RETRYING: [infra03_galera_container-df158e10]: Restart mysql (All) (1 retries left).
failed: [infra03_galera_container-df158e10] (item=infra03_galera_container-df158e10) => {"ansible_loop_var": "item", "attempts": 6, "changed": false, "item": "infra03_galera_container-df158e10", "msg": "Unable to start service mariadb: Job for mariadb.service failed because the control process exited with error code.\nSee \"systemctl status mariadb.service\" and \"journalctl -xeu mariadb.service\" for details.\n"}
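
mariadb on infra03 never comes back after six restart attempts. The first two commands below are the ones the error message itself points at; the Galera state file is a typical next thing to inspect (its path is the MariaDB default and is an assumption, not taken from this log):

    systemctl status mariadb.service
    journalctl -xeu mariadb.service        # why the control process exited with an error code
    cat /var/lib/mysql/grastate.dat        # Galera state: seqno and safe_to_bootstrap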

RUNNING HANDLER [systemd_service : Restart changed services] *******************
skipping: [infra03_galera_container-df158e10] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'item': {'service_name': 'mariadb', 'systemd_overrides_only': True, 'systemd_overrides': {'Service': {'LimitNOFILE': 164679, 'Restart': 'on-abort', 'RestartSec': '5s', 'Slice': 'galera.slice', 'CPUAccounting': True, 'BlockIOAccounting': True, 'MemoryAccounting': False, 'TasksAccounting': True, 'TimeoutStartSec': 1800, 'PrivateDevices': 'false', 'OOMScoreAdjust': '-1000'}}}, 'ansible_loop_var': 'item'})
skipping: [infra03_galera_container-df158e10] => (item={'dest': '/etc/systemd/system/mariadbcheck@.service', 'src': '/root/.ansible/tmp/ansible-tmp-1674175159.8252733-34108-53292626652311/source', 'md5sum': 'c9197120c7bac24ce7a4dac560fe9cb2', 'checksum': 'c8f32adcc5527ff6a21ce337224d7b776b1426a8', 'changed': True, 'uid': 0, 'gid': 0, 'owner': 'root', 'group': 'root', 'mode': '0644', 'state': 'file', 'size': 649, 'invocation': {'module_args': {'src': '/root/.ansible/tmp/ansible-tmp-1674175159.8252733-34108-53292626652311/source', 'dest': '/etc/systemd/system/mariadbcheck@.service', 'mode': '0644', 'owner': 'root', 'group': 'root', '_original_basename': 'systemd-service.j2', 'follow': True, 'backup': False, 'force': True, 'unsafe_writes': False, 'content': None, 'validate': None, 'directory_mode': None, 'remote_src': None, 'local_follow': None, 'checksum': None, 'seuser': None, 'serole': None, 'selevel': None, 'setype': None, 'attributes': None}}, 'failed': False, 'item': {'service_name': 'mariadbcheck@', 'service_type': 'oneshot', 'execstarts': '-/usr/local/bin/clustercheck', 'enabled': False, 'load': False, 'standard_output': 'socket', 'sockets': [{'socket_name': 'mariadbcheck', 'enabled': True, 'state': 'restarted', 'options': {'ListenStream': '172.17.246.42:9200', 'IPAddressDeny': 'any', 'IPAddressAllow': '172.17.246.64 172.17.246.17 172.17.246.42 172.17.246.1 172.17.246.2 172.17.246.3 127.0.0.1', 'Accept': 'yes'}}]}, 'ansible_loop_var': 'item'})

RUNNING HANDLER [pki : cert changed] *******************************************

RUNNING HANDLER [pki : cert installed] *****************************************

NO MORE HOSTS LEFT *************************************************************

PLAY RECAP *********************************************************************
compute01                  : ok=40   changed=4    unreachable=0    failed=0    skipped=75   rescued=0    ignored=0
...
infra01_repo_container-a051b328 : ok=125  changed=66   unreachable=0    failed=1    skipped=37   rescued=0    ignored=0
infra02_repo_container-733e8f33 : ok=127  changed=67   unreachable=0    failed=1    skipped=38   rescued=0    ignored=0
infra03_repo_container-bc88ba18 : ok=150  changed=73   unreachable=0    failed=1    skipped=41   rescued=1    ignored=0
...
localhost                  : ok=35   changed=21   unreachable=0    failed=0    skipped=8    rescued=0    ignored=0


EXIT NOTICE [Playbook execution failure] **************************************
===============================================================================