- ~/docker/PostDock$ sudo docker-compose -f docker-compose/latest.yml up pgmaster pgslave1 pgslave2 pgslave3 pgslave4 pgpool backup
- Creating network "dockercompose_default" with the default driver
- Creating network "dockercompose_cluster" with driver "bridge"
- Creating volume "dockercompose_backup" with default driver
- Creating volume "dockercompose_pgmaster" with default driver
- Creating volume "dockercompose_pgslave1" with default driver
- Creating volume "dockercompose_pgslave2" with default driver
- Creating volume "dockercompose_pgslave3" with default driver
- Creating volume "dockercompose_pgslave4" with default driver
- Creating dockercompose_pgslave2_1 ...
- Creating dockercompose_pgslave3_1 ...
- Creating dockercompose_pgpool_1 ...
- Creating dockercompose_backup_1 ...
- Creating dockercompose_pgslave2_1
- Creating dockercompose_pgpool_1
- Creating dockercompose_pgslave3_1
- Creating dockercompose_pgslave1_1 ...
- Creating dockercompose_backup_1
- Creating dockercompose_pgmaster_1 ...
- Creating dockercompose_pgslave4_1 ...
- Creating dockercompose_pgslave1_1
- Creating dockercompose_pgmaster_1
- Creating dockercompose_pgmaster_1 ... done
- Attaching to dockercompose_pgslave2_1, dockercompose_pgpool_1, dockercompose_pgslave1_1, dockercompose_pgslave3_1, dockercompose_backup_1, dockercompose_pgslave4_1, dockercompose_pgmaster_1
- pgpool_1 | >>> STARTING SSH (if required)...
- pgslave2_1 | + echo '>>> Setting up STOP handlers...'
- pgslave2_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave2_1 | + trap 'system_stop TERM' TERM
- pgslave2_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave2_1 | + trap 'system_stop SIGTERM' SIGTERM
- pgslave2_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave2_1 | + trap 'system_stop QUIT' QUIT
- pgslave2_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave2_1 | + trap 'system_stop SIGQUIT' SIGQUIT
- pgslave2_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave2_1 | + trap 'system_stop INT' INT
- pgslave1_1 | + echo '>>> Setting up STOP handlers...'
- pgslave2_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave1_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave2_1 | + trap 'system_stop SIGINT' SIGINT
- pgslave2_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave1_1 | + trap 'system_stop TERM' TERM
- pgpool_1 | >>> TUNING UP SSH CLIENT...
- pgslave2_1 | + trap 'system_stop KILL' KILL
- pgslave3_1 | >>> Setting up STOP handlers...
- pgslave1_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- backup_1 | >>> Checking all configurations
- pgslave2_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgpool_1 | >>> STARTING SSH SERVER...
- pgslave4_1 | >>> Setting up STOP handlers...
- pgpool_1 | >>> TURNING PGPOOL...
- pgslave1_1 | + trap 'system_stop SIGTERM' SIGTERM
- backup_1 | >>> Configuring barman for streaming replication
- pgslave2_1 | + trap 'system_stop SIGKILL' SIGKILL
- pgslave3_1 | + echo '>>> Setting up STOP handlers...'
- pgslave3_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgpool_1 | >>> Opening access from all hosts by md5 in /usr/local/etc/pool_hba.conf
- backup_1 | >>> STARTING SSH (if required)...
- pgslave3_1 | + trap 'system_stop TERM' TERM
- pgslave3_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave3_1 | + trap 'system_stop SIGTERM' SIGTERM
- pgslave3_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgmaster_1 | + echo '>>> Setting up STOP handlers...'
- pgslave3_1 | + trap 'system_stop QUIT' QUIT
- backup_1 | >>> TUNING UP SSH CLIENT...
- pgpool_1 | >>> Adding user pcp_user for PCP
- pgslave3_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave3_1 | + trap 'system_stop SIGQUIT' SIGQUIT
- pgslave2_1 | + echo '>>> STARTING SSH (if required)...'
- pgslave2_1 | + source /home/postgres/.ssh/entrypoint.sh
- backup_1 | >>> STARTING SSH SERVER...
- pgmaster_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave3_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave1_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave1_1 | + trap 'system_stop QUIT' QUIT
- pgslave4_1 | + echo '>>> Setting up STOP handlers...'
- pgslave4_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave3_1 | + trap 'system_stop INT' INT
- pgslave1_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave1_1 | >>> Setting up STOP handlers...
- pgslave3_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave3_1 | + trap 'system_stop SIGINT' SIGINT
- pgslave3_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- backup_1 | >>> SETUP BARMAN CRON
- backup_1 | >>>>>> Backup schedule is */30 */5 * * *
- pgslave2_1 | ++ set -e
- pgslave3_1 | + trap 'system_stop KILL' KILL
- pgslave1_1 | + trap 'system_stop SIGQUIT' SIGQUIT
- pgmaster_1 | + trap 'system_stop TERM' TERM
- pgpool_1 | >>> Creating a ~/.pcppass file for pcp_user
- pgslave4_1 | + trap 'system_stop TERM' TERM
- backup_1 | >>> STARTING METRICS SERVER
- pgslave2_1 | ++ cp -f '/home/postgres/.ssh/keys/*' /home/postgres/.ssh/
- pgslave3_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave3_1 | + trap 'system_stop SIGKILL' SIGKILL
- pgslave1_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgmaster_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave4_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- backup_1 | >>> STARTING CRON
- pgslave3_1 | >>> STARTING SSH (if required)...
- pgslave2_1 | >>> Setting up STOP handlers...
- pgpool_1 | >>> Adding users for md5 auth
- pgslave1_1 | + trap 'system_stop INT' INT
- pgmaster_1 | + trap 'system_stop SIGTERM' SIGTERM
- pgslave4_1 | + trap 'system_stop SIGTERM' SIGTERM
- pgslave2_1 | >>> STARTING SSH (if required)...
- pgslave3_1 | + echo '>>> STARTING SSH (if required)...'
- pgmaster_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgpool_1 | >>>>>> Adding user monkey_user
- pgslave1_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave3_1 | + source /home/postgres/.ssh/entrypoint.sh
- pgmaster_1 | + trap 'system_stop QUIT' QUIT
- pgslave4_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgpool_1 | >>> Adding check user 'monkey_user' for md5 auth
- pgslave2_1 | cp: cannot stat '/home/postgres/.ssh/keys/*': No such file or directory
- pgslave1_1 | + trap 'system_stop SIGINT' SIGINT
- pgmaster_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave4_1 | + trap 'system_stop QUIT' QUIT
- pgslave3_1 | ++ set -e
- pgslave2_1 | No pre-populated ssh keys!
- pgslave1_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgpool_1 | >>> Adding user 'monkey_user' as check user
- pgslave3_1 | ++ cp -f '/home/postgres/.ssh/keys/*' /home/postgres/.ssh/
- pgmaster_1 | + trap 'system_stop SIGQUIT' SIGQUIT
- pgslave1_1 | + trap 'system_stop KILL' KILL
- pgslave4_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgmaster_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave2_1 | ++ echo 'No pre-populated ssh keys!'
- pgpool_1 | >>> Adding user 'monkey_user' as health-check user
- pgslave3_1 | cp: cannot stat '/home/postgres/.ssh/keys/*': No such file or directory
- pgslave1_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgmaster_1 | + trap 'system_stop INT' INT
- pgslave4_1 | + trap 'system_stop SIGQUIT' SIGQUIT
- pgslave2_1 | ++ chown -R postgres:postgres /home/postgres
- pgslave3_1 | ++ echo 'No pre-populated ssh keys!'
- pgmaster_1 | >>> Setting up STOP handlers...
- pgpool_1 | >>> Adding backends
- pgslave3_1 | ++ chown -R postgres:postgres /home/postgres
- pgslave1_1 | + trap 'system_stop SIGKILL' SIGKILL
- pgslave2_1 | ++ [[ 0 == \1 ]]
- pgslave4_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgmaster_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgpool_1 | >>>>>> Waiting for backend 0 to start pgpool (WAIT_BACKEND_TIMEOUT=60)
- pgslave1_1 | + echo '>>> STARTING SSH (if required)...'
- pgslave3_1 | No pre-populated ssh keys!
- pgslave2_1 | ++ echo '>>> SSH is not enabled!'
- pgmaster_1 | + trap 'system_stop SIGINT' SIGINT
- pgslave4_1 | + trap 'system_stop INT' INT
- pgslave1_1 | + source /home/postgres/.ssh/entrypoint.sh
- pgpool_1 | 2018/09/17 12:41:26 Waiting for host: tcp://pgmaster:5432
- pgmaster_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgmaster_1 | + trap 'system_stop KILL' KILL
- pgmaster_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgmaster_1 | + trap 'system_stop SIGKILL' SIGKILL
- pgmaster_1 | + echo '>>> STARTING SSH (if required)...'
- pgslave2_1 | >>> SSH is not enabled!
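The `cp: cannot stat '/home/postgres/.ssh/keys/*': No such file or directory` errors occur because the glob is passed to `cp` inside quotes, so when no keys are mounted the literal string `*` matches nothing and `cp` fails; the entrypoint then prints the "No pre-populated ssh keys!" fallback. A hypothetical guarded variant (demo paths, not the real container paths):

```shell
# Demo paths -- the real entrypoint uses /home/postgres/.ssh/keys and
# /home/postgres/.ssh; both directories here exist only for illustration.
KEYS_DIR=./keys-demo
SSH_DIR=./ssh-demo
mkdir -p "$KEYS_DIR" "$SSH_DIR"

# Let the shell expand the glob; if nothing matches, print the same
# fallback the log shows instead of letting cp fail on a literal '*'.
if ls "$KEYS_DIR"/* >/dev/null 2>&1; then
    cp -f "$KEYS_DIR"/* "$SSH_DIR"/
else
    echo 'No pre-populated ssh keys!'
fi
```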
- pgmaster_1 | + source /home/postgres/.ssh/entrypoint.sh
- pgslave3_1 | ++ [[ 0 == \1 ]]
- pgslave1_1 | >>> STARTING SSH (if required)...
- pgslave4_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave3_1 | ++ echo '>>> SSH is not enabled!'
- pgmaster_1 | ++ set -e
- pgslave4_1 | + trap 'system_stop SIGINT' SIGINT
- pgslave1_1 | ++ set -e
- pgslave2_1 | + echo '>>> STARTING POSTGRES...'
- pgslave3_1 | + echo '>>> STARTING POSTGRES...'
- pgmaster_1 | ++ cp -f /home/postgres/.ssh/keys/id_rsa /home/postgres/.ssh/keys/id_rsa.pub /home/postgres/.ssh/
- pgslave4_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave2_1 | >>> STARTING POSTGRES...
- pgslave3_1 | >>> SSH is not enabled!
- pgslave3_1 | >>> STARTING POSTGRES...
- pgslave1_1 | ++ cp -f /home/postgres/.ssh/keys/id_rsa /home/postgres/.ssh/keys/id_rsa.pub /home/postgres/.ssh/
- pgmaster_1 | >>> STARTING SSH (if required)...
- pgslave3_1 | + wait 10
- pgslave4_1 | + trap 'system_stop KILL' KILL
- pgslave1_1 | ++ chown -R postgres:postgres /home/postgres
- pgslave2_1 | + wait 11
- pgmaster_1 | ++ chown -R postgres:postgres /home/postgres
- pgslave3_1 | + /usr/local/bin/cluster/postgres/entrypoint.sh
- pgslave1_1 | >>> TUNING UP SSH CLIENT...
- pgslave4_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
- pgslave4_1 | + trap 'system_stop SIGKILL' SIGKILL
- pgslave1_1 | ++ [[ 1 == \1 ]]
- pgslave2_1 | + /usr/local/bin/cluster/postgres/entrypoint.sh
- pgslave3_1 | >>> SETTING UP POLYMORPHIC VARIABLES (repmgr=3+postgres=9 | repmgr=4, postgres=10)...
- pgslave1_1 | ++ echo '>>> TUNING UP SSH CLIENT...'
- pgmaster_1 | ++ [[ 1 == \1 ]]
- pgslave4_1 | + echo '>>> STARTING SSH (if required)...'
- pgslave1_1 | ++ '[' '!' -f /home/postgres/.ssh/id_rsa.pub ']'
- pgmaster_1 | ++ echo '>>> TUNING UP SSH CLIENT...'
- pgslave3_1 | >>> TUNING UP POSTGRES...
- pgslave2_1 | >>> SETTING UP POLYMORPHIC VARIABLES (repmgr=3+postgres=9 | repmgr=4, postgres=10)...
- pgslave1_1 | ++ chmod 600 -R /home/postgres/.ssh/id_rsa
- pgslave4_1 | + source /home/postgres/.ssh/entrypoint.sh
- pgmaster_1 | >>> TUNING UP SSH CLIENT...
- pgslave3_1 | >>> Cleaning data folder which might have some garbage...
- pgslave2_1 | >>> TUNING UP POSTGRES...
- pgslave1_1 | ++ mkdir -p /var/run/sshd
- pgslave4_1 | >>> STARTING SSH (if required)...
- pgmaster_1 | ++ '[' '!' -f /home/postgres/.ssh/id_rsa.pub ']'
- pgslave3_1 | >>> Check all partner nodes for common upstream node...
- pgslave1_1 | ++ sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
- pgslave2_1 | >>> Cleaning data folder which might have some garbage...
- pgmaster_1 | ++ chmod 600 -R /home/postgres/.ssh/id_rsa
- pgslave4_1 | ++ set -e
- pgslave3_1 | >>>>>> Checking NODE=pgmaster...
- pgslave1_1 | ++ sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
- pgslave2_1 | >>> Auto-detected master name: ''
- pgmaster_1 | ++ mkdir -p /var/run/sshd
- pgslave4_1 | ++ cp -f '/home/postgres/.ssh/keys/*' /home/postgres/.ssh/
- pgslave1_1 | ++ echo 'export VISIBLE=now'
- pgslave2_1 | >>> Setting up repmgr...
- pgslave2_1 | >>> Setting up repmgr config file '/etc/repmgr.conf'...
- pgmaster_1 | ++ sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
- pgslave4_1 | cp: cannot stat '/home/postgres/.ssh/keys/*': No such file or directory
- pgslave2_1 | >>> Setting up upstream node...
- pgslave1_1 | ++ cat /home/postgres/.ssh/id_rsa.pub
- pgslave4_1 | ++ echo 'No pre-populated ssh keys!'
- pgmaster_1 | ++ sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
- pgslave4_1 | ++ chown -R postgres:postgres /home/postgres
- pgslave2_1 | cat: /var/lib/postgresql/data/standby.lock: No such file or directory
- pgmaster_1 | ++ echo 'export VISIBLE=now'
- pgslave1_1 | >>> STARTING SSH SERVER...
- pgslave2_1 | >>> Previously Locked standby upstream node LOCKED_STANDBY=''
- pgslave1_1 | ++ echo '>>> STARTING SSH SERVER...'
- pgmaster_1 | ++ cat /home/postgres/.ssh/id_rsa.pub
- pgslave4_1 | No pre-populated ssh keys!
- pgslave2_1 | >>> Waiting for upstream postgres server...
- pgslave1_1 | ++ /usr/sbin/sshd
- pgmaster_1 | ++ echo '>>> STARTING SSH SERVER...'
- pgslave4_1 | >>> SSH is not enabled!
- pgmaster_1 | ++ /usr/sbin/sshd
- pgslave2_1 | >>> Wait schema replication_db.repmgr on pgslave1:5432(user: replication_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
- pgslave1_1 | + echo '>>> STARTING POSTGRES...'
- pgslave4_1 | ++ [[ 0 == \1 ]]
- pgmaster_1 | >>> STARTING SSH SERVER...
- pgslave1_1 | >>> STARTING POSTGRES...
- pgslave4_1 | ++ echo '>>> SSH is not enabled!'
- pgslave2_1 | psql: could not connect to server: Connection refused
- pgslave2_1 | Is the server running on host "pgslave1" (192.168.112.5) and accepting
- pgslave2_1 | TCP/IP connections on port 5432?
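The repeated `psql: could not connect to server` / `(will try N times more)` messages are a bounded retry loop: each standby polls its upstream until the repmgr schema is reachable or the attempt budget runs out. A sketch of that pattern, with the real `psql` probe stubbed as `check_upstream` (hypothetical name):

```shell
# Bounded retry, as suggested by the ">>> Wait schema ... will try 30
# times with delay 10 seconds" lines; check_upstream stands in for the
# real psql probe against the upstream node.
wait_for_upstream() {
    retries=$1
    delay=$2
    while [ "$retries" -gt 0 ]; do
        if check_upstream; then
            echo ">>>>>> Schema is accessible!"
            return 0
        fi
        retries=$((retries - 1))
        echo ">>>>>> Host is not accessible (will try $retries times more)"
        sleep "$delay"
    done
    return 1
}
```

With a 10-second delay and 30 attempts this matches the `TIMEOUT=300` reported in the log.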
- pgslave1_1 | + wait 16
- pgslave4_1 | >>> STARTING POSTGRES...
- pgmaster_1 | >>> STARTING POSTGRES...
- pgslave1_1 | + /usr/local/bin/cluster/postgres/entrypoint.sh
- pgslave4_1 | + echo '>>> STARTING POSTGRES...'
- pgmaster_1 | + echo '>>> STARTING POSTGRES...'
- pgslave1_1 | >>> SETTING UP POLYMORPHIC VARIABLES (repmgr=3+postgres=9 | repmgr=4, postgres=10)...
- pgslave4_1 | + wait 9
- pgmaster_1 | + wait 16
- pgmaster_1 | + /usr/local/bin/cluster/postgres/entrypoint.sh
- pgslave1_1 | >>> TUNING UP POSTGRES...
- pgmaster_1 | >>> SETTING UP POLYMORPHIC VARIABLES (repmgr=3+postgres=9 | repmgr=4, postgres=10)...
- pgslave1_1 | >>> Cleaning data folder which might have some garbage...
- pgmaster_1 | >>> TUNING UP POSTGRES...
- pgslave1_1 | >>> Check all partner nodes for common upstream node...
- pgslave4_1 | + /usr/local/bin/cluster/postgres/entrypoint.sh
- pgslave1_1 | >>>>>> Checking NODE=pgmaster...
- pgslave4_1 | >>> SETTING UP POLYMORPHIC VARIABLES (repmgr=3+postgres=9 | repmgr=4, postgres=10)...
- pgmaster_1 | >>> Cleaning data folder which might have some garbage...
- pgslave4_1 | >>> TUNING UP POSTGRES...
- pgslave1_1 | psql: could not connect to server: No route to host
- pgslave1_1 | Is the server running on host "pgmaster" (192.168.112.8) and accepting
- pgslave1_1 | TCP/IP connections on port 5432?
- pgslave4_1 | >>> Cleaning data folder which might have some garbage...
- pgslave1_1 | >>>>>> Skipping: failed to get master from the node!
- pgmaster_1 | >>> Check all partner nodes for common upstream node...
- pgslave4_1 | >>> Auto-detected master name: ''
- pgslave1_1 | >>>>>> Checking NODE=pgslave1...
- pgslave1_1 | psql: could not connect to server: Connection refused
- pgslave1_1 | Is the server running on host "pgslave1" (192.168.112.5) and accepting
- pgmaster_1 | >>>>>> Checking NODE=pgmaster...
- pgslave1_1 | TCP/IP connections on port 5432?
- pgslave4_1 | >>> Setting up repmgr...
- pgslave4_1 | >>> Setting up repmgr config file '/etc/repmgr.conf'...
- pgslave4_1 | >>> Setting up upstream node...
- pgslave4_1 | cat: /var/lib/postgresql/data/standby.lock: No such file or directory
- pgslave4_1 | >>> Previously Locked standby upstream node LOCKED_STANDBY=''
- pgslave4_1 | >>> Waiting for upstream postgres server...
- pgslave4_1 | >>> Wait schema replication_db.repmgr on pgslave3:5432(user: replication_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
- pgslave1_1 | >>>>>> Skipping: failed to get master from the node!
- pgslave1_1 | >>>>>> Checking NODE=pgslave3...
- pgmaster_1 | psql: could not connect to server: Connection refused
- pgmaster_1 | Is the server running on host "pgmaster" (192.168.112.8) and accepting
- pgmaster_1 | TCP/IP connections on port 5432?
- pgslave4_1 | psql: could not connect to server: Connection refused
- pgslave4_1 | Is the server running on host "pgslave3" (192.168.112.4) and accepting
- pgslave1_1 | psql: could not connect to server: Connection refused
- pgmaster_1 | >>>>>> Skipping: failed to get master from the node!
- pgslave4_1 | TCP/IP connections on port 5432?
- pgslave1_1 | Is the server running on host "pgslave3" (192.168.112.4) and accepting
- pgmaster_1 | >>>>>> Checking NODE=pgslave1...
- pgslave1_1 | TCP/IP connections on port 5432?
- pgslave1_1 | >>>>>> Skipping: failed to get master from the node!
- pgslave1_1 | >>> Auto-detected master name: ''
- pgslave1_1 | >>> Setting up repmgr...
- pgslave1_1 | >>> Setting up repmgr config file '/etc/repmgr.conf'...
- pgslave1_1 | >>> Setting up upstream node...
- pgslave1_1 | cat: /var/lib/postgresql/data/standby.lock: No such file or directory
- pgslave1_1 | >>> Previously Locked standby upstream node LOCKED_STANDBY=''
- pgslave1_1 | >>> Waiting for upstream postgres server...
- pgslave1_1 | >>> Wait schema replication_db.repmgr on pgmaster:5432(user: replication_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
- pgmaster_1 | psql: could not connect to server: Connection refused
- pgmaster_1 | Is the server running on host "pgslave1" (192.168.112.5) and accepting
- pgmaster_1 | TCP/IP connections on port 5432?
- pgslave1_1 | psql: could not connect to server: Connection refused
- pgslave1_1 | Is the server running on host "pgmaster" (192.168.112.8) and accepting
- pgslave1_1 | TCP/IP connections on port 5432?
- pgslave3_1 | psql: could not connect to server: Connection refused
- pgslave3_1 | Is the server running on host "pgmaster" (192.168.112.8) and accepting
- pgslave3_1 | TCP/IP connections on port 5432?
- pgslave3_1 | >>>>>> Skipping: failed to get master from the node!
- pgslave3_1 | >>>>>> Checking NODE=pgslave1...
- pgmaster_1 | >>>>>> Skipping: failed to get master from the node!
- pgmaster_1 | >>>>>> Checking NODE=pgslave3...
- pgslave3_1 | psql: could not connect to server: Connection refused
- pgslave3_1 | Is the server running on host "pgslave1" (192.168.112.5) and accepting
- pgslave3_1 | TCP/IP connections on port 5432?
- pgslave3_1 | >>>>>> Skipping: failed to get master from the node!
- pgslave3_1 | >>>>>> Checking NODE=pgslave3...
- pgmaster_1 | psql: could not connect to server: Connection refused
- pgmaster_1 | Is the server running on host "pgslave3" (192.168.112.4) and accepting
- pgmaster_1 | TCP/IP connections on port 5432?
- pgmaster_1 | >>>>>> Skipping: failed to get master from the node!
- pgmaster_1 | >>> Auto-detected master name: ''
- pgmaster_1 | >>> Setting up repmgr...
- pgmaster_1 | >>> Setting up repmgr config file '/etc/repmgr.conf'...
- pgmaster_1 | >>> Setting up upstream node...
- pgmaster_1 | >>> Sending in background postgres start...
- pgmaster_1 | >>> Waiting for local postgres server recovery if any in progress: LAUNCH_RECOVERY_CHECK_INTERVAL=30
- pgmaster_1 | >>> Recovery is in progress:
- pgslave3_1 | psql: could not connect to server: Connection refused
- pgslave3_1 | Is the server running on host "pgslave3" (192.168.112.4) and accepting
- pgslave3_1 | TCP/IP connections on port 5432?
- pgslave3_1 | >>>>>> Skipping: failed to get master from the node!
- pgslave3_1 | >>> Auto-detected master name: ''
- pgslave3_1 | >>> Setting up repmgr...
- pgslave3_1 | >>> Setting up repmgr config file '/etc/repmgr.conf'...
- pgslave3_1 | >>> Setting up upstream node...
- pgslave3_1 | cat: /var/lib/postgresql/data/standby.lock: No such file or directory
- pgslave3_1 | >>> Previously Locked standby upstream node LOCKED_STANDBY=''
- pgslave3_1 | >>> Waiting for upstream postgres server...
- pgmaster_1 | The files belonging to this database system will be owned by user "postgres".
- pgmaster_1 | This user must also own the server process.
- pgmaster_1 |
- pgmaster_1 | The database cluster will be initialized with locale "en_US.utf8".
- pgmaster_1 | The default database encoding has accordingly been set to "UTF8".
- pgmaster_1 | The default text search configuration will be set to "english".
- pgmaster_1 |
- pgmaster_1 | Data page checksums are disabled.
- pgmaster_1 |
- pgmaster_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
- pgmaster_1 | creating subdirectories ... ok
- pgslave3_1 | >>> Wait schema replication_db.repmgr on pgmaster:5432(user: replication_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
- pgmaster_1 | selecting default max_connections ... 100
- pgmaster_1 | selecting default shared_buffers ... 128MB
- pgmaster_1 | selecting dynamic shared memory implementation ... posix
- pgslave3_1 | psql: could not connect to server: Connection refused
- pgslave3_1 | Is the server running on host "pgmaster" (192.168.112.8) and accepting
- pgslave3_1 | TCP/IP connections on port 5432?
- pgmaster_1 | creating configuration files ... ok
- pgmaster_1 | running bootstrap script ... ok
- pgmaster_1 | performing post-bootstrap initialization ... ok
- pgmaster_1 | syncing data to disk ...
- pgmaster_1 | WARNING: enabling "trust" authentication for local connections
- pgmaster_1 | You can change this by editing pg_hba.conf or using the option -A, or
- pgmaster_1 | --auth-local and --auth-host, the next time you run initdb.
- pgmaster_1 | ok
- pgmaster_1 |
- pgmaster_1 | Success. You can now start the database server using:
- pgmaster_1 |
- pgmaster_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
- pgmaster_1 |
- pgmaster_1 | waiting for server to start....2018-09-17 12:41:32.907 UTC [105] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
- pgmaster_1 | 2018-09-17 12:41:32.926 UTC [106] LOG: database system was shut down at 2018-09-17 12:41:32 UTC
- pgmaster_1 | 2018-09-17 12:41:32.931 UTC [105] LOG: database system is ready to accept connections
- pgmaster_1 | done
- pgmaster_1 | server started
- pgmaster_1 | CREATE DATABASE
- pgmaster_1 |
- pgmaster_1 |
- pgmaster_1 | /docker-entrypoint.sh: running /docker-entrypoint-initdb.d/entrypoint.sh
- pgmaster_1 | >>> Configuring /var/lib/postgresql/data/postgresql.conf
- pgmaster_1 | >>>>>> Config file was replaced with standard one!
- pgmaster_1 | >>>>>> Adding config 'listen_addresses'=''*''
- pgmaster_1 | >>>>>> Adding config 'max_replication_slots'='5'
- pgmaster_1 | >>>>>> Adding config 'shared_preload_libraries'=''repmgr''
- pgmaster_1 | >>> Creating replication user 'replication_user'
- pgmaster_1 | CREATE ROLE
- pgmaster_1 | >>> Creating replication db 'replication_db'
- pgmaster_1 |
- pgmaster_1 | 2018-09-17 12:41:33.622 UTC [105] LOG: received fast shutdown request
- pgmaster_1 | waiting for server to shut down....2018-09-17 12:41:33.625 UTC [105] LOG: aborting any active transactions
- pgmaster_1 | 2018-09-17 12:41:33.626 UTC [105] LOG: worker process: logical replication launcher (PID 112) exited with exit code 1
- pgmaster_1 | 2018-09-17 12:41:33.627 UTC [107] LOG: shutting down
- pgmaster_1 | 2018-09-17 12:41:33.646 UTC [105] LOG: database system is shut down
- pgmaster_1 | done
- pgmaster_1 | server stopped
- pgmaster_1 |
- pgmaster_1 | PostgreSQL init process complete; ready for start up.
- pgmaster_1 |
- pgmaster_1 | 2018-09-17 12:41:33.737 UTC [65] LOG: listening on IPv4 address "0.0.0.0", port 5432
- pgmaster_1 | 2018-09-17 12:41:33.737 UTC [65] LOG: listening on IPv6 address "::", port 5432
- pgpool_1 | 2018/09/17 12:41:33 Connected to tcp://pgmaster:5432
- pgpool_1 | >>>>>> Adding backend 0
- pgpool_1 | >>>>>> Waiting for backend 1 to start pgpool (WAIT_BACKEND_TIMEOUT=60)
- pgpool_1 | 2018/09/17 12:41:33 Waiting for host: tcp://pgslave1:5432
- pgmaster_1 | 2018-09-17 12:41:33.743 UTC [65] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
- pgmaster_1 | 2018-09-17 12:41:33.758 UTC [144] LOG: database system was shut down at 2018-09-17 12:41:33 UTC
- pgmaster_1 | 2018-09-17 12:41:33.759 UTC [145] LOG: incomplete startup packet
- pgmaster_1 | 2018-09-17 12:41:33.766 UTC [65] LOG: database system is ready to accept connections
- pgslave2_1 | >>>>>> Host pgslave1:5432 is not accessible (will try 30 times more)
- pgslave2_1 | psql: could not connect to server: Connection refused
- pgslave2_1 | Is the server running on host "pgslave1" (192.168.112.5) and accepting
- pgslave2_1 | TCP/IP connections on port 5432?
- pgslave4_1 | >>>>>> Host pgslave3:5432 is not accessible (will try 30 times more)
- pgslave4_1 | psql: could not connect to server: Connection refused
- pgslave4_1 | Is the server running on host "pgslave3" (192.168.112.4) and accepting
- pgslave4_1 | TCP/IP connections on port 5432?
- pgslave1_1 | >>>>>> Host pgmaster:5432 is not accessible (will try 30 times more)
- pgslave3_1 | >>>>>> Host pgmaster:5432 is not accessible (will try 30 times more)
- pgslave2_1 | >>>>>> Host pgslave1:5432 is not accessible (will try 29 times more)
- pgslave2_1 | psql: could not connect to server: Connection refused
- pgslave2_1 | Is the server running on host "pgslave1" (192.168.112.5) and accepting
- pgslave2_1 | TCP/IP connections on port 5432?
- pgslave4_1 | >>>>>> Host pgslave3:5432 is not accessible (will try 29 times more)
- pgslave4_1 | psql: could not connect to server: Connection refused
- pgslave4_1 | Is the server running on host "pgslave3" (192.168.112.4) and accepting
- pgslave4_1 | TCP/IP connections on port 5432?
- pgslave1_1 | >>>>>> Schema replication_db.repmgr is still not accessible on host pgmaster:5432 (will try 29 times more)
- pgslave3_1 | >>>>>> Schema replication_db.repmgr is still not accessible on host pgmaster:5432 (will try 29 times more)
- pgslave2_1 | >>>>>> Host pgslave1:5432 is not accessible (will try 28 times more)
- pgslave2_1 | psql: could not connect to server: Connection refused
- pgslave2_1 | Is the server running on host "pgslave1" (192.168.112.5) and accepting
- pgslave2_1 | TCP/IP connections on port 5432?
- pgslave4_1 | >>>>>> Host pgslave3:5432 is not accessible (will try 28 times more)
- pgslave4_1 | psql: could not connect to server: Connection refused
- pgslave4_1 | Is the server running on host "pgslave3" (192.168.112.4) and accepting
- pgslave4_1 | TCP/IP connections on port 5432?
- backup_1 | 2018-09-17 12:42:01,617 [33] barman.config DEBUG: Including configuration file: upstream.conf
- backup_1 | 2018-09-17 12:42:01,618 [33] barman.cli DEBUG: Initialised Barman version 2.4 (config: /etc/barman.conf, args: {'server_name': ['pg_cluster'], 'format': 'console', 'quiet': False, 'command': 'show_server', 'debug': False})
- backup_1 | 2018-09-17 12:42:01,632 [33] barman.backup_executor DEBUG: The default backup strategy for postgres backup_method is: concurrent_backup
- backup_1 | 2018-09-17 12:42:01,632 [33] barman.server DEBUG: Retention policy for server pg_cluster: RECOVERY WINDOW OF 30 DAYS
- backup_1 | 2018-09-17 12:42:01,632 [33] barman.server DEBUG: WAL retention policy for server pg_cluster: MAIN
- backup_1 | 2018-09-17 12:42:01,654 [33] barman.command_wrappers DEBUG: Command: ['/usr/bin/pg_receivewal', '--version']
- pgmaster_1 | >>>>>> RECOVERY_WAL_ID is empty!
- pgmaster_1 | >>> Not in recovery state (anymore)
- pgmaster_1 | >>> Waiting for local postgres server start...
- pgmaster_1 | >>> Wait schema replication_db.public on pgmaster:5432(user: replication_user,password: *******), will try 9 times with delay 10 seconds (TIMEOUT=90)
- pgslave1_1 | >>>>>> Schema replication_db.repmgr is still not accessible on host pgmaster:5432 (will try 28 times more)
- pgmaster_1 | >>>>>> Schema replication_db.public exists on host pgmaster:5432!
- pgmaster_1 | >>> Registering node with role master
- pgmaster_1 | INFO: connecting to primary database...
- pgmaster_1 | NOTICE: attempting to install extension "repmgr"
- pgmaster_1 | NOTICE: "repmgr" extension successfully installed
- pgmaster_1 | INFO: executing notification command for event "cluster_created"
- pgmaster_1 | DETAIL: command is:
- pgmaster_1 | /usr/local/bin/cluster/repmgr/events/router.sh 1 cluster_created 1 "2018-09-17 12:42:01.875268+00" ""
- pgmaster_1 | [REPMGR EVENT] Node id: 1; Event type: cluster_created; Success [1|0]: 1; Time: 2018-09-17 12:42:01.875268+00; Details:
- pgslave3_1 | >>>>>> Schema replication_db.repmgr is still not accessible on host pgmaster:5432 (will try 28 times more)
- pgmaster_1 | INFO: executing notification command for event "primary_register"
- pgmaster_1 | DETAIL: command is:
- pgmaster_1 | /usr/local/bin/cluster/repmgr/events/router.sh 1 primary_register 1 "2018-09-17 12:42:01.885953+00" ""
- pgmaster_1 | [REPMGR EVENT] Node id: 1; Event type: primary_register; Success [1|0]: 1; Time: 2018-09-17 12:42:01.885953+00; Details:
- pgmaster_1 | NOTICE: primary node record (id: 1) registered
- pgmaster_1 | >>> Starting repmgr daemon...
- pgslave3_1 | >>>>>> Schema replication_db.repmgr exists on host pgmaster:5432!
- pgmaster_1 | [2018-09-17 12:42:01] [NOTICE] repmgrd (repmgr 4.0.6) starting up
- pgmaster_1 | INFO: looking for configuration file in /etc
- pgmaster_1 | INFO: configuration file found at: "/etc/repmgr.conf"
- pgmaster_1 | [2018-09-17 12:42:01] [INFO] connecting to database "user=replication_user password=replication_pass host=pgmaster dbname=replication_db port=5432 connect_timeout=2"
- pgmaster_1 | [2018-09-17 12:42:01] [NOTICE] starting monitoring of node "node1" (ID: 1)
- pgmaster_1 | [2018-09-17 12:42:01] [INFO] executing notification command for event "repmgrd_start"
- pgmaster_1 | [2018-09-17 12:42:01] [DETAIL] command is:
- pgmaster_1 | /usr/local/bin/cluster/repmgr/events/router.sh 1 repmgrd_start 1 "2018-09-17 12:42:01.962474+00" "monitoring cluster primary \"node1\" (node ID: 1)"
- pgmaster_1 | [2018-09-17 12:42:01] [NOTICE] monitoring cluster primary "node1" (node ID: 1)
- pgslave3_1 | >>> REPLICATION_UPSTREAM_NODE_ID=1
- pgslave3_1 | >>> Sending in background postgres start...
- pgslave3_1 | >>> Waiting for upstream postgres server...
- pgslave3_1 | >>> Wait schema replication_db.repmgr on pgmaster:5432(user: replication_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
- backup_1 | 2018-09-17 12:42:02,052 [33] barman.command_wrappers DEBUG: Command return code: 0
- backup_1 | 2018-09-17 12:42:02,052 [33] barman.command_wrappers DEBUG: Command stdout: pg_receivewal (PostgreSQL) 10.5 (Debian 10.5-1.pgdg80+1)
- backup_1 |
- backup_1 | 2018-09-17 12:42:02,052 [33] barman.command_wrappers DEBUG: Command stderr:
- backup_1 | 2018-09-17 12:42:02,054 [33] barman.wal_archiver DEBUG: Look for 'barman_receive_wal' in 'synchronous_standby_names': ['']
- backup_1 | 2018-09-17 12:42:02,054 [33] barman.wal_archiver DEBUG: Synchronous WAL streaming for barman_receive_wal: False
- backup_1 | 2018-09-17 12:42:02,054 [33] barman.command_wrappers DEBUG: Command: ['/usr/bin/pg_basebackup', '--version']
- pgslave3_1 | >>>>>> Schema replication_db.repmgr exists on host pgmaster:5432!
- pgslave3_1 | >>> Starting standby node...
- pgslave3_1 | >>> Instance hasn't been set up yet.
- pgslave3_1 | >>> Cloning primary node...
- pgslave3_1 | >>> Waiting for upstream postgres server...
- pgslave3_1 | >>> Wait schema replication_db.repmgr on pgmaster:5432(user: replication_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
- pgslave3_1 | NOTICE: destination directory "/var/lib/postgresql/data" provided
- pgslave3_1 | INFO: connecting to source node
- pgslave3_1 | DETAIL: connection string is: host=pgmaster user=replication_user port=5432 dbname=replication_db
- pgslave3_1 | DETAIL: current installation size is 37 MB
- pgslave3_1 | INFO: checking and correcting permissions on existing directory "/var/lib/postgresql/data"
- pgslave3_1 | NOTICE: >>>>>> Schema replication_db.repmgr exists on host pgmaster:5432!
- pgslave3_1 | starting backup (using pg_basebackup)...
- pgslave3_1 | INFO: executing:
- pgslave3_1 | /usr/lib/postgresql/10/bin/pg_basebackup -l "repmgr base backup" -D /var/lib/postgresql/data -h pgmaster -p 5432 -U replication_user -c fast -X stream -S repmgr_slot_4
- pgslave3_1 | >>> Waiting for cloning on this node is over(if any in progress): CLEAN_UP_ON_FAIL=, INTERVAL=30
- pgslave3_1 | >>> Replicated: 4
- backup_1 | 2018-09-17 12:42:02,441 [33] barman.command_wrappers DEBUG: Command return code: 0
- backup_1 | 2018-09-17 12:42:02,442 [33] barman.command_wrappers DEBUG: Command stdout: pg_basebackup (PostgreSQL) 10.5 (Debian 10.5-1.pgdg80+1)
- backup_1 |
- backup_1 | 2018-09-17 12:42:02,442 [33] barman.command_wrappers DEBUG: Command stderr:
- backup_1 | Creating replication slot: barman_the_backupper
- backup_1 | 2018-09-17 12:42:02,562 [38] barman.config DEBUG: Including configuration file: upstream.conf
- backup_1 | 2018-09-17 12:42:02,562 [38] barman.cli DEBUG: Initialised Barman version 2.4 (config: /etc/barman.conf, args: {'reset': False, 'server_name': 'pg_cluster', 'format': 'console', 'stop': False, 'create_slot': True, 'quiet': False, 'drop_slot': False, 'command': 'receive_wal', 'debug': False})
- backup_1 | 2018-09-17 12:42:02,576 [38] barman.backup_executor DEBUG: The default backup strategy for postgres backup_method is: concurrent_backup
- backup_1 | 2018-09-17 12:42:02,577 [38] barman.server DEBUG: Retention policy for server pg_cluster: RECOVERY WINDOW OF 30 DAYS
- backup_1 | 2018-09-17 12:42:02,577 [38] barman.server DEBUG: WAL retention policy for server pg_cluster: MAIN
- backup_1 | 2018-09-17 12:42:02,579 [38] barman.server INFO: Creating physical replication slot 'barman_the_backupper' on server 'pg_cluster'
- backup_1 | 2018-09-17 12:42:02,632 [38] barman.server INFO: Replication slot 'barman_the_backupper' created
- backup_1 | Creating physical replication slot 'barman_the_backupper' on server 'pg_cluster'
- backup_1 | Replication slot 'barman_the_backupper' created
- backup_1 | 2018-09-17 12:42:02,739 [39] barman.config DEBUG: Including configuration file: upstream.conf
- backup_1 | 2018-09-17 12:42:02,740 [39] barman.cli DEBUG: Initialised Barman version 2.4 (config: /etc/barman.conf, args: {'debug': False, 'command': 'cron', 'quiet': False, 'format': 'console'})
- backup_1 | 2018-09-17 12:42:02,754 [39] barman.backup_executor DEBUG: The default backup strategy for postgres backup_method is: concurrent_backup
- backup_1 | 2018-09-17 12:42:02,754 [39] barman.server DEBUG: Retention policy for server pg_cluster: RECOVERY WINDOW OF 30 DAYS
- backup_1 | 2018-09-17 12:42:02,754 [39] barman.server DEBUG: WAL retention policy for server pg_cluster: MAIN
- backup_1 | 2018-09-17 12:42:02,754 [39] barman.command_wrappers DEBUG: BarmanSubProcess: ['/usr/bin/python', '/usr/bin/barman', '-c', '/etc/barman.conf', '-q', 'archive-wal', 'pg_cluster']
- pgslave3_1 | NOTICE: standby clone (using pg_basebackup) complete
- pgslave3_1 | NOTICE: you can now start your PostgreSQL server
- pgslave3_1 | HINT: for example: pg_ctl -D /var/lib/postgresql/data start
- pgslave3_1 | HINT: after starting the server, you need to register this standby with "repmgr standby register"
- pgslave3_1 | INFO: executing notification command for event "standby_clone"
- pgslave3_1 | DETAIL: command is:
- pgslave3_1 | /usr/local/bin/cluster/repmgr/events/router.sh 4 standby_clone 1 "2018-09-17 12:42:02.833449+00" "cloned from host \"pgmaster\", port 5432; backup method: pg_basebackup; --force: Y"
- pgslave3_1 | [REPMGR EVENT] Node id: 4; Event type: standby_clone; Success [1|0]: 1; Time: 2018-09-17 12:42:02.833449+00; Details: cloned from host "pgmaster", port 5432; backup method: pg_basebackup; --force: Y
- pgslave3_1 | >>> Configuring /var/lib/postgresql/data/postgresql.conf
- pgslave3_1 | >>>>>> Will add configs to the exists file
- pgslave3_1 | >>>>>> Adding config 'listen_addresses'=''*''
- pgslave3_1 | >>>>>> Adding config 'shared_preload_libraries'=''repmgr''
- pgslave3_1 | >>> Starting postgres...
- pgslave3_1 | >>> Waiting for local postgres server recovery if any in progress:LAUNCH_RECOVERY_CHECK_INTERVAL=30
- pgslave3_1 | >>> Recovery is in progress:
- pgslave3_1 | 2018-09-17 12:42:02.928 UTC [168] LOG: listening on IPv4 address "0.0.0.0", port 5432
- pgslave3_1 | 2018-09-17 12:42:02.929 UTC [168] LOG: listening on IPv6 address "::", port 5432
- pgslave3_1 | 2018-09-17 12:42:02.934 UTC [168] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
- pgslave3_1 | 2018-09-17 12:42:02.954 UTC [177] LOG: database system was interrupted; last known up at 2018-09-17 12:42:02 UTC
- pgslave3_1 | 2018-09-17 12:42:03.029 UTC [177] LOG: entering standby mode
- pgslave3_1 | 2018-09-17 12:42:03.036 UTC [177] LOG: redo starts at 0/2000028
- pgslave3_1 | 2018-09-17 12:42:03.039 UTC [177] LOG: consistent recovery state reached at 0/20000F8
- pgslave3_1 | 2018-09-17 12:42:03.039 UTC [168] LOG: database system is ready to accept read only connections
- pgslave3_1 | 2018-09-17 12:42:03.051 UTC [181] LOG: started streaming WAL from primary at 0/3000000 on timeline 1
- backup_1 | 2018-09-17 12:42:03,112 [39] barman.command_wrappers DEBUG: BarmanSubProcess: subprocess started. pid: 40
- backup_1 | 2018-09-17 12:42:03,113 [39] barman.command_wrappers DEBUG: BarmanSubProcess: ['/usr/bin/python', '/usr/bin/barman', '-c', '/etc/barman.conf', '-q', 'receive-wal', 'pg_cluster']
- backup_1 | 2018-09-17 12:42:03,449 [39] barman.command_wrappers DEBUG: BarmanSubProcess: subprocess started. pid: 41
- backup_1 | Starting WAL archiving for server pg_cluster
- backup_1 | Starting streaming archiver for server pg_cluster
- pgslave2_1 | >>>>>> Host pgslave1:5432 is not accessible (will try 27 times more)
- pgslave2_1 | psql: could not connect to server: Connection refused
- pgslave2_1 | Is the server running on host "pgslave1" (192.168.112.5) and accepting
- pgslave2_1 | TCP/IP connections on port 5432?
- pgslave4_1 | >>>>>> Host pgslave3:5432 is not accessible (will try 27 times more)
- pgslave4_1 | >>>>>> Schema replication_db.repmgr exists on host pgslave3:5432!
- pgslave4_1 | >>> Can not get REPLICATION_UPSTREAM_NODE_ID from LOCK file or by CURRENT_REPLICATION_PRIMARY_HOST=pgslave3
- dockercompose_pgslave4_1 exited with code 1
- pgslave1_1 | >>>>>> Schema replication_db.repmgr is still not accessible on host pgmaster:5432 (will try 27 times more)
- pgslave1_1 | >>>>>> Schema replication_db.repmgr exists on host pgmaster:5432!
- pgslave1_1 | >>> REPLICATION_UPSTREAM_NODE_ID=1
- pgslave1_1 | >>> Sending in background postgres start...
- pgslave1_1 | >>> Waiting for upstream postgres server...
- pgslave1_1 | >>> Wait schema replication_db.repmgr on pgmaster:5432(user: replication_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
- pgslave1_1 | >>>>>> Schema replication_db.repmgr exists on host pgmaster:5432!
- pgslave1_1 | >>> Starting standby node...
- pgslave1_1 | >>> Instance hasn't been set up yet.
- pgslave1_1 | >>> Clonning primary node...
- pgslave1_1 | >>> Waiting for upstream postgres server...
- pgslave1_1 | >>> Wait schema replication_db.repmgr on pgmaster:5432(user: replication_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
- pgslave1_1 | NOTICE: destination directory "/var/lib/postgresql/data" provided
- pgslave1_1 | INFO: connecting to source node
- pgslave1_1 | DETAIL: connection string is: host=pgmaster user=replication_user port=5432 dbname=replication_db
- pgslave1_1 | DETAIL: current installation size is 37 MB
- pgslave1_1 | >>>>>> Schema replication_db.repmgr exists on host pgmaster:5432!
- pgslave1_1 | >>> Waiting for cloning on this node is over(if any in progress): CLEAN_UP_ON_FAIL=, INTERVAL=30
- pgslave1_1 | INFO: checking and correcting permissions on existing directory "/var/lib/postgresql/data"
- pgslave1_1 | >>> Replicated: 4
- pgslave1_1 | NOTICE: starting backup (using pg_basebackup)...
- pgslave1_1 | INFO: executing:
- pgslave1_1 | /usr/lib/postgresql/10/bin/pg_basebackup -l "repmgr base backup" -D /var/lib/postgresql/data -h pgmaster -p 5432 -U replication_user -c fast -X stream -S repmgr_slot_2
- pgslave1_1 | NOTICE: standby clone (using pg_basebackup) complete
- pgslave1_1 | NOTICE: you can now start your PostgreSQL server
- pgslave1_1 | HINT: for example: pg_ctl -D /var/lib/postgresql/data start
- pgslave1_1 | HINT: after starting the server, you need to register this standby with "repmgr standby register"
- pgslave1_1 | INFO: executing notification command for event "standby_clone"
- pgslave1_1 | DETAIL: command is:
- pgslave1_1 | /usr/local/bin/cluster/repmgr/events/router.sh 2 standby_clone 1 "2018-09-17 12:42:12.787654+00" "cloned from host \"pgmaster\", port 5432; backup method: pg_basebackup; --force: Y"
- pgslave1_1 | [REPMGR EVENT] Node id: 2; Event type: standby_clone; Success [1|0]: 1; Time: 2018-09-17 12:42:12.787654+00; Details: cloned from host "pgmaster", port 5432; backup method: pg_basebackup; --force: Y
- pgslave1_1 | >>> Configuring /var/lib/postgresql/data/postgresql.conf
- pgslave1_1 | >>>>>> Will add configs to the exists file
- pgslave1_1 | >>>>>> Adding config 'max_replication_slots'='10'
- pgslave1_1 | >>>>>> Adding config 'shared_preload_libraries'=''repmgr''
- pgslave1_1 | >>> Starting postgres...
- pgslave1_1 | >>> Waiting for local postgres server recovery if any in progress:LAUNCH_RECOVERY_CHECK_INTERVAL=30
- pgslave1_1 | >>> Recovery is in progress:
- pgslave1_1 | 2018-09-17 12:42:12.954 UTC [184] LOG: listening on IPv4 address "0.0.0.0", port 5432
- pgslave1_1 | 2018-09-17 12:42:12.954 UTC [184] LOG: listening on IPv6 address "::", port 5432
- pgpool_1 | 2018/09/17 12:42:12 Connected to tcp://pgslave1:5432
- pgpool_1 | >>>>>> Adding backend 1
- pgpool_1 | >>>>>> Waiting for backend 3 to start pgpool (WAIT_BACKEND_TIMEOUT=60)
- pgpool_1 | 2018/09/17 12:42:12 Waiting for host: tcp://pgslave3:5432
- pgpool_1 | 2018/09/17 12:42:12 Connected to tcp://pgslave3:5432
- pgslave3_1 | 2018-09-17 12:42:12.961 UTC [184] LOG: incomplete startup packet
- pgpool_1 | >>>>>> Adding backend 3
- pgpool_1 | >>>>>> Waiting for backend 2 to start pgpool (WAIT_BACKEND_TIMEOUT=60)
- pgslave1_1 | 2018-09-17 12:42:12.963 UTC [184] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
- pgpool_1 | 2018/09/17 12:42:12 Waiting for host: tcp://pgslave2:5432
- pgslave1_1 | 2018-09-17 12:42:12.980 UTC [193] LOG: database system was interrupted; last known up at 2018-09-17 12:42:12 UTC
- pgslave1_1 | 2018-09-17 12:42:12.980 UTC [194] LOG: incomplete startup packet
- pgslave1_1 | 2018-09-17 12:42:13.044 UTC [193] LOG: entering standby mode
- pgslave1_1 | 2018-09-17 12:42:13.051 UTC [193] LOG: redo starts at 0/4000028
- pgslave1_1 | 2018-09-17 12:42:13.053 UTC [193] LOG: consistent recovery state reached at 0/40000F8
- pgslave1_1 | 2018-09-17 12:42:13.053 UTC [184] LOG: database system is ready to accept read only connections
- pgslave1_1 | 2018-09-17 12:42:13.059 UTC [198] LOG: started streaming WAL from primary at 0/5000000 on timeline 1
- pgslave2_1 | >>>>>> Host pgslave1:5432 is not accessible (will try 26 times more)
- pgslave2_1 | >>>>>> Schema replication_db.repmgr exists on host pgslave1:5432!
- pgslave2_1 | >>> Can not get REPLICATION_UPSTREAM_NODE_ID from LOCK file or by CURRENT_REPLICATION_PRIMARY_HOST=pgslave1
- dockercompose_pgslave2_1 exited with code 1