Pasted by a guest, Sep 17th, 2018
~/docker/PostDock$ sudo docker-compose -f docker-compose/latest.yml up pgmaster pgslave1 pgslave2 pgslave3 pgslave4 pgpool backup
Creating network "dockercompose_default" with the default driver
Creating network "dockercompose_cluster" with driver "bridge"
Creating volume "dockercompose_backup" with default driver
Creating volume "dockercompose_pgmaster" with default driver
Creating volume "dockercompose_pgslave1" with default driver
Creating volume "dockercompose_pgslave2" with default driver
Creating volume "dockercompose_pgslave3" with default driver
Creating volume "dockercompose_pgslave4" with default driver
Creating dockercompose_pgslave2_1 ...
Creating dockercompose_pgslave3_1 ...
Creating dockercompose_pgpool_1 ...
Creating dockercompose_backup_1 ...
Creating dockercompose_pgslave2_1
Creating dockercompose_pgpool_1
Creating dockercompose_pgslave3_1
Creating dockercompose_pgslave1_1 ...
Creating dockercompose_backup_1
Creating dockercompose_pgmaster_1 ...
Creating dockercompose_pgslave4_1 ...
Creating dockercompose_pgslave1_1
Creating dockercompose_pgmaster_1
Creating dockercompose_pgmaster_1 ... done
Attaching to dockercompose_pgslave2_1, dockercompose_pgpool_1, dockercompose_pgslave1_1, dockercompose_pgslave3_1, dockercompose_backup_1, dockercompose_pgslave4_1, dockercompose_pgmaster_1
pgpool_1 | >>> STARTING SSH (if required)...
pgslave2_1 | + echo '>>> Setting up STOP handlers...'
pgslave2_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave2_1 | + trap 'system_stop TERM' TERM
pgslave2_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave2_1 | + trap 'system_stop SIGTERM' SIGTERM
pgslave2_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave2_1 | + trap 'system_stop QUIT' QUIT
pgslave2_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave2_1 | + trap 'system_stop SIGQUIT' SIGQUIT
pgslave2_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave2_1 | + trap 'system_stop INT' INT
pgslave1_1 | + echo '>>> Setting up STOP handlers...'
pgslave2_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave1_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave2_1 | + trap 'system_stop SIGINT' SIGINT
pgslave2_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave1_1 | + trap 'system_stop TERM' TERM
pgpool_1 | >>> TUNING UP SSH CLIENT...
pgslave2_1 | + trap 'system_stop KILL' KILL
pgslave3_1 | >>> Setting up STOP handlers...
pgslave1_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
backup_1 | >>> Checking all configurations
pgslave2_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgpool_1 | >>> STARTING SSH SERVER...
pgslave4_1 | >>> Setting up STOP handlers...
pgpool_1 | >>> TURNING PGPOOL...
pgslave1_1 | + trap 'system_stop SIGTERM' SIGTERM
backup_1 | >>> Configuring barman for streaming replication
pgslave2_1 | + trap 'system_stop SIGKILL' SIGKILL
pgslave3_1 | + echo '>>> Setting up STOP handlers...'
pgslave3_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgpool_1 | >>> Opening access from all hosts by md5 in /usr/local/etc/pool_hba.conf
backup_1 | >>> STARTING SSH (if required)...
pgslave3_1 | + trap 'system_stop TERM' TERM
pgslave3_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave3_1 | + trap 'system_stop SIGTERM' SIGTERM
pgslave3_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgmaster_1 | + echo '>>> Setting up STOP handlers...'
pgslave3_1 | + trap 'system_stop QUIT' QUIT
backup_1 | >>> TUNING UP SSH CLIENT...
pgpool_1 | >>> Adding user pcp_user for PCP
pgslave3_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave3_1 | + trap 'system_stop SIGQUIT' SIGQUIT
pgslave2_1 | + echo '>>> STARTING SSH (if required)...'
pgslave2_1 | + source /home/postgres/.ssh/entrypoint.sh
backup_1 | >>> STARTING SSH SERVER...
pgmaster_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave3_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave1_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave1_1 | + trap 'system_stop QUIT' QUIT
pgslave4_1 | + echo '>>> Setting up STOP handlers...'
pgslave4_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave3_1 | + trap 'system_stop INT' INT
pgslave1_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave1_1 | >>> Setting up STOP handlers...
pgslave3_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave3_1 | + trap 'system_stop SIGINT' SIGINT
pgslave3_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
backup_1 | >>> SETUP BARMAN CRON
backup_1 | >>>>>> Backup schedule is */30 */5 * * *
pgslave2_1 | ++ set -e
pgslave3_1 | + trap 'system_stop KILL' KILL
pgslave1_1 | + trap 'system_stop SIGQUIT' SIGQUIT
pgmaster_1 | + trap 'system_stop TERM' TERM
pgpool_1 | >>> Creating a ~/.pcppass file for pcp_user
pgslave4_1 | + trap 'system_stop TERM' TERM
backup_1 | >>> STARTING METRICS SERVER
pgslave2_1 | ++ cp -f '/home/postgres/.ssh/keys/*' /home/postgres/.ssh/
pgslave3_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave3_1 | + trap 'system_stop SIGKILL' SIGKILL
pgslave1_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgmaster_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave4_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
backup_1 | >>> STARTING CRON
pgslave3_1 | >>> STARTING SSH (if required)...
pgslave2_1 | >>> Setting up STOP handlers...
pgpool_1 | >>> Adding users for md5 auth
pgslave1_1 | + trap 'system_stop INT' INT
pgmaster_1 | + trap 'system_stop SIGTERM' SIGTERM
pgslave4_1 | + trap 'system_stop SIGTERM' SIGTERM
pgslave2_1 | >>> STARTING SSH (if required)...
pgslave3_1 | + echo '>>> STARTING SSH (if required)...'
pgmaster_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgpool_1 | >>>>>> Adding user monkey_user
pgslave1_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave3_1 | + source /home/postgres/.ssh/entrypoint.sh
pgmaster_1 | + trap 'system_stop QUIT' QUIT
pgslave4_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgpool_1 | >>> Adding check user 'monkey_user' for md5 auth
pgslave2_1 | cp: cannot stat '/home/postgres/.ssh/keys/*': No such file or directory
pgslave1_1 | + trap 'system_stop SIGINT' SIGINT
pgmaster_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave4_1 | + trap 'system_stop QUIT' QUIT
pgslave3_1 | ++ set -e
pgslave2_1 | No pre-populated ssh keys!
pgslave1_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgpool_1 | >>> Adding user 'monkey_user' as check user
pgslave3_1 | ++ cp -f '/home/postgres/.ssh/keys/*' /home/postgres/.ssh/
pgmaster_1 | + trap 'system_stop SIGQUIT' SIGQUIT
pgslave1_1 | + trap 'system_stop KILL' KILL
pgslave4_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgmaster_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave2_1 | ++ echo 'No pre-populated ssh keys!'
pgpool_1 | >>> Adding user 'monkey_user' as health-check user
pgslave3_1 | cp: cannot stat '/home/postgres/.ssh/keys/*': No such file or directory
pgslave1_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgmaster_1 | + trap 'system_stop INT' INT
pgslave4_1 | + trap 'system_stop SIGQUIT' SIGQUIT
pgslave2_1 | ++ chown -R postgres:postgres /home/postgres
pgslave3_1 | ++ echo 'No pre-populated ssh keys!'
pgmaster_1 | >>> Setting up STOP handlers...
pgpool_1 | >>> Adding backends
pgslave3_1 | ++ chown -R postgres:postgres /home/postgres
pgslave1_1 | + trap 'system_stop SIGKILL' SIGKILL
pgslave2_1 | ++ [[ 0 == \1 ]]
pgslave4_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgmaster_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgpool_1 | >>>>>> Waiting for backend 0 to start pgpool (WAIT_BACKEND_TIMEOUT=60)
pgslave1_1 | + echo '>>> STARTING SSH (if required)...'
pgslave3_1 | No pre-populated ssh keys!
pgslave2_1 | ++ echo '>>> SSH is not enabled!'
pgmaster_1 | + trap 'system_stop SIGINT' SIGINT
pgslave4_1 | + trap 'system_stop INT' INT
pgslave1_1 | + source /home/postgres/.ssh/entrypoint.sh
pgpool_1 | 2018/09/17 12:41:26 Waiting for host: tcp://pgmaster:5432
pgmaster_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgmaster_1 | + trap 'system_stop KILL' KILL
pgmaster_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgmaster_1 | + trap 'system_stop SIGKILL' SIGKILL
pgmaster_1 | + echo '>>> STARTING SSH (if required)...'
pgslave2_1 | >>> SSH is not enabled!
pgmaster_1 | + source /home/postgres/.ssh/entrypoint.sh
pgslave3_1 | ++ [[ 0 == \1 ]]
pgslave1_1 | >>> STARTING SSH (if required)...
pgslave4_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave3_1 | ++ echo '>>> SSH is not enabled!'
pgmaster_1 | ++ set -e
pgslave4_1 | + trap 'system_stop SIGINT' SIGINT
pgslave1_1 | ++ set -e
pgslave2_1 | + echo '>>> STARTING POSTGRES...'
pgslave3_1 | + echo '>>> STARTING POSTGRES...'
pgmaster_1 | ++ cp -f /home/postgres/.ssh/keys/id_rsa /home/postgres/.ssh/keys/id_rsa.pub /home/postgres/.ssh/
pgslave4_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave2_1 | >>> STARTING POSTGRES...
pgslave3_1 | >>> SSH is not enabled!
pgslave3_1 | >>> STARTING POSTGRES...
pgslave1_1 | ++ cp -f /home/postgres/.ssh/keys/id_rsa /home/postgres/.ssh/keys/id_rsa.pub /home/postgres/.ssh/
pgmaster_1 | >>> STARTING SSH (if required)...
pgslave3_1 | + wait 10
pgslave4_1 | + trap 'system_stop KILL' KILL
pgslave1_1 | ++ chown -R postgres:postgres /home/postgres
pgslave2_1 | + wait 11
pgmaster_1 | ++ chown -R postgres:postgres /home/postgres
pgslave3_1 | + /usr/local/bin/cluster/postgres/entrypoint.sh
pgslave1_1 | >>> TUNING UP SSH CLIENT...
pgslave4_1 | + for f in TERM SIGTERM QUIT SIGQUIT INT SIGINT KILL SIGKILL
pgslave4_1 | + trap 'system_stop SIGKILL' SIGKILL
pgslave1_1 | ++ [[ 1 == \1 ]]
pgslave2_1 | + /usr/local/bin/cluster/postgres/entrypoint.sh
pgslave3_1 | >>> SETTING UP POLYMORPHIC VARIABLES (repmgr=3+postgres=9 | repmgr=4, postgres=10)...
pgslave1_1 | ++ echo '>>> TUNING UP SSH CLIENT...'
pgmaster_1 | ++ [[ 1 == \1 ]]
pgslave4_1 | + echo '>>> STARTING SSH (if required)...'
pgslave1_1 | ++ '[' '!' -f /home/postgres/.ssh/id_rsa.pub ']'
pgmaster_1 | ++ echo '>>> TUNING UP SSH CLIENT...'
pgslave3_1 | >>> TUNING UP POSTGRES...
pgslave2_1 | >>> SETTING UP POLYMORPHIC VARIABLES (repmgr=3+postgres=9 | repmgr=4, postgres=10)...
pgslave1_1 | ++ chmod 600 -R /home/postgres/.ssh/id_rsa
pgslave4_1 | + source /home/postgres/.ssh/entrypoint.sh
pgmaster_1 | >>> TUNING UP SSH CLIENT...
pgslave3_1 | >>> Cleaning data folder which might have some garbage...
pgslave2_1 | >>> TUNING UP POSTGRES...
pgslave1_1 | ++ mkdir -p /var/run/sshd
pgslave4_1 | >>> STARTING SSH (if required)...
pgmaster_1 | ++ '[' '!' -f /home/postgres/.ssh/id_rsa.pub ']'
pgslave3_1 | >>> Check all partner nodes for common upstream node...
pgslave1_1 | ++ sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
pgslave2_1 | >>> Cleaning data folder which might have some garbage...
pgmaster_1 | ++ chmod 600 -R /home/postgres/.ssh/id_rsa
pgslave4_1 | ++ set -e
pgslave3_1 | >>>>>> Checking NODE=pgmaster...
pgslave1_1 | ++ sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
pgslave2_1 | >>> Auto-detected master name: ''
pgmaster_1 | ++ mkdir -p /var/run/sshd
pgslave4_1 | ++ cp -f '/home/postgres/.ssh/keys/*' /home/postgres/.ssh/
pgslave1_1 | ++ echo 'export VISIBLE=now'
pgslave2_1 | >>> Setting up repmgr...
pgslave2_1 | >>> Setting up repmgr config file '/etc/repmgr.conf'...
pgmaster_1 | ++ sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
pgslave4_1 | cp: cannot stat '/home/postgres/.ssh/keys/*': No such file or directory
pgslave2_1 | >>> Setting up upstream node...
pgslave1_1 | ++ cat /home/postgres/.ssh/id_rsa.pub
pgslave4_1 | ++ echo 'No pre-populated ssh keys!'
pgmaster_1 | ++ sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
pgslave4_1 | ++ chown -R postgres:postgres /home/postgres
pgslave2_1 | cat: /var/lib/postgresql/data/standby.lock: No such file or directory
pgmaster_1 | ++ echo 'export VISIBLE=now'
pgslave1_1 | >>> STARTING SSH SERVER...
pgslave2_1 | >>> Previously Locked standby upstream node LOCKED_STANDBY=''
pgslave1_1 | ++ echo '>>> STARTING SSH SERVER...'
pgmaster_1 | ++ cat /home/postgres/.ssh/id_rsa.pub
pgslave4_1 | No pre-populated ssh keys!
pgslave2_1 | >>> Waiting for upstream postgres server...
pgslave1_1 | ++ /usr/sbin/sshd
pgmaster_1 | ++ echo '>>> STARTING SSH SERVER...'
pgslave4_1 | >>> SSH is not enabled!
pgmaster_1 | ++ /usr/sbin/sshd
pgslave2_1 | >>> Wait schema replication_db.repmgr on pgslave1:5432(user: replication_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
pgslave1_1 | + echo '>>> STARTING POSTGRES...'
pgslave4_1 | ++ [[ 0 == \1 ]]
pgmaster_1 | >>> STARTING SSH SERVER...
pgslave1_1 | >>> STARTING POSTGRES...
pgslave4_1 | ++ echo '>>> SSH is not enabled!'
pgslave2_1 | psql: could not connect to server: Connection refused
pgslave2_1 | Is the server running on host "pgslave1" (192.168.112.5) and accepting
pgslave2_1 | TCP/IP connections on port 5432?
pgslave1_1 | + wait 16
pgslave4_1 | >>> STARTING POSTGRES...
pgmaster_1 | >>> STARTING POSTGRES...
pgslave1_1 | + /usr/local/bin/cluster/postgres/entrypoint.sh
pgslave4_1 | + echo '>>> STARTING POSTGRES...'
pgmaster_1 | + echo '>>> STARTING POSTGRES...'
pgslave1_1 | >>> SETTING UP POLYMORPHIC VARIABLES (repmgr=3+postgres=9 | repmgr=4, postgres=10)...
pgslave4_1 | + wait 9
pgmaster_1 | + wait 16
pgmaster_1 | + /usr/local/bin/cluster/postgres/entrypoint.sh
pgslave1_1 | >>> TUNING UP POSTGRES...
pgmaster_1 | >>> SETTING UP POLYMORPHIC VARIABLES (repmgr=3+postgres=9 | repmgr=4, postgres=10)...
pgslave1_1 | >>> Cleaning data folder which might have some garbage...
pgmaster_1 | >>> TUNING UP POSTGRES...
pgslave1_1 | >>> Check all partner nodes for common upstream node...
pgslave4_1 | + /usr/local/bin/cluster/postgres/entrypoint.sh
pgslave1_1 | >>>>>> Checking NODE=pgmaster...
pgslave4_1 | >>> SETTING UP POLYMORPHIC VARIABLES (repmgr=3+postgres=9 | repmgr=4, postgres=10)...
pgmaster_1 | >>> Cleaning data folder which might have some garbage...
pgslave4_1 | >>> TUNING UP POSTGRES...
pgslave1_1 | psql: could not connect to server: No route to host
pgslave1_1 | Is the server running on host "pgmaster" (192.168.112.8) and accepting
pgslave1_1 | TCP/IP connections on port 5432?
pgslave4_1 | >>> Cleaning data folder which might have some garbage...
pgslave1_1 | >>>>>> Skipping: failed to get master from the node!
pgmaster_1 | >>> Check all partner nodes for common upstream node...
pgslave4_1 | >>> Auto-detected master name: ''
pgslave1_1 | >>>>>> Checking NODE=pgslave1...
pgslave1_1 | psql: could not connect to server: Connection refused
pgslave1_1 | Is the server running on host "pgslave1" (192.168.112.5) and accepting
pgmaster_1 | >>>>>> Checking NODE=pgmaster...
pgslave1_1 | TCP/IP connections on port 5432?
pgslave4_1 | >>> Setting up repmgr...
pgslave4_1 | >>> Setting up repmgr config file '/etc/repmgr.conf'...
pgslave4_1 | >>> Setting up upstream node...
pgslave4_1 | cat: /var/lib/postgresql/data/standby.lock: No such file or directory
pgslave4_1 | >>> Previously Locked standby upstream node LOCKED_STANDBY=''
pgslave4_1 | >>> Waiting for upstream postgres server...
pgslave4_1 | >>> Wait schema replication_db.repmgr on pgslave3:5432(user: replication_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
pgslave1_1 | >>>>>> Skipping: failed to get master from the node!
pgslave1_1 | >>>>>> Checking NODE=pgslave3...
pgmaster_1 | psql: could not connect to server: Connection refused
pgmaster_1 | Is the server running on host "pgmaster" (192.168.112.8) and accepting
pgmaster_1 | TCP/IP connections on port 5432?
pgslave4_1 | psql: could not connect to server: Connection refused
pgslave4_1 | Is the server running on host "pgslave3" (192.168.112.4) and accepting
pgslave1_1 | psql: could not connect to server: Connection refused
pgmaster_1 | >>>>>> Skipping: failed to get master from the node!
pgslave4_1 | TCP/IP connections on port 5432?
pgslave1_1 | Is the server running on host "pgslave3" (192.168.112.4) and accepting
pgmaster_1 | >>>>>> Checking NODE=pgslave1...
pgslave1_1 | TCP/IP connections on port 5432?
pgslave1_1 | >>>>>> Skipping: failed to get master from the node!
pgslave1_1 | >>> Auto-detected master name: ''
pgslave1_1 | >>> Setting up repmgr...
pgslave1_1 | >>> Setting up repmgr config file '/etc/repmgr.conf'...
pgslave1_1 | >>> Setting up upstream node...
pgslave1_1 | cat: /var/lib/postgresql/data/standby.lock: No such file or directory
pgslave1_1 | >>> Previously Locked standby upstream node LOCKED_STANDBY=''
pgslave1_1 | >>> Waiting for upstream postgres server...
pgslave1_1 | >>> Wait schema replication_db.repmgr on pgmaster:5432(user: replication_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
pgmaster_1 | psql: could not connect to server: Connection refused
pgmaster_1 | Is the server running on host "pgslave1" (192.168.112.5) and accepting
pgmaster_1 | TCP/IP connections on port 5432?
pgslave1_1 | psql: could not connect to server: Connection refused
pgslave1_1 | Is the server running on host "pgmaster" (192.168.112.8) and accepting
pgslave1_1 | TCP/IP connections on port 5432?
pgslave3_1 | psql: could not connect to server: Connection refused
pgslave3_1 | Is the server running on host "pgmaster" (192.168.112.8) and accepting
pgslave3_1 | TCP/IP connections on port 5432?
pgslave3_1 | >>>>>> Skipping: failed to get master from the node!
pgslave3_1 | >>>>>> Checking NODE=pgslave1...
pgmaster_1 | >>>>>> Skipping: failed to get master from the node!
pgmaster_1 | >>>>>> Checking NODE=pgslave3...
pgslave3_1 | psql: could not connect to server: Connection refused
pgslave3_1 | Is the server running on host "pgslave1" (192.168.112.5) and accepting
pgslave3_1 | TCP/IP connections on port 5432?
pgslave3_1 | >>>>>> Skipping: failed to get master from the node!
pgslave3_1 | >>>>>> Checking NODE=pgslave3...
pgmaster_1 | psql: could not connect to server: Connection refused
pgmaster_1 | Is the server running on host "pgslave3" (192.168.112.4) and accepting
pgmaster_1 | TCP/IP connections on port 5432?
pgmaster_1 | >>>>>> Skipping: failed to get master from the node!
pgmaster_1 | >>> Auto-detected master name: ''
pgmaster_1 | >>> Setting up repmgr...
pgmaster_1 | >>> Setting up repmgr config file '/etc/repmgr.conf'...
pgmaster_1 | >>> Setting up upstream node...
pgmaster_1 | >>> Sending in background postgres start...
pgmaster_1 | >>> Waiting for local postgres server recovery if any in progress:LAUNCH_RECOVERY_CHECK_INTERVAL=30
pgmaster_1 | >>> Recovery is in progress:
pgslave3_1 | psql: could not connect to server: Connection refused
pgslave3_1 | Is the server running on host "pgslave3" (192.168.112.4) and accepting
pgslave3_1 | TCP/IP connections on port 5432?
pgslave3_1 | >>>>>> Skipping: failed to get master from the node!
pgslave3_1 | >>> Auto-detected master name: ''
pgslave3_1 | >>> Setting up repmgr...
pgslave3_1 | >>> Setting up repmgr config file '/etc/repmgr.conf'...
pgslave3_1 | >>> Setting up upstream node...
pgslave3_1 | cat: /var/lib/postgresql/data/standby.lock: No such file or directory
pgslave3_1 | >>> Previously Locked standby upstream node LOCKED_STANDBY=''
pgslave3_1 | >>> Waiting for upstream postgres server...
pgmaster_1 | The files belonging to this database system will be owned by user "postgres".
pgmaster_1 | This user must also own the server process.
pgmaster_1 |
pgmaster_1 | The database cluster will be initialized with locale "en_US.utf8".
pgmaster_1 | The default database encoding has accordingly been set to "UTF8".
pgmaster_1 | The default text search configuration will be set to "english".
pgmaster_1 |
pgmaster_1 | Data page checksums are disabled.
pgmaster_1 |
pgmaster_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
pgmaster_1 | creating subdirectories ... ok
pgslave3_1 | >>> Wait schema replication_db.repmgr on pgmaster:5432(user: replication_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
pgmaster_1 | selecting default max_connections ... 100
pgmaster_1 | selecting default shared_buffers ... 128MB
pgmaster_1 | selecting dynamic shared memory implementation ... posix
pgslave3_1 | psql: could not connect to server: Connection refused
pgslave3_1 | Is the server running on host "pgmaster" (192.168.112.8) and accepting
pgslave3_1 | TCP/IP connections on port 5432?
pgmaster_1 | creating configuration files ... ok
pgmaster_1 | running bootstrap script ... ok
pgmaster_1 | performing post-bootstrap initialization ... ok
pgmaster_1 | syncing data to disk ...
pgmaster_1 | WARNING: enabling "trust" authentication for local connections
pgmaster_1 | You can change this by editing pg_hba.conf or using the option -A, or
pgmaster_1 | --auth-local and --auth-host, the next time you run initdb.
pgmaster_1 | ok
pgmaster_1 |
pgmaster_1 | Success. You can now start the database server using:
pgmaster_1 |
pgmaster_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
pgmaster_1 |
pgmaster_1 | waiting for server to start....2018-09-17 12:41:32.907 UTC [105] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
pgmaster_1 | 2018-09-17 12:41:32.926 UTC [106] LOG: database system was shut down at 2018-09-17 12:41:32 UTC
pgmaster_1 | 2018-09-17 12:41:32.931 UTC [105] LOG: database system is ready to accept connections
pgmaster_1 | done
pgmaster_1 | server started
pgmaster_1 | CREATE DATABASE
pgmaster_1 |
pgmaster_1 |
pgmaster_1 | /docker-entrypoint.sh: running /docker-entrypoint-initdb.d/entrypoint.sh
pgmaster_1 | >>> Configuring /var/lib/postgresql/data/postgresql.conf
pgmaster_1 | >>>>>> Config file was replaced with standard one!
pgmaster_1 | >>>>>> Adding config 'listen_addresses'=''*''
pgmaster_1 | >>>>>> Adding config 'max_replication_slots'='5'
pgmaster_1 | >>>>>> Adding config 'shared_preload_libraries'=''repmgr''
pgmaster_1 | >>> Creating replication user 'replication_user'
  389. pgmaster_1 | CREATE ROLE
  390. pgmaster_1 | >>> Creating replication db 'replication_db'
  391. pgmaster_1 |
  392. pgmaster_1 | 2018-09-17 12:41:33.622 UTC [105] LOG: received fast shutdown request
  393. pgmaster_1 | waiting for server to shut down....2018-09-17 12:41:33.625 UTC [105] LOG: aborting any active transactions
  394. pgmaster_1 | 2018-09-17 12:41:33.626 UTC [105] LOG: worker process: logical replication launcher (PID 112) exited with exit code 1
  395. pgmaster_1 | 2018-09-17 12:41:33.627 UTC [107] LOG: shutting down
  396. pgmaster_1 | 2018-09-17 12:41:33.646 UTC [105] LOG: database system is shut down
  397. pgmaster_1 | done
  398. pgmaster_1 | server stopped
  399. pgmaster_1 |
  400. pgmaster_1 | PostgreSQL init process complete; ready for start up.
  401. pgmaster_1 |
  402. pgmaster_1 | 2018-09-17 12:41:33.737 UTC [65] LOG: listening on IPv4 address "0.0.0.0", port 5432
  403. pgmaster_1 | 2018-09-17 12:41:33.737 UTC [65] LOG: listening on IPv6 address "::", port 5432
  404. pgpool_1 | 2018/09/17 12:41:33 Connected to tcp://pgmaster:5432
  405. pgpool_1 | >>>>>> Adding backend 0
  406. pgpool_1 | >>>>>> Waiting for backend 1 to start pgpool (WAIT_BACKEND_TIMEOUT=60)
  407. pgpool_1 | 2018/09/17 12:41:33 Waiting for host: tcp://pgslave1:5432
  408. pgmaster_1 | 2018-09-17 12:41:33.743 UTC [65] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
  409. pgmaster_1 | 2018-09-17 12:41:33.758 UTC [144] LOG: database system was shut down at 2018-09-17 12:41:33 UTC
  410. pgmaster_1 | 2018-09-17 12:41:33.759 UTC [145] LOG: incomplete startup packet
  411. pgmaster_1 | 2018-09-17 12:41:33.766 UTC [65] LOG: database system is ready to accept connections
  412. pgslave2_1 | >>>>>> Host pgslave1:5432 is not accessible (will try 30 times more)
  413. pgslave2_1 | psql: could not connect to server: Connection refused
  414. pgslave2_1 | Is the server running on host "pgslave1" (192.168.112.5) and accepting
  415. pgslave2_1 | TCP/IP connections on port 5432?
  416. pgslave4_1 | >>>>>> Host pgslave3:5432 is not accessible (will try 30 times more)
  417. pgslave4_1 | >>>>>> Host pgslave3:5432 is not accessiblepsql: could not connect to server: Connection refused
  418. pgslave4_1 | Is the server running on host "pgslave3" (192.168.112.4) and accepting
  419. pgslave4_1 | TCP/IP connections on port 5432?
  420. pgslave1_1 | >>>>>> Host pgmaster:5432 is not accessible (will try 30 times more)
  421. pgslave3_1 | >>>>>> Host pgmaster:5432 is not accessible (will try 30 times more)
  422. pgslave2_1 | >>>>>> Host pgslave1:5432 is not accessible (will try 29 times more)
  423. pgslave2_1 | psql: could not connect to server: Connection refused
  424. pgslave2_1 | Is the server running on host "pgslave1" (192.168.112.5) and accepting
  425. pgslave2_1 | TCP/IP connections on port 5432?
  426. pgslave4_1 | (will try 29 times more)
  427. pgslave4_1 | >>>>>> Host pgslave3:5432 is not accessiblepsql: could not connect to server: Connection refused
  428. pgslave4_1 | Is the server running on host "pgslave3" (192.168.112.4) and accepting
  429. pgslave4_1 | TCP/IP connections on port 5432?
  430. pgslave1_1 | >>>>>> Schema replication_db.repmgr is still not accessible on host pgmaster:5432 (will try 29 times more)
  431. pgslave3_1 | >>>>>> Schema replication_db.repmgr is still not accessible on host pgmaster:5432 (will try 29 times more)
  432. pgslave2_1 | >>>>>> Host pgslave1:5432 is not accessible (will try 28 times more)
  433. pgslave2_1 | psql: could not connect to server: Connection refused
  434. pgslave2_1 | Is the server running on host "pgslave1" (192.168.112.5) and accepting
  435. pgslave2_1 | TCP/IP connections on port 5432?
  436. pgslave4_1 | (will try 28 times more)
  437. pgslave4_1 | psql: could not connect to server: Connection refused
  438. pgslave4_1 | Is the server running on host "pgslave3" (192.168.112.4) and accepting
  439. pgslave4_1 | TCP/IP connections on port 5432?
  440. backup_1 | 2018-09-17 12:42:01,617 [33] barman.config DEBUG: Including configuration file: upstream.conf
  441. backup_1 | 2018-09-17 12:42:01,618 [33] barman.cli DEBUG: Initialised Barman version 2.4 (config: /etc/barman.conf, args: {'server_name': ['pg_cluster'], 'format': 'console', 'quiet': False, 'command': 'show_server', 'debug': False})
  442. backup_1 | 2018-09-17 12:42:01,632 [33] barman.backup_executor DEBUG: The default backup strategy for postgres backup_method is: concurrent_backup
  443. backup_1 | 2018-09-17 12:42:01,632 [33] barman.server DEBUG: Retention policy for server pg_cluster: RECOVERY WINDOW OF 30 DAYS
  444. backup_1 | 2018-09-17 12:42:01,632 [33] barman.server DEBUG: WAL retention policy for server pg_cluster: MAIN
  445. backup_1 | 2018-09-17 12:42:01,654 [33] barman.command_wrappers DEBUG: Command: ['/usr/bin/pg_receivewal', '--version']
  446. pgmaster_1 | >>>>>> RECOVERY_WAL_ID is empty!
  447. pgmaster_1 | >>> Not in recovery state (anymore)
  448. pgmaster_1 | >>> Waiting for local postgres server start...
  449. pgmaster_1 | >>> Wait schema replication_db.public on pgmaster:5432(user: replication_user,password: *******), will try 9 times with delay 10 seconds (TIMEOUT=90)
  450. pgslave1_1 | >>>>>> Schema replication_db.repmgr is still not accessible on host pgmaster:5432 (will try 28 times more)
  451. pgmaster_1 | >>>>>> Schema replication_db.public exists on host pgmaster:5432!
  452. pgmaster_1 | >>> Registering node with role master
  453. pgmaster_1 | INFO: connecting to primary database...
  454. pgmaster_1 | NOTICE: attempting to install extension "repmgr"
  455. pgmaster_1 | NOTICE: "repmgr" extension successfully installed
  456. pgmaster_1 | INFO: executing notification command for event "cluster_created"
  457. pgmaster_1 | DETAIL: command is:
  458. pgmaster_1 | /usr/local/bin/cluster/repmgr/events/router.sh 1 cluster_created 1 "2018-09-17 12:42:01.875268+00" ""
  459. pgmaster_1 | [REPMGR EVENT] Node id: 1; Event type: cluster_created; Success [1|0]: 1; Time: 2018-09-17 12:42:01.875268+00; Details:
  460. pgslave3_1 | >>>>>> Schema replication_db.repmgr is still not accessible on host pgmaster:5432 (will try 28 times more)
  461. pgmaster_1 | INFO: executing notification command for event "primary_register"
  462. pgmaster_1 | DETAIL: command is:
  463. pgmaster_1 | /usr/local/bin/cluster/repmgr/events/router.sh 1 primary_register 1 "2018-09-17 12:42:01.885953+00" ""
  464. pgmaster_1 | [REPMGR EVENT] Node id: 1; Event type: primary_register; Success [1|0]: 1; Time: 2018-09-17 12:42:01.885953+00; Details:
  465. pgmaster_1 | NOTICE: primary node record (id: 1) registered
  466. pgmaster_1 | >>> Starting repmgr daemon...
  467. pgslave3_1 | >>>>>> Schema replication_db.repmgr exists on host pgmaster:5432!
  468. pgmaster_1 | [2018-09-17 12:42:01] [NOTICE] repmgrd (repmgr 4.0.6) starting up
  469. pgmaster_1 | INFO: looking for configuration file in /etc
  470. pgmaster_1 | INFO: configuration file found at: "/etc/repmgr.conf"
  471. pgmaster_1 | [2018-09-17 12:42:01] [INFO] connecting to database "user=replication_user password=replication_pass host=pgmaster dbname=replication_db port=5432 connect_timeout=2"
  472. pgmaster_1 | [2018-09-17 12:42:01] [NOTICE] starting monitoring of node "node1" (ID: 1)
  473. pgmaster_1 | [2018-09-17 12:42:01] [INFO] executing notification command for event "repmgrd_start"
  474. pgmaster_1 | [2018-09-17 12:42:01] [DETAIL] command is:
  475. pgmaster_1 | /usr/local/bin/cluster/repmgr/events/router.sh 1 repmgrd_start 1 "2018-09-17 12:42:01.962474+00" "monitoring cluster primary \"node1\" (node ID: 1)"
  476. pgmaster_1 | [2018-09-17 12:42:01] [NOTICE] monitoring cluster primary "node1" (node ID: 1)
  477. pgslave3_1 | >>> REPLICATION_UPSTREAM_NODE_ID=1
  478. pgslave3_1 | >>> Sending in background postgres start...
  479. pgslave3_1 | >>> Waiting for upstream postgres server...
  480. pgslave3_1 | >>> Wait schema replication_db.repmgr on pgmaster:5432 (user: replication_user, password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
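The "will try 30 times with delay 10 seconds (TIMEOUT=300)" lines come from a poll-until-ready loop: the timeout is simply tries x delay. A minimal sketch of such a loop, with the schema probe and the sleep injectable so it can be exercised without a live server (function and parameter names are illustrative, not PostDock's actual code):

```python
import time

def wait_for_schema(probe, tries=30, delay=10, sleep=time.sleep):
    """Poll `probe` (a callable returning True once the schema is
    reachable) up to `tries` times, sleeping `delay` seconds between
    attempts. Returns the 1-based attempt number that succeeded;
    raises after tries * delay seconds' worth of attempts, matching
    the TIMEOUT=300 reported in the log for 30 tries x 10 s."""
    for attempt in range(1, tries + 1):
        if probe():
            return attempt          # schema answered on this attempt
        if attempt < tries:
            sleep(delay)            # back off before the next probe
    raise TimeoutError(f"schema not reachable after {tries * delay} seconds")
```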
  481. backup_1 | 2018-09-17 12:42:02,052 [33] barman.command_wrappers DEBUG: Command return code: 0
  482. backup_1 | 2018-09-17 12:42:02,052 [33] barman.command_wrappers DEBUG: Command stdout: pg_receivewal (PostgreSQL) 10.5 (Debian 10.5-1.pgdg80+1)
  483. backup_1 |
  484. backup_1 | 2018-09-17 12:42:02,052 [33] barman.command_wrappers DEBUG: Command stderr:
  485. backup_1 | 2018-09-17 12:42:02,054 [33] barman.wal_archiver DEBUG: Look for 'barman_receive_wal' in 'synchronous_standby_names': ['']
  486. backup_1 | 2018-09-17 12:42:02,054 [33] barman.wal_archiver DEBUG: Synchronous WAL streaming for barman_receive_wal: False
  487. backup_1 | 2018-09-17 12:42:02,054 [33] barman.command_wrappers DEBUG: Command: ['/usr/bin/pg_basebackup', '--version']
  488. pgslave3_1 | >>>>>> Schema replication_db.repmgr exists on host pgmaster:5432!
  489. pgslave3_1 | >>> Starting standby node...
  490. pgslave3_1 | >>> Instance hasn't been set up yet.
  491. pgslave3_1 | >>> Cloning primary node...
  492. pgslave3_1 | >>> Waiting for upstream postgres server...
  493. pgslave3_1 | >>> Wait schema replication_db.repmgr on pgmaster:5432 (user: replication_user, password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
  494. pgslave3_1 | NOTICE: destination directory "/var/lib/postgresql/data" provided
  495. pgslave3_1 | INFO: connecting to source node
  496. pgslave3_1 | DETAIL: connection string is: host=pgmaster user=replication_user port=5432 dbname=replication_db
  497. pgslave3_1 | DETAIL: current installation size is 37 MB
  498. pgslave3_1 | INFO: checking and correcting permissions on existing directory "/var/lib/postgresql/data"
  499. pgslave3_1 | >>>>>> Schema replication_db.repmgr exists on host pgmaster:5432!
  500. pgslave3_1 | NOTICE: starting backup (using pg_basebackup)...
  501. pgslave3_1 | INFO: executing:
  502. pgslave3_1 | /usr/lib/postgresql/10/bin/pg_basebackup -l "repmgr base backup" -D /var/lib/postgresql/data -h pgmaster -p 5432 -U replication_user -c fast -X stream -S repmgr_slot_4
  503. pgslave3_1 | >>> Waiting until cloning on this node is over (if any in progress): CLEAN_UP_ON_FAIL=, INTERVAL=30
  504. pgslave3_1 | >>> Replicated: 4
  505. backup_1 | 2018-09-17 12:42:02,441 [33] barman.command_wrappers DEBUG: Command return code: 0
  506. backup_1 | 2018-09-17 12:42:02,442 [33] barman.command_wrappers DEBUG: Command stdout: pg_basebackup (PostgreSQL) 10.5 (Debian 10.5-1.pgdg80+1)
  507. backup_1 |
  508. backup_1 | 2018-09-17 12:42:02,442 [33] barman.command_wrappers DEBUG: Command stderr:
  509. backup_1 | Creating replication slot: barman_the_backupper
  510. backup_1 | 2018-09-17 12:42:02,562 [38] barman.config DEBUG: Including configuration file: upstream.conf
  511. backup_1 | 2018-09-17 12:42:02,562 [38] barman.cli DEBUG: Initialised Barman version 2.4 (config: /etc/barman.conf, args: {'reset': False, 'server_name': 'pg_cluster', 'format': 'console', 'stop': False, 'create_slot': True, 'quiet': False, 'drop_slot': False, 'command': 'receive_wal', 'debug': False})
  512. backup_1 | 2018-09-17 12:42:02,576 [38] barman.backup_executor DEBUG: The default backup strategy for postgres backup_method is: concurrent_backup
  513. backup_1 | 2018-09-17 12:42:02,577 [38] barman.server DEBUG: Retention policy for server pg_cluster: RECOVERY WINDOW OF 30 DAYS
  514. backup_1 | 2018-09-17 12:42:02,577 [38] barman.server DEBUG: WAL retention policy for server pg_cluster: MAIN
  515. backup_1 | 2018-09-17 12:42:02,579 [38] barman.server INFO: Creating physical replication slot 'barman_the_backupper' on server 'pg_cluster'
  516. backup_1 | 2018-09-17 12:42:02,632 [38] barman.server INFO: Replication slot 'barman_the_backupper' created
  517. backup_1 | Creating physical replication slot 'barman_the_backupper' on server 'pg_cluster'
  518. backup_1 | Replication slot 'barman_the_backupper' created
  519. backup_1 | 2018-09-17 12:42:02,739 [39] barman.config DEBUG: Including configuration file: upstream.conf
  520. backup_1 | 2018-09-17 12:42:02,740 [39] barman.cli DEBUG: Initialised Barman version 2.4 (config: /etc/barman.conf, args: {'debug': False, 'command': 'cron', 'quiet': False, 'format': 'console'})
  521. backup_1 | 2018-09-17 12:42:02,754 [39] barman.backup_executor DEBUG: The default backup strategy for postgres backup_method is: concurrent_backup
  522. backup_1 | 2018-09-17 12:42:02,754 [39] barman.server DEBUG: Retention policy for server pg_cluster: RECOVERY WINDOW OF 30 DAYS
  523. backup_1 | 2018-09-17 12:42:02,754 [39] barman.server DEBUG: WAL retention policy for server pg_cluster: MAIN
  524. backup_1 | 2018-09-17 12:42:02,754 [39] barman.command_wrappers DEBUG: BarmanSubProcess: ['/usr/bin/python', '/usr/bin/barman', '-c', '/etc/barman.conf', '-q', 'archive-wal', 'pg_cluster']
  525. pgslave3_1 | NOTICE: standby clone (using pg_basebackup) complete
  526. pgslave3_1 | NOTICE: you can now start your PostgreSQL server
  527. pgslave3_1 | HINT: for example: pg_ctl -D /var/lib/postgresql/data start
  528. pgslave3_1 | HINT: after starting the server, you need to register this standby with "repmgr standby register"
  529. pgslave3_1 | INFO: executing notification command for event "standby_clone"
  530. pgslave3_1 | DETAIL: command is:
  531. pgslave3_1 | /usr/local/bin/cluster/repmgr/events/router.sh 4 standby_clone 1 "2018-09-17 12:42:02.833449+00" "cloned from host \"pgmaster\", port 5432; backup method: pg_basebackup; --force: Y"
  532. pgslave3_1 | [REPMGR EVENT] Node id: 4; Event type: standby_clone; Success [1|0]: 1; Time: 2018-09-17 12:42:02.833449+00; Details: cloned from host "pgmaster", port 5432; backup method: pg_basebackup; --force: Y
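The clone step above boils down to the single `pg_basebackup` command repmgr logged: a fast checkpoint on the primary, WAL streamed on a second connection, and the transfer pinned to a physical replication slot so the primary retains the WAL the standby still needs. A hedged sketch that just assembles that argv (paths and defaults copied from the log; `basebackup_cmd` is an illustrative helper, not part of repmgr):

```python
def basebackup_cmd(host, slot, datadir="/var/lib/postgresql/data",
                   user="replication_user", port=5432):
    """Build the pg_basebackup invocation shown in the log above."""
    return [
        "/usr/lib/postgresql/10/bin/pg_basebackup",
        "-l", "repmgr base backup",   # backup label
        "-D", datadir,                # destination data directory
        "-h", host, "-p", str(port), "-U", user,
        "-c", "fast",                 # request an immediate checkpoint
        "-X", "stream",               # stream WAL alongside the base backup
        "-S", slot,                   # reuse repmgr's physical replication slot
    ]
```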
  533. pgslave3_1 | >>> Configuring /var/lib/postgresql/data/postgresql.conf
  534. pgslave3_1 | >>>>>> Will add configs to the existing file
  535. pgslave3_1 | >>>>>> Adding config 'listen_addresses'=''*''
  536. pgslave3_1 | >>>>>> Adding config 'shared_preload_libraries'=''repmgr''
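The "Adding config" lines append settings to the freshly cloned `postgresql.conf`; because PostgreSQL resolves duplicate parameters by letting the last occurrence win, appending is enough to override anything already in the file. A small sketch of that step as a pure function (illustrative, not PostDock's shell code):

```python
def add_configs(conf_text, settings):
    """Append key = value lines to an existing postgresql.conf body,
    as the '>>> Configuring .../postgresql.conf' step above does.
    Existing lines are kept; appended duplicates override earlier
    values because postgresql.conf applies the last occurrence."""
    lines = [conf_text.rstrip("\n")] if conf_text else []
    for key, value in settings.items():
        lines.append(f"{key} = {value}")
    return "\n".join(lines) + "\n"
```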
  537. pgslave3_1 | >>> Starting postgres...
  538. pgslave3_1 | >>> Waiting for local postgres server recovery (if any in progress): LAUNCH_RECOVERY_CHECK_INTERVAL=30
  539. pgslave3_1 | >>> Recovery is in progress:
  540. pgslave3_1 | 2018-09-17 12:42:02.928 UTC [168] LOG: listening on IPv4 address "0.0.0.0", port 5432
  541. pgslave3_1 | 2018-09-17 12:42:02.929 UTC [168] LOG: listening on IPv6 address "::", port 5432
  542. pgslave3_1 | 2018-09-17 12:42:02.934 UTC [168] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
  543. pgslave3_1 | 2018-09-17 12:42:02.954 UTC [177] LOG: database system was interrupted; last known up at 2018-09-17 12:42:02 UTC
  544. pgslave3_1 | 2018-09-17 12:42:03.029 UTC [177] LOG: entering standby mode
  545. pgslave3_1 | 2018-09-17 12:42:03.036 UTC [177] LOG: redo starts at 0/2000028
  546. pgslave3_1 | 2018-09-17 12:42:03.039 UTC [177] LOG: consistent recovery state reached at 0/20000F8
  547. pgslave3_1 | 2018-09-17 12:42:03.039 UTC [168] LOG: database system is ready to accept read only connections
  548. pgslave3_1 | 2018-09-17 12:42:03.051 UTC [181] LOG: started streaming WAL from primary at 0/3000000 on timeline 1
  549. backup_1 | 2018-09-17 12:42:03,112 [39] barman.command_wrappers DEBUG: BarmanSubProcess: subprocess started. pid: 40
  550. backup_1 | 2018-09-17 12:42:03,113 [39] barman.command_wrappers DEBUG: BarmanSubProcess: ['/usr/bin/python', '/usr/bin/barman', '-c', '/etc/barman.conf', '-q', 'receive-wal', 'pg_cluster']
  551. backup_1 | 2018-09-17 12:42:03,449 [39] barman.command_wrappers DEBUG: BarmanSubProcess: subprocess started. pid: 41
  552. backup_1 | Starting WAL archiving for server pg_cluster
  553. backup_1 | Starting streaming archiver for server pg_cluster
  554. pgslave2_1 | >>>>>> Host pgslave1:5432 is not accessible (will try 27 times more)
  555. pgslave2_1 | psql: could not connect to server: Connection refused
  556. pgslave2_1 | Is the server running on host "pgslave1" (192.168.112.5) and accepting
  557. pgslave2_1 | TCP/IP connections on port 5432?
  558. pgslave4_1 | >>>>>> Host pgslave3:5432 is not accessible (will try 27 times more)
  559. pgslave4_1 | >>>>>> Schema replication_db.repmgr exists on host pgslave3:5432!
  560. pgslave4_1 | >>> Cannot get REPLICATION_UPSTREAM_NODE_ID from LOCK file or by CURRENT_REPLICATION_PRIMARY_HOST=pgslave3
  561. dockercompose_pgslave4_1 exited with code 1
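pgslave4 exits with code 1 because it was pointed at pgslave3 as its upstream before pgslave3 had registered a node id with repmgr: with no LOCK file from a previous run and no matching row for the primary host, the upstream node id cannot be resolved and the entrypoint aborts. A hedged sketch of that decision (the lookup table and function name are illustrative; PostDock's actual script is shell):

```python
def resolve_upstream_node_id(nodes, primary_host, lock_file_id=None):
    """Resolve the standby's upstream node id: prefer the id persisted
    in a LOCK file from a previous run, otherwise look the current
    primary host up in a repmgr.nodes-style mapping (host -> node id).
    If neither yields an id, exit with an error, as pgslave4 did above."""
    if lock_file_id is not None:
        return lock_file_id
    node_id = nodes.get(primary_host)
    if node_id is None:
        raise SystemExit(
            ">>> Cannot get REPLICATION_UPSTREAM_NODE_ID from LOCK file "
            f"or by CURRENT_REPLICATION_PRIMARY_HOST={primary_host}")
    return node_id
```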
  562. pgslave1_1 | >>>>>> Schema replication_db.repmgr is still not accessible on host pgmaster:5432 (will try 27 times more)
  563. pgslave1_1 | >>>>>> Schema replication_db.repmgr exists on host pgmaster:5432!
  564. pgslave1_1 | >>> REPLICATION_UPSTREAM_NODE_ID=1
  565. pgslave1_1 | >>> Sending in background postgres start...
  566. pgslave1_1 | >>> Waiting for upstream postgres server...
  567. pgslave1_1 | >>> Wait schema replication_db.repmgr on pgmaster:5432 (user: replication_user, password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
  568. pgslave1_1 | >>>>>> Schema replication_db.repmgr exists on host pgmaster:5432!
  569. pgslave1_1 | >>> Starting standby node...
  570. pgslave1_1 | >>> Instance hasn't been set up yet.
  571. pgslave1_1 | >>> Cloning primary node...
  572. pgslave1_1 | >>> Waiting for upstream postgres server...
  573. pgslave1_1 | >>> Wait schema replication_db.repmgr on pgmaster:5432 (user: replication_user, password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
  574. pgslave1_1 | NOTICE: destination directory "/var/lib/postgresql/data" provided
  575. pgslave1_1 | INFO: connecting to source node
  576. pgslave1_1 | DETAIL: connection string is: host=pgmaster user=replication_user port=5432 dbname=replication_db
  577. pgslave1_1 | DETAIL: current installation size is 37 MB
  578. pgslave1_1 | >>>>>> Schema replication_db.repmgr exists on host pgmaster:5432!
  579. pgslave1_1 | >>> Waiting until cloning on this node is over (if any in progress): CLEAN_UP_ON_FAIL=, INTERVAL=30
  580. pgslave1_1 | INFO: checking and correcting permissions on existing directory "/var/lib/postgresql/data"
  581. pgslave1_1 | >>> Replicated: 4
  582. pgslave1_1 | NOTICE: starting backup (using pg_basebackup)...
  583. pgslave1_1 | INFO: executing:
  584. pgslave1_1 | /usr/lib/postgresql/10/bin/pg_basebackup -l "repmgr base backup" -D /var/lib/postgresql/data -h pgmaster -p 5432 -U replication_user -c fast -X stream -S repmgr_slot_2
  585. pgslave1_1 | NOTICE: standby clone (using pg_basebackup) complete
  586. pgslave1_1 | NOTICE: you can now start your PostgreSQL server
  587. pgslave1_1 | HINT: for example: pg_ctl -D /var/lib/postgresql/data start
  588. pgslave1_1 | HINT: after starting the server, you need to register this standby with "repmgr standby register"
  589. pgslave1_1 | INFO: executing notification command for event "standby_clone"
  590. pgslave1_1 | DETAIL: command is:
  591. pgslave1_1 | /usr/local/bin/cluster/repmgr/events/router.sh 2 standby_clone 1 "2018-09-17 12:42:12.787654+00" "cloned from host \"pgmaster\", port 5432; backup method: pg_basebackup; --force: Y"
  592. pgslave1_1 | [REPMGR EVENT] Node id: 2; Event type: standby_clone; Success [1|0]: 1; Time: 2018-09-17 12:42:12.787654+00; Details: cloned from host "pgmaster", port 5432; backup method: pg_basebackup; --force: Y
  593. pgslave1_1 | >>> Configuring /var/lib/postgresql/data/postgresql.conf
  594. pgslave1_1 | >>>>>> Will add configs to the existing file
  595. pgslave1_1 | >>>>>> Adding config 'max_replication_slots'='10'
  596. pgslave1_1 | >>>>>> Adding config 'shared_preload_libraries'=''repmgr''
  597. pgslave1_1 | >>> Starting postgres...
  598. pgslave1_1 | >>> Waiting for local postgres server recovery (if any in progress): LAUNCH_RECOVERY_CHECK_INTERVAL=30
  599. pgslave1_1 | >>> Recovery is in progress:
  600. pgslave1_1 | 2018-09-17 12:42:12.954 UTC [184] LOG: listening on IPv4 address "0.0.0.0", port 5432
  601. pgslave1_1 | 2018-09-17 12:42:12.954 UTC [184] LOG: listening on IPv6 address "::", port 5432
  602. pgpool_1 | 2018/09/17 12:42:12 Connected to tcp://pgslave1:5432
  603. pgpool_1 | >>>>>> Adding backend 1
  604. pgpool_1 | >>>>>> Waiting for backend 3 to start pgpool (WAIT_BACKEND_TIMEOUT=60)
  605. pgpool_1 | 2018/09/17 12:42:12 Waiting for host: tcp://pgslave3:5432
  606. pgpool_1 | 2018/09/17 12:42:12 Connected to tcp://pgslave3:5432
  607. pgslave3_1 | 2018-09-17 12:42:12.961 UTC [184] LOG: incomplete startup packet
  608. pgpool_1 | >>>>>> Adding backend 3
  609. pgpool_1 | >>>>>> Waiting for backend 2 to start pgpool (WAIT_BACKEND_TIMEOUT=60)
  610. pgslave1_1 | 2018-09-17 12:42:12.963 UTC [184] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
  611. pgpool_1 | 2018/09/17 12:42:12 Waiting for host: tcp://pgslave2:5432
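The pgpool container's "Waiting for host: tcp://..." / "Waiting for backend N" lines are a plain TCP readiness check: keep attempting to connect to each backend until it accepts or `WAIT_BACKEND_TIMEOUT` (60 s here) expires. A minimal sketch of such a wait (illustrative; the actual image appears to use a dockerize-style helper):

```python
import socket
import time

def wait_for_tcp(host, port, timeout=60, interval=1.0):
    """Retry a TCP connect to host:port until it succeeds or the
    deadline (WAIT_BACKEND_TIMEOUT in the log) passes. Returns True
    once a connection is accepted, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True         # backend is reachable
        except OSError:
            time.sleep(interval)    # not up yet; back off and retry
    return False
```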
  612. pgslave1_1 | 2018-09-17 12:42:12.980 UTC [193] LOG: database system was interrupted; last known up at 2018-09-17 12:42:12 UTC
  613. pgslave1_1 | 2018-09-17 12:42:12.980 UTC [194] LOG: incomplete startup packet
  614. pgslave1_1 | 2018-09-17 12:42:13.044 UTC [193] LOG: entering standby mode
  615. pgslave1_1 | 2018-09-17 12:42:13.051 UTC [193] LOG: redo starts at 0/4000028
  616. pgslave1_1 | 2018-09-17 12:42:13.053 UTC [193] LOG: consistent recovery state reached at 0/40000F8
  617. pgslave1_1 | 2018-09-17 12:42:13.053 UTC [184] LOG: database system is ready to accept read only connections
  618. pgslave1_1 | 2018-09-17 12:42:13.059 UTC [198] LOG: started streaming WAL from primary at 0/5000000 on timeline 1
  619. pgslave2_1 | >>>>>> Host pgslave1:5432 is not accessible (will try 26 times more)
  620. pgslave2_1 | >>>>>> Schema replication_db.repmgr exists on host pgslave1:5432!
  621. pgslave2_1 | >>> Cannot get REPLICATION_UPSTREAM_NODE_ID from LOCK file or by CURRENT_REPLICATION_PRIMARY_HOST=pgslave1
  622. dockercompose_pgslave2_1 exited with code 1