2020-02-12 03:13:01.570 7efc0a736700 -1 received signal: Hangup from pkill -1 -x ceph-mon|ceph-mgr|ceph-mds|ceph-osd|ceph-fuse|radosgw (PID: 35672) UID: 0
2020-02-12 04:59:39.286 7efc074a6700 0 log_channel(cluster) log [WRN] : evicting unresponsive client apollo-08.local (5630828), after 319.848 seconds
2020-02-12 04:59:39.286 7efc074a6700 1 mds.0.38828 Evicting (and blacklisting) client session 5630828 (v1:10.0.3.48:0/2734757866)
2020-02-12 04:59:39.286 7efc074a6700 0 log_channel(cluster) log [INF] : Evicting (and blacklisting) client session 5630828 (v1:10.0.3.48:0/2734757866)
2020-02-12 05:00:22.669 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39184 from mon.0
2020-02-12 05:00:25.724 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39185 from mon.0
2020-02-12 05:00:29.778 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39186 from mon.0
2020-02-12 05:00:37.999 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39187 from mon.0
2020-02-12 05:00:42.078 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39188 from mon.0
2020-02-12 05:00:46.273 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39189 from mon.0
2020-02-12 05:00:50.409 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39190 from mon.0
2020-02-12 05:00:57.721 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39191 from mon.0
2020-02-12 05:01:01.844 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39192 from mon.0
2020-02-12 05:01:06.056 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39193 from mon.0
2020-02-12 05:01:10.273 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39194 from mon.0
2020-02-12 05:01:17.912 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39195 from mon.0
2020-02-12 05:01:22.094 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39196 from mon.0
2020-02-12 05:01:25.807 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39197 from mon.0
2020-02-12 05:01:29.864 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39198 from mon.0
2020-02-12 05:01:38.375 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39199 from mon.0
2020-02-12 05:01:41.738 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39200 from mon.0
2020-02-12 05:01:45.890 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39201 from mon.0
2020-02-12 05:01:49.968 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39202 from mon.0
2020-02-12 05:01:58.324 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39203 from mon.0
2020-02-12 05:02:02.547 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39204 from mon.0
2020-02-12 05:02:06.003 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39205 from mon.0
2020-02-12 05:02:10.460 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39206 from mon.0
2020-02-12 05:02:18.362 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39207 from mon.0
2020-02-12 05:02:21.852 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39208 from mon.0
2020-02-12 05:02:25.976 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39209 from mon.0
2020-02-12 05:02:30.431 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39210 from mon.0
2020-02-12 05:02:38.153 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39211 from mon.0
2020-02-12 05:02:42.358 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39212 from mon.0
2020-02-12 05:02:46.512 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39213 from mon.0
2020-02-12 05:02:49.692 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39214 from mon.0
2020-02-12 05:02:58.496 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39215 from mon.0
2020-02-12 05:03:02.675 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39216 from mon.0
2020-02-12 05:03:05.853 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39217 from mon.0
2020-02-12 05:03:10.175 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39218 from mon.0
2020-02-12 05:03:17.934 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39219 from mon.0
2020-02-12 05:03:22.393 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39220 from mon.0
2020-02-12 05:03:25.765 7efc09cab700 1 mds.cephmds-01 Updating MDS map to version 39221 from mon.0
2020-02-12 05:18:47.422 7efc0d73c700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.48:0/2734757866 conn(0x56184d410c00 0x5617b0933000 :6801 s=ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_message_2 accept we reset (peer sent cseq 1), sending RESETSESSION
2020-02-12 05:19:50.421 7efc09cab700 1 mds.0.server no longer in reconnect state, ignoring reconnect, sending close
2020-02-12 05:19:50.421 7efc09cab700 0 log_channel(cluster) log [INF] : denied reconnect attempt (mds is up:active) from client.5630828 v1:10.0.3.48:0/2734757866 after 300086 (allowed interval 45)
2020-02-12 05:19:50.421 7efc0cf3b700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.48:0/2734757866 conn(0x56255d2a5400 0x561680b9a000 :6801 s=OPENED pgs=66349 cs=1 l=0).fault server, going to standby
2020-02-12 05:19:50.423 7efc0d73c700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.48:0/2734757866 conn(0x562a49b28400 0x561617a74800 :6801 s=ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_message_2 accept peer reset, then tried to connect to us, replacing
2020-02-13 00:24:00.267 7efc074a6700 0 log_channel(cluster) log [WRN] : evicting unresponsive client zeus.icbi.local (5631089), after 303.104 seconds
2020-02-13 00:24:00.267 7efc074a6700 1 mds.0.38828 Evicting (and blacklisting) client session 5631089 (v1:10.0.3.40:0/2043271839)
2020-02-13 00:24:00.267 7efc074a6700 0 log_channel(cluster) log [INF] : Evicting (and blacklisting) client session 5631089 (v1:10.0.3.40:0/2043271839)
2020-02-13 00:24:40.267 7efc074a6700 0 log_channel(cluster) log [WRN] : evicting unresponsive client zeus.icbi.local (5585169), after 303.105 seconds
2020-02-13 00:24:40.267 7efc074a6700 1 mds.0.38828 Evicting (and blacklisting) client session 5585169 (v1:10.0.3.40:0/923857591)
2020-02-13 00:24:40.267 7efc074a6700 0 log_channel(cluster) log [INF] : Evicting (and blacklisting) client session 5585169 (v1:10.0.3.40:0/923857591)
2020-02-13 00:29:16.077 7efc0d73c700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.40:0/923857591 conn(0x5617c7160400 0x56235c74b000 :6801 s=ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_message_2 accept we reset (peer sent cseq 1), sending RESETSESSION
2020-02-13 00:30:44.077 7efc0cf3b700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.40:0/2043271839 conn(0x562331686c00 0x56194ee36800 :6801 s=ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_message_2 accept we reset (peer sent cseq 1), sending RESETSESSION
2020-02-13 00:30:44.078 7efc09cab700 1 mds.0.server no longer in reconnect state, ignoring reconnect, sending close
2020-02-13 00:30:44.078 7efc09cab700 0 log_channel(cluster) log [INF] : denied reconnect attempt (mds is up:active) from client.5631089 v1:10.0.3.40:0/2043271839 after 369140 (allowed interval 45)
2020-02-13 00:30:44.078 7efc0cf3b700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.40:0/2043271839 conn(0x5624e3eccc00 0x56215a6c7000 :6801 s=OPENED pgs=6 cs=1 l=0).fault server, going to standby
2020-02-13 00:34:15.276 7efc074a6700 0 log_channel(cluster) log [WRN] : evicting unresponsive client zeus.icbi.local (5611446), after 303.2 seconds
2020-02-13 00:34:15.276 7efc074a6700 1 mds.0.38828 Evicting (and blacklisting) client session 5611446 (v1:10.0.3.40:0/2261193169)
2020-02-13 00:34:15.276 7efc074a6700 0 log_channel(cluster) log [INF] : Evicting (and blacklisting) client session 5611446 (v1:10.0.3.40:0/2261193169)
2020-02-13 00:34:20.276 7efc074a6700 0 log_channel(cluster) log [WRN] : evicting unresponsive client zeus.icbi.local (5631794), after 301.096 seconds
2020-02-13 00:34:20.276 7efc074a6700 1 mds.0.38828 Evicting (and blacklisting) client session 5631794 (v1:10.0.3.40:0/2548358226)
2020-02-13 00:34:20.276 7efc074a6700 0 log_channel(cluster) log [INF] : Evicting (and blacklisting) client session 5631794 (v1:10.0.3.40:0/2548358226)
2020-02-13 00:34:30.276 7efc074a6700 0 log_channel(cluster) log [WRN] : evicting unresponsive client zeus.icbi.local (5621620), after 301.113 seconds
2020-02-13 00:34:30.276 7efc074a6700 1 mds.0.38828 Evicting (and blacklisting) client session 5621620 (v1:10.0.3.40:0/1305818163)
2020-02-13 00:34:30.276 7efc074a6700 0 log_channel(cluster) log [INF] : Evicting (and blacklisting) client session 5621620 (v1:10.0.3.40:0/1305818163)
2020-02-13 00:35:00.276 7efc074a6700 0 log_channel(cluster) log [WRN] : evicting unresponsive client zeus.icbi.local (5612430), after 301.097 seconds
2020-02-13 00:35:00.276 7efc074a6700 1 mds.0.38828 Evicting (and blacklisting) client session 5612430 (v1:10.0.3.40:0/1189221117)
2020-02-13 00:35:00.276 7efc074a6700 0 log_channel(cluster) log [INF] : Evicting (and blacklisting) client session 5612430 (v1:10.0.3.40:0/1189221117)
2020-02-13 00:35:20.277 7efc074a6700 0 log_channel(cluster) log [WRN] : evicting unresponsive client zeus.icbi.local (5604983), after 301.098 seconds
2020-02-13 00:35:20.277 7efc074a6700 1 mds.0.38828 Evicting (and blacklisting) client session 5604983 (v1:10.0.3.40:0/2780174751)
2020-02-13 00:35:20.277 7efc074a6700 0 log_channel(cluster) log [INF] : Evicting (and blacklisting) client session 5604983 (v1:10.0.3.40:0/2780174751)
2020-02-13 00:35:50.278 7efc074a6700 0 log_channel(cluster) log [WRN] : evicting unresponsive client zeus.icbi.local (5621203), after 303.21 seconds
2020-02-13 00:35:50.278 7efc074a6700 1 mds.0.38828 Evicting (and blacklisting) client session 5621203 (v1:10.0.3.40:0/2873777953)
2020-02-13 00:35:50.278 7efc074a6700 0 log_channel(cluster) log [INF] : Evicting (and blacklisting) client session 5621203 (v1:10.0.3.40:0/2873777953)
2020-02-13 00:37:58.791 7efc0cf3b700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.40:0/2261193169 conn(0x562427dd6400 0x5617a29e6000 :6801 s=ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_message_2 accept we reset (peer sent cseq 2), sending RESETSESSION
2020-02-13 00:37:58.792 7efc09cab700 1 mds.0.server no longer in reconnect state, ignoring reconnect, sending close
2020-02-13 00:37:58.792 7efc09cab700 0 log_channel(cluster) log [INF] : denied reconnect attempt (mds is up:active) from client.5585169 v1:10.0.3.40:0/923857591 after 369574 (allowed interval 45)
2020-02-13 00:37:58.794 7efc0d73c700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.40:0/923857591 conn(0x5617c7160400 0x56235c74b000 :6801 s=OPENED pgs=50445 cs=1 l=0).fault server, going to standby
2020-02-13 00:37:58.832 7efc0cf3b700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.40:0/923857591 conn(0x56189ca9c400 0x561985baa800 :6801 s=ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_message_2 accept peer reset, then tried to connect to us, replacing
2020-02-13 00:37:59.064 7efc0cf3b700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.40:0/2548358226 conn(0x5629104d3400 0x5616178b8000 :6801 s=ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_message_2 accept we reset (peer sent cseq 1), sending RESETSESSION
2020-02-13 00:37:59.064 7efc0d73c700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.40:0/1189221117 conn(0x561eb75a0000 0x56194ee35000 :6801 s=ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_message_2 accept we reset (peer sent cseq 1), sending RESETSESSION
2020-02-13 00:37:59.064 7efc0c73a700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.40:0/1305818163 conn(0x561e28c92000 0x56194ee36800 :6801 s=ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_message_2 accept we reset (peer sent cseq 1), sending RESETSESSION
2020-02-13 00:37:59.064 7efc0c73a700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.40:0/2873777953 conn(0x56186b8ecc00 0x56161775d800 :6801 s=ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_message_2 accept we reset (peer sent cseq 1), sending RESETSESSION
2020-02-13 00:37:59.064 7efc0cf3b700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.40:0/2780174751 conn(0x562353831800 0x5617d6d44800 :6801 s=ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_message_2 accept we reset (peer sent cseq 1), sending RESETSESSION
2020-02-13 00:37:59.462 7efc09cab700 1 mds.0.server no longer in reconnect state, ignoring reconnect, sending close
2020-02-13 00:37:59.462 7efc09cab700 0 log_channel(cluster) log [INF] : denied reconnect attempt (mds is up:active) from client.5621620 v1:10.0.3.40:0/1305818163 after 369575 (allowed interval 45)
2020-02-13 00:37:59.463 7efc0c73a700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.40:0/1305818163 conn(0x5629104d3400 0x5616178b8000 :6801 s=OPENED pgs=654 cs=1 l=0).fault server, going to standby
2020-02-13 00:37:59.463 7efc0cf3b700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.40:0/1305818163 conn(0x5617c7160400 0x56235c74b000 :6801 s=ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_message_2 accept peer reset, then tried to connect to us, replacing
2020-02-13 00:38:00.894 7efc09cab700 1 mds.0.server no longer in reconnect state, ignoring reconnect, sending close
2020-02-13 00:38:00.894 7efc09cab700 0 log_channel(cluster) log [INF] : denied reconnect attempt (mds is up:active) from client.5631794 v1:10.0.3.40:0/2548358226 after 369576 (allowed interval 45)
2020-02-13 00:38:00.894 7efc09cab700 1 mds.0.server no longer in reconnect state, ignoring reconnect, sending close
2020-02-13 00:38:00.894 7efc09cab700 0 log_channel(cluster) log [INF] : denied reconnect attempt (mds is up:active) from client.5612430 v1:10.0.3.40:0/1189221117 after 369576 (allowed interval 45)
2020-02-13 00:38:00.894 7efc09cab700 1 mds.0.server no longer in reconnect state, ignoring reconnect, sending close
2020-02-13 00:38:00.894 7efc09cab700 0 log_channel(cluster) log [INF] : denied reconnect attempt (mds is up:active) from client.5604983 v1:10.0.3.40:0/2780174751 after 369576 (allowed interval 45)
2020-02-13 00:38:00.894 7efc09cab700 1 mds.0.server no longer in reconnect state, ignoring reconnect, sending close
2020-02-13 00:38:00.894 7efc09cab700 0 log_channel(cluster) log [INF] : denied reconnect attempt (mds is up:active) from client.5611446 v1:10.0.3.40:0/2261193169 after 369576 (allowed interval 45)
2020-02-13 00:38:00.894 7efc0d73c700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.40:0/1189221117 conn(0x561eb75a0000 0x56194ee35000 :6801 s=OPENED pgs=32 cs=1 l=0).fault server, going to standby
2020-02-13 00:38:00.894 7efc09cab700 1 mds.0.server no longer in reconnect state, ignoring reconnect, sending close
2020-02-13 00:38:00.894 7efc09cab700 0 log_channel(cluster) log [INF] : denied reconnect attempt (mds is up:active) from client.5621203 v1:10.0.3.40:0/2873777953 after 369576 (allowed interval 45)
2020-02-13 00:38:00.894 7efc0cf3b700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.40:0/2780174751 conn(0x562427dd6400 0x5617a29e6000 :6801 s=OPENED pgs=71 cs=1 l=0).fault server, going to standby
2020-02-13 00:38:00.894 7efc0d73c700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.40:0/2261193169 conn(0x561eb759c400 0x561633b50000 :6801 s=OPENED pgs=96 cs=1 l=0).fault server, going to standby
2020-02-13 00:38:00.894 7efc0cf3b700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.40:0/2548358226 conn(0x5628ce3a8800 0x56163b29b000 :6801 s=OPENED pgs=87 cs=1 l=0).fault server, going to standby
2020-02-13 00:38:00.894 7efc0cf3b700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.40:0/2873777953 conn(0x56288199f400 0x56215a6c5800 :6801 s=OPENED pgs=254 cs=1 l=0).fault server, going to standby
2020-02-13 00:38:00.905 7efc0cf3b700 0 --1- [v2:10.0.3.21:6800/1534973070,v1:10.0.3.21:6801/1534973070] >> v1:10.0.3.40:0/1189221117 conn(0x56204d8e3800 0x56235c74a000 :6801 s=ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_message_2 accept peer reset, then tried to connect to us, replacing
2020-02-13 00:42:45.283 7efc074a6700 0 log_channel(cluster) log [WRN] : evicting unresponsive client zeus.icbi.local (5616482), after 301.208 seconds
2020-02-13 00:42:45.283 7efc074a6700 1 mds.0.38828 Evicting (and blacklisting) client session 5616482 (v1:10.0.3.40:0/1633339273)
2020-02-13 00:42:45.283 7efc074a6700 0 log_channel(cluster) log [INF] : Evicting (and blacklisting) client session 5616482 (v1:10.0.3.40:0/1633339273)
2020-02-13 00:42:55.283 7efc074a6700 0 log_channel(cluster) log [WRN] : evicting unresponsive client zeus.icbi.local (5631458), after 302.2 seconds
2020-02-13 00:42:55.283 7efc074a6700 1 mds.0.38828 Evicting (and blacklisting) client session 5631458 (v1:10.0.3.40:0/798683906)
2020-02-13 00:42:55.283 7efc074a6700 0 log_channel(cluster) log [INF] : Evicting (and blacklisting) client session 5631458 (v1:10.0.3.40:0/798683906)
2020-02-13 03:29:01.581 7efc0a736700 -1 Fail to open '/proc/37602/cmdline' error = (2) No such file or directory
2020-02-13 03:29:01.581 7efc0a736700 -1 received signal: Hangup from <unknown> (PID: 37602) UID: 0