Untitled — pasted by a guest, Sep 9th, 2019 (text, 25.24 KB)
nohup: ignoring input
2019-09-09T19:24:33.710+0200 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-09-09T19:24:33.712+0200 I CONTROL [initandlisten] MongoDB starting : pid=3227 port=27017 dbpath=/home/lamp/rocket.chat/mongodb 64-bit host=uvias.com
2019-09-09T19:24:33.712+0200 I CONTROL [initandlisten] db version v4.2.0
2019-09-09T19:24:33.712+0200 I CONTROL [initandlisten] git version: a4b751dcf51dd249c5865812b390cfd1c0129c30
2019-09-09T19:24:33.712+0200 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018
2019-09-09T19:24:33.712+0200 I CONTROL [initandlisten] allocator: tcmalloc
2019-09-09T19:24:33.712+0200 I CONTROL [initandlisten] modules: none
2019-09-09T19:24:33.712+0200 I CONTROL [initandlisten] build environment:
2019-09-09T19:24:33.712+0200 I CONTROL [initandlisten] distmod: ubuntu1804
2019-09-09T19:24:33.712+0200 I CONTROL [initandlisten] distarch: x86_64
2019-09-09T19:24:33.712+0200 I CONTROL [initandlisten] target_arch: x86_64
2019-09-09T19:24:33.712+0200 I CONTROL [initandlisten] options: { net: { bindIp: true, unixDomainSocket: { pathPrefix: "/home/lamp/rocket.chat/mongodb" } }, processManagement: { pidFilePath: "/home/lamp/rocket.chat/mongodb/mongod.pid" }, replication: { replSet: "rs01" }, storage: { dbPath: "/home/lamp/rocket.chat/mongodb" } }
2019-09-09T19:24:33.712+0200 W NETWORK [initandlisten] Skipping empty bind address
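The "options:" line above implies a launch command roughly like the following. This is a hedged reconstruction, not the actual command from the paste; in particular, the exact spelling of the bind-address argument is a guess, inferred from the "Skipping empty bind address" warning, which means mongod binds no TCP address and is reachable only through the Unix domain socket.

```shell
# Hedged reconstruction of the mongod invocation implied by the parsed options above.
# An empty --bind_ip value makes mongod skip all TCP bind addresses
# ("Skipping empty bind address"), leaving only the Unix socket under the prefix.
nohup mongod \
  --dbpath /home/lamp/rocket.chat/mongodb \
  --unixSocketPrefix /home/lamp/rocket.chat/mongodb \
  --pidfilepath /home/lamp/rocket.chat/mongodb/mongod.pid \
  --replSet rs01 \
  --bind_ip "" &
```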
2019-09-09T19:24:33.712+0200 I STORAGE [initandlisten]
2019-09-09T19:24:33.712+0200 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-09-09T19:24:33.712+0200 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-09-09T19:24:33.712+0200 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7463M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
2019-09-09T19:24:34.900+0200 I STORAGE [initandlisten] WiredTiger message [1568049874:900873][3227:0x7f44090cdb00], txn-recover: Set global recovery timestamp: (0,0)
2019-09-09T19:24:35.413+0200 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2019-09-09T19:24:35.667+0200 I STORAGE [initandlisten] Timestamp monitor starting
2019-09-09T19:24:35.831+0200 I CONTROL [initandlisten]
2019-09-09T19:24:35.831+0200 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-09-09T19:24:35.831+0200 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2019-09-09T19:24:35.831+0200 I CONTROL [initandlisten]
2019-09-09T19:24:35.831+0200 I CONTROL [initandlisten]
2019-09-09T19:24:35.831+0200 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 256 processes, 1024 files. Number of processes should be at least 512 : 0.5 times number of files.
2019-09-09T19:24:35.834+0200 I SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>
2019-09-09T19:24:35.834+0200 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
2019-09-09T19:24:35.835+0200 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>
2019-09-09T19:24:35.835+0200 I SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>
2019-09-09T19:24:35.835+0200 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: 6e7fb1ba-7b2b-49f5-9bd1-fbbd7eafdc76 and options: { capped: true, size: 10485760 }
2019-09-09T19:24:35.979+0200 I INDEX [initandlisten] index build: done building index _id_ on ns local.startup_log
2019-09-09T19:24:35.980+0200 I SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>
2019-09-09T19:24:35.980+0200 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/home/lamp/rocket.chat/mongodb/diagnostic.data'
2019-09-09T19:24:35.981+0200 I STORAGE [initandlisten] createCollection: local.replset.oplogTruncateAfterPoint with generated UUID: d85168c0-f8d4-4c6c-b2a0-5c321a058224 and options: {}
2019-09-09T19:24:36.000+0200 W REPL [ftdc] Rollback ID is not initialized yet.
2019-09-09T19:24:36.129+0200 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.oplogTruncateAfterPoint
2019-09-09T19:24:36.129+0200 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version: <unsharded>
2019-09-09T19:24:36.129+0200 I STORAGE [initandlisten] createCollection: local.replset.minvalid with generated UUID: c88c67be-efad-48bc-959e-fb522cdad15d and options: {}
2019-09-09T19:24:36.292+0200 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.minvalid
2019-09-09T19:24:36.292+0200 I SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version: <unsharded>
2019-09-09T19:24:36.292+0200 I STORAGE [initandlisten] createCollection: local.replset.election with generated UUID: d5331e93-554d-4f85-8445-97077d44dc30 and options: {}
2019-09-09T19:24:36.434+0200 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.election
2019-09-09T19:24:36.434+0200 I SHARDING [initandlisten] Marking collection local.replset.election as collection version: <unsharded>
2019-09-09T19:24:36.435+0200 I REPL [initandlisten] Did not find local initialized voted for document at startup.
2019-09-09T19:24:36.435+0200 I REPL [initandlisten] Did not find local Rollback ID document at startup. Creating one.
2019-09-09T19:24:36.435+0200 I STORAGE [initandlisten] createCollection: local.system.rollback.id with generated UUID: 84806012-cb3e-4b63-9cd9-4f1815e73a6d and options: {}
2019-09-09T19:24:36.566+0200 I INDEX [initandlisten] index build: done building index _id_ on ns local.system.rollback.id
2019-09-09T19:24:36.566+0200 I SHARDING [initandlisten] Marking collection local.system.rollback.id as collection version: <unsharded>
2019-09-09T19:24:36.566+0200 I REPL [initandlisten] Initialized the rollback ID to 1
2019-09-09T19:24:36.566+0200 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2019-09-09T19:24:36.566+0200 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2019-09-09T19:24:36.566+0200 I NETWORK [initandlisten] Listening on /home/lamp/rocket.chat/mongodb/mongodb-27017.sock
2019-09-09T19:24:36.566+0200 I SHARDING [LogicalSessionCacheReap] Marking collection config.system.sessions as collection version: <unsharded>
2019-09-09T19:24:36.566+0200 I NETWORK [initandlisten] waiting for connections on port 27017
2019-09-09T19:24:36.566+0200 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2019-09-09T19:26:00.652+0200 I NETWORK [listener] connection accepted from anonymous unix socket:27017 #1 (1 connection now open)
2019-09-09T19:26:00.652+0200 I NETWORK [conn1] received client metadata from anonymous unix socket:27017 conn1: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2019-09-09T19:27:52.747+0200 I COMMAND [conn1] initiate : no configuration specified. Using a default configuration for the set
2019-09-09T19:27:52.747+0200 I NETWORK [conn1] getaddrinfo("") failed: Name or service not known
2019-09-09T19:27:52.956+0200 I COMMAND [conn1] created this configuration for initiation : { _id: "rs01", version: 1, members: [ { _id: 0, host: ":27017" } ] }
2019-09-09T19:27:52.956+0200 I REPL [conn1] replSetInitiate admin command received from client
2019-09-09T19:27:52.956+0200 E REPL [conn1] replSet initiate got InvalidReplicaSetConfig: FailedToParse: Empty host component parsing HostAndPort from ":27017" for member:{ _id: 0, host: ":27017" } while parsing { _id: "rs01", version: 1, members: [ { _id: 0, host: ":27017" } ] }
2019-09-09T19:27:52.956+0200 I COMMAND [conn1] command admin.$cmd appName: "MongoDB Shell" command: replSetInitiate { replSetInitiate: undefined, lsid: { id: UUID("8a395b1c-f571-478b-adad-255591049871") }, $clusterTime: { clusterTime: Timestamp(0, 0), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 ok:0 errMsg:"FailedToParse: Empty host component parsing HostAndPort from \":27017\" for member:{ _id: 0, host: \":27017\" }" errName:InvalidReplicaSetConfig errCode:93 reslen:331 locks:{} protocol:op_msg 209ms
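The failure above comes from a bare rs.initiate(): with no TCP bind address, mongod cannot derive a hostname (getaddrinfo("") fails), so the default config gets the empty host ":27017". The successful retry later in the log supplies the Unix socket path as the member host explicitly; a hedged sketch of that invocation, reconstructed from the config the log reports, would be:

```shell
# Hedged sketch of the explicit initiation that succeeds below: the member "host"
# names the Unix socket path directly instead of letting rs.initiate() guess a
# hostname (which produced the invalid ":27017" member above).
mongo --host /home/lamp/rocket.chat/mongodb/mongodb-27017.sock --eval '
  rs.initiate({
    _id: "rs01",
    members: [ { _id: 0, host: "/home/lamp/rocket.chat/mongodb/mongodb-27017.sock" } ]
  })
'
```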
2019-09-09T19:29:36.566+0200 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2019-09-09T19:29:36.566+0200 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2019-09-09T19:29:57.708+0200 I REPL [conn1] replSetInitiate admin command received from client
2019-09-09T19:29:57.830+0200 W NETWORK [conn1] getaddrinfo("/home/lamp/rocket.chat/mongodb/mongodb-27017.sock") failed: No address associated with hostname
2019-09-09T19:29:57.830+0200 I NETWORK [listener] connection accepted from anonymous unix socket:27017 #3 (2 connections now open)
2019-09-09T19:29:57.831+0200 I REPL [conn1] replSetInitiate config object with 1 members parses ok
2019-09-09T19:29:57.831+0200 I NETWORK [conn3] end connection anonymous unix socket:27017 (1 connection now open)
2019-09-09T19:29:57.831+0200 I REPL [conn1] ******
2019-09-09T19:29:57.831+0200 I REPL [conn1] creating replication oplog of size: 17841MB...
2019-09-09T19:29:57.831+0200 I STORAGE [conn1] createCollection: local.oplog.rs with generated UUID: 1ec99ecb-a8c5-4327-af17-b001e6300ec6 and options: { capped: true, size: 18708133683.0, autoIndexId: false }
2019-09-09T19:29:58.012+0200 I STORAGE [conn1] Starting OplogTruncaterThread local.oplog.rs
2019-09-09T19:29:58.012+0200 I STORAGE [conn1] The size storer reports that the oplog contains 0 records totaling to 0 bytes
2019-09-09T19:29:58.012+0200 I STORAGE [conn1] Scanning the oplog to determine where to place markers for truncation
2019-09-09T19:29:58.241+0200 I REPL [conn1] ******
2019-09-09T19:29:58.241+0200 I STORAGE [conn1] createCollection: local.system.replset with generated UUID: f52ec21a-03c2-4459-a13d-84c414043026 and options: {}
2019-09-09T19:29:58.881+0200 I INDEX [conn1] index build: done building index _id_ on ns local.system.replset
2019-09-09T19:29:58.942+0200 I SHARDING [conn1] Marking collection local.replset.oplogTruncateAfterPoint as collection version: <unsharded>
2019-09-09T19:29:58.942+0200 I STORAGE [conn1] createCollection: admin.system.version with provided UUID: 072cf26d-b777-4e1d-b183-87f172411503 and options: { uuid: UUID("072cf26d-b777-4e1d-b183-87f172411503") }
2019-09-09T19:30:01.613+0200 I INDEX [conn1] index build: done building index _id_ on ns admin.system.version
2019-09-09T19:30:01.613+0200 I COMMAND [conn1] setting featureCompatibilityVersion to 4.2
2019-09-09T19:30:01.613+0200 I NETWORK [conn1] Skip closing connection for connection # 1
2019-09-09T19:30:01.614+0200 I REPL [conn1] New replica set config in use: { _id: "rs01", version: 1, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "/home/lamp/rocket.chat/mongodb/mongodb-27017.sock", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5d768c15fdff0c504a3426e2') } }
2019-09-09T19:30:01.614+0200 I REPL [conn1] This node is /home/lamp/rocket.chat/mongodb/mongodb-27017.sock in the config
2019-09-09T19:30:01.614+0200 I REPL [conn1] transition to STARTUP2 from STARTUP
2019-09-09T19:30:01.614+0200 I REPL [conn1] Starting replication storage threads
2019-09-09T19:30:01.615+0200 I REPL [conn1] transition to RECOVERING from STARTUP2
2019-09-09T19:30:01.615+0200 I REPL [conn1] Starting replication fetcher thread
2019-09-09T19:30:01.615+0200 I REPL [conn1] Starting replication applier thread
2019-09-09T19:30:01.615+0200 I REPL [conn1] Starting replication reporter thread
2019-09-09T19:30:01.615+0200 I REPL [rsSync-0] Starting oplog application
2019-09-09T19:30:01.615+0200 I COMMAND [conn1] command local.system.replset appName: "MongoDB Shell" command: replSetInitiate { replSetInitiate: { _id: "rs01", version: 1.0, members: [ { _id: 0.0, host: "/home/lamp/rocket.chat/mongodb/mongodb-27017.sock" } ] }, lsid: { id: UUID("8a395b1c-f571-478b-adad-255591049871") }, $clusterTime: { clusterTime: Timestamp(0, 0), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:163 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 19 } }, ReplicationStateTransition: { acquireCount: { w: 19 } }, Global: { acquireCount: { r: 4, w: 13, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 478 } }, Database: { acquireCount: { r: 2, w: 2, W: 11 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 1444 } }, Collection: { acquireCount: { r: 2, w: 2 } }, Mutex: { acquireCount: { r: 16 } }, oplog: { acquireCount: { r: 1, w: 1 } } } flowControl:{ acquireCount: 4 } storage:{} protocol:op_msg 3906ms
2019-09-09T19:30:01.615+0200 I REPL [rsSync-0] transition to SECONDARY from RECOVERING
2019-09-09T19:30:01.615+0200 I ELECTION [rsSync-0] conducting a dry run election to see if we could be elected. current term: 0
2019-09-09T19:30:01.615+0200 I ELECTION [replexec-0] dry election run succeeded, running for election in term 1
2019-09-09T19:30:01.883+0200 I ELECTION [replexec-0] election succeeded, assuming primary role in term 1
2019-09-09T19:30:01.883+0200 I REPL [replexec-0] transition to PRIMARY from SECONDARY
2019-09-09T19:30:01.883+0200 I REPL [replexec-0] Resetting sync source to empty, which was :27017
2019-09-09T19:30:01.883+0200 I REPL [replexec-0] Entering primary catch-up mode.
2019-09-09T19:30:01.883+0200 I REPL [replexec-0] Exited primary catch-up mode.
2019-09-09T19:30:01.883+0200 I REPL [replexec-0] Stopping replication producer
2019-09-09T19:30:02.615+0200 I REPL [RstlKillOpThread] Starting to kill user operations
2019-09-09T19:30:02.615+0200 I REPL [RstlKillOpThread] Stopped killing user operations
2019-09-09T19:30:03.616+0200 I REPL [RstlKillOpThread] Starting to kill user operations
2019-09-09T19:30:03.616+0200 I REPL [RstlKillOpThread] Stopped killing user operations
2019-09-09T19:30:03.616+0200 I SHARDING [rsSync-0] Marking collection config.transactions as collection version: <unsharded>
2019-09-09T19:30:03.616+0200 I STORAGE [rsSync-0] createCollection: config.transactions with generated UUID: 32e40538-c0a3-4935-96db-973bc94fe5c7 and options: {}
2019-09-09T19:30:04.004+0200 I INDEX [rsSync-0] index build: done building index _id_ on ns config.transactions
2019-09-09T19:30:04.005+0200 I REPL [rsSync-0] transition to primary complete; database writes are now permitted
2019-09-09T19:30:04.005+0200 I SHARDING [monitoring-keys-for-HMAC] Marking collection admin.system.keys as collection version: <unsharded>
2019-09-09T19:30:04.005+0200 I STORAGE [monitoring-keys-for-HMAC] createCollection: admin.system.keys with generated UUID: 4b737513-faac-49d8-81aa-5bce9541f69b and options: {}
2019-09-09T19:30:04.066+0200 I STORAGE [WTJournalFlusher] Triggering the first stable checkpoint. Initial Data: Timestamp(1568050198, 1) PrevStable: Timestamp(0, 0) CurrStable: Timestamp(1568050204, 1)
2019-09-09T19:30:04.397+0200 I INDEX [monitoring-keys-for-HMAC] index build: done building index _id_ on ns admin.system.keys
2019-09-09T19:30:04.520+0200 I COMMAND [monitoring-keys-for-HMAC] command admin.system.keys command: insert { insert: "system.keys", bypassDocumentValidation: false, ordered: true, documents: [ { _id: 6734724344666128385, purpose: "HMAC", key: BinData(0, 0E074EC6F829A236E12D340BBE3DEC517E24EA42), expiresAt: Timestamp(1575826204, 0) } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, $db: "admin" } ninserted:1 keysInserted:1 numYields:0 reslen:230 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { w: 3 } }, Database: { acquireCount: { W: 3 } }, Collection: { acquireCount: { r: 2, w: 2, W: 1 } }, Mutex: { acquireCount: { r: 5 } } } flowControl:{ acquireCount: 3 } storage:{} protocol:op_msg 514ms
2019-09-09T19:30:04.642+0200 I COMMAND [monitoring-keys-for-HMAC] command admin.system.keys command: insert { insert: "system.keys", bypassDocumentValidation: false, ordered: true, documents: [ { _id: 6734724344666128386, purpose: "HMAC", key: BinData(0, E1A856E478BA8448BA46BCD20EA72D848B0087BF), expiresAt: Timestamp(1583602204, 0) } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, $db: "admin" } ninserted:1 keysInserted:1 numYields:0 reslen:230 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { W: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 4 } storage:{} protocol:op_msg 121ms
2019-09-09T19:33:07.140+0200 I NETWORK [conn1] end connection anonymous unix socket:27017 (0 connections now open)
2019-09-09T19:34:14.661+0200 I NETWORK [listener] connection accepted from anonymous unix socket:27017 #4 (1 connection now open)
2019-09-09T19:34:14.665+0200 I NETWORK [conn4] received client metadata from anonymous unix socket:27017 conn4: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-55-generic" }, platform: "Node.js v8.16.1, LE, mongodb-core: 3.1.5" }
2019-09-09T19:34:14.670+0200 I NETWORK [conn4] end connection anonymous unix socket:27017 (0 connections now open)
2019-09-09T19:34:36.566+0200 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2019-09-09T19:34:36.566+0200 I STORAGE [LogicalSessionCacheRefresh] createCollection: config.system.sessions with provided UUID: b135367a-eb0b-4cf2-aca6-55882641819b and options: { uuid: UUID("b135367a-eb0b-4cf2-aca6-55882641819b") }
2019-09-09T19:34:37.423+0200 I INDEX [LogicalSessionCacheRefresh] index build: done building index _id_ on ns config.system.sessions
2019-09-09T19:34:38.182+0200 I INDEX [LogicalSessionCacheRefresh] index build: starting on config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", ns: "config.system.sessions", expireAfterSeconds: 1800 } using method: Hybrid
2019-09-09T19:34:38.182+0200 I INDEX [LogicalSessionCacheRefresh] build may temporarily use up to 500 megabytes of RAM
2019-09-09T19:34:38.182+0200 I INDEX [LogicalSessionCacheRefresh] index build: collection scan done. scanned 0 total records in 0 seconds
2019-09-09T19:34:38.182+0200 I INDEX [LogicalSessionCacheRefresh] index build: inserted 0 keys from external sorter into index in 0 seconds
2019-09-09T19:34:38.430+0200 I INDEX [LogicalSessionCacheRefresh] index build: done building index lsidTTLIndex on ns config.system.sessions
2019-09-09T19:34:38.523+0200 I COMMAND [LogicalSessionCacheRefresh] command config.system.sessions command: createIndexes { createIndexes: "system.sessions", indexes: [ { key: { lastUse: 1 }, name: "lsidTTLIndex", expireAfterSeconds: 1800 } ], $db: "config" } numYields:0 reslen:239 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 2 } }, Collection: { acquireCount: { r: 4, w: 1, R: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 1956ms
2019-09-09T19:42:56.393+0200 I NETWORK [listener] connection accepted from anonymous unix socket:27017 #5 (1 connection now open)
2019-09-09T19:42:56.393+0200 I NETWORK [conn5] received client metadata from anonymous unix socket:27017 conn5: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.0" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2019-09-09T19:45:24.667+0200 I NETWORK [conn5] end connection anonymous unix socket:27017 (0 connections now open)
2019-09-09T19:49:36.881+0200 I COMMAND [LogicalSessionCacheRefresh] command config.$cmd command: delete { delete: "system.sessions", ordered: false, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 reslen:230 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 3 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 314ms
2019-09-09T19:51:56.569+0200 I CONTROL [signalProcessingThread] got signal 1 (Hangup), will terminate after current cmd ends
2019-09-09T19:51:56.569+0200 I REPL [RstlKillOpThread] Starting to kill user operations
2019-09-09T19:51:56.569+0200 I REPL [RstlKillOpThread] Stopped killing user operations
2019-09-09T19:52:06.575+0200 I REPL [RstlKillOpThread] Starting to kill user operations
2019-09-09T19:52:06.575+0200 I REPL [RstlKillOpThread] Stopped killing user operations
2019-09-09T19:52:06.575+0200 I STORAGE [signalProcessingThread] Failed to stepDown in non-command initiated shutdown path ExceededTimeLimit: No electable secondaries caught up as of 2019-09-09T19:52:06.575+0200. Please use the replSetStepDown command with the argument {force: true} to force node to step down.
2019-09-09T19:52:06.576+0200 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
2019-09-09T19:52:06.576+0200 I NETWORK [signalProcessingThread] removing socket file: /home/lamp/rocket.chat/mongodb/mongodb-27017.sock
2019-09-09T19:52:06.576+0200 I - [signalProcessingThread] Stopping further Flow Control ticket acquisitions.
2019-09-09T19:52:06.576+0200 I REPL [signalProcessingThread] shutting down replication subsystems
2019-09-09T19:52:06.576+0200 I REPL [signalProcessingThread] Stopping replication reporter thread
2019-09-09T19:52:06.576+0200 I REPL [signalProcessingThread] Stopping replication fetcher thread
2019-09-09T19:52:06.576+0200 I REPL [signalProcessingThread] Stopping replication applier thread
2019-09-09T19:52:06.576+0200 I REPL [rsSync-0] Finished oplog application
2019-09-09T19:52:06.704+0200 I REPL [rsBackgroundSync] Stopping replication producer
2019-09-09T19:52:06.704+0200 I REPL [signalProcessingThread] Stopping replication storage threads
2019-09-09T19:52:06.705+0200 I ASIO [RS] Killing all outstanding egress activity.
2019-09-09T19:52:06.705+0200 I ASIO [RS] Killing all outstanding egress activity.
2019-09-09T19:52:06.705+0200 I ASIO [Replication] Killing all outstanding egress activity.
2019-09-09T19:52:06.706+0200 I CONTROL [signalProcessingThread] Shutting down free monitoring
2019-09-09T19:52:06.706+0200 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
2019-09-09T19:52:06.710+0200 I STORAGE [signalProcessingThread] Deregistering all the collections
2019-09-09T19:52:06.710+0200 I STORAGE [WTOplogJournalThread] Oplog journal thread loop shutting down
2019-09-09T19:52:06.710+0200 I STORAGE [signalProcessingThread] Timestamp monitor shutting down
2019-09-09T19:52:06.710+0200 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
2019-09-09T19:52:06.999+0200 I STORAGE [signalProcessingThread] Shutting down session sweeper thread
2019-09-09T19:52:06.999+0200 I STORAGE [signalProcessingThread] Finished shutting down session sweeper thread
2019-09-09T19:52:06.999+0200 I STORAGE [signalProcessingThread] Shutting down journal flusher thread
2019-09-09T19:52:07.099+0200 I STORAGE [signalProcessingThread] Finished shutting down journal flusher thread
2019-09-09T19:52:07.099+0200 I STORAGE [signalProcessingThread] Shutting down checkpoint thread
2019-09-09T19:52:07.099+0200 I STORAGE [signalProcessingThread] Finished shutting down checkpoint thread
2019-09-09T19:52:08.937+0200 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
2019-09-09T19:52:08.937+0200 I CONTROL [signalProcessingThread] now exiting
2019-09-09T19:52:08.937+0200 I CONTROL [signalProcessingThread] shutting down with code:0