- Attaching to docker-cuckoo_web_1, docker-cuckoo_api_1, docker-cuckoo_cuckoo_1, docker-cuckoo_postgres_1, docker-cuckoo_elasticsearch_1, docker-cuckoo_mongo_1
- api_1 | ===> Use default ports and hosts if not specified...
- api_1 | ES_HOST=elasticsearch
- api_1 | ES_PORT=9200
- api_1 | MONGO_HOST=mongo
- api_1 | MONGO_TCP_PORT=27017
- api_1 | POSTGRES_HOST=postgres
- api_1 | POSTGRES_TCP_PORT=5432
- api_1 | RESULTSERVER_HOST=0.0.0.0
- api_1 | RESULTSERVER_PORT=2042
- api_1 |
- api_1 | ===> Update /cuckoo/conf/reporting.conf if needed...
- api_1 |
- api_1 | ===> Waiting on elasticsearch(http://elasticsearch:9200) to start....
- api_1 | Elasticsearch is ready!
- api_1 |
- api_1 | ===> Waiting for MongoDB(mongo:27017) to start...MongoDB is ready!
- api_1 |
- api_1 | ===> Waiting for Postgres(postgres:5432) to start....Postgres is ready!
- api_1 | 2019-04-09 09:06:09,000 [werkzeug] INFO: * Running on http://0.0.0.0:1337/ (Press CTRL+C to quit)
- postgres_1 | The files belonging to this database system will be owned by user "postgres".
- postgres_1 | This user must also own the server process.
- postgres_1 |
- postgres_1 | The database cluster will be initialized with locale "en_US.utf8".
- postgres_1 | The default database encoding has accordingly been set to "UTF8".
- postgres_1 | The default text search configuration will be set to "english".
- postgres_1 |
- postgres_1 | Data page checksums are disabled.
- postgres_1 |
- postgres_1 | fixing permissions on existing directory /var/lib/postgresql/data/pgdata ... ok
- postgres_1 | creating subdirectories ... ok
- postgres_1 | selecting default max_connections ... 100
- postgres_1 | selecting default shared_buffers ... 128MB
- postgres_1 | selecting dynamic shared memory implementation ... posix
- postgres_1 | creating configuration files ... ok
- postgres_1 | running bootstrap script ... ok
- postgres_1 | performing post-bootstrap initialization ... ok
- postgres_1 | syncing data to disk ... ok
- postgres_1 |
- postgres_1 | Success. You can now start the database server using:
- postgres_1 |
- postgres_1 | pg_ctl -D /var/lib/postgresql/data/pgdata -l logfile start
- postgres_1 |
- postgres_1 |
- postgres_1 | WARNING: enabling "trust" authentication for local connections
- postgres_1 | You can change this by editing pg_hba.conf or using the option -A, or
- postgres_1 | --auth-local and --auth-host, the next time you run initdb.
- postgres_1 | waiting for server to start....2019-04-09 09:04:06.075 UTC [43] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
- postgres_1 | 2019-04-09 09:04:06.185 UTC [44] LOG: database system was shut down at 2019-04-09 09:04:00 UTC
- postgres_1 | 2019-04-09 09:04:06.228 UTC [43] LOG: database system is ready to accept connections
- postgres_1 | done
- postgres_1 | server started
- postgres_1 |
- postgres_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
- postgres_1 |
- postgres_1 | waiting for server to shut down...2019-04-09 09:04:06.323 UTC [43] LOG: received fast shutdown request
- postgres_1 | .2019-04-09 09:04:06.377 UTC [43] LOG: aborting any active transactions
- postgres_1 | 2019-04-09 09:04:06.379 UTC [43] LOG: background worker "logical replication launcher" (PID 50) exited with exit code 1
- postgres_1 | 2019-04-09 09:04:06.380 UTC [45] LOG: shutting down
- postgres_1 | 2019-04-09 09:04:06.684 UTC [43] LOG: database system is shut down
- postgres_1 | done
- postgres_1 | server stopped
- postgres_1 |
- postgres_1 | PostgreSQL init process complete; ready for start up.
- postgres_1 |
- postgres_1 | 2019-04-09 09:04:06.774 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
- postgres_1 | 2019-04-09 09:04:06.774 UTC [1] LOG: listening on IPv6 address "::", port 5432
- postgres_1 | 2019-04-09 09:04:07.316 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
- postgres_1 | 2019-04-09 09:04:08.011 UTC [52] LOG: database system was shut down at 2019-04-09 09:04:06 UTC
- postgres_1 | 2019-04-09 09:04:08.012 UTC [54] LOG: incomplete startup packet
- postgres_1 | 2019-04-09 09:04:08.012 UTC [53] LOG: incomplete startup packet
- postgres_1 | 2019-04-09 09:04:08.012 UTC [55] LOG: incomplete startup packet
- postgres_1 | 2019-04-09 09:04:08.088 UTC [1] LOG: database system is ready to accept connections
- postgres_1 | 2019-04-09 09:06:06.929 UTC [64] ERROR: duplicate key value violates unique constraint "pg_type_typname_nsp_index"
- postgres_1 | 2019-04-09 09:06:06.929 UTC [64] DETAIL: Key (typname, typnamespace)=(status_type, 2200) already exists.
- postgres_1 | 2019-04-09 09:06:06.929 UTC [64] STATEMENT: CREATE TYPE status_type AS ENUM ('pending', 'running', 'completed', 'reported', 'recovered', 'failed_analysis', 'failed_processing', 'failed_reporting')
- postgres_1 | 2019-04-09 09:06:06.929 UTC [63] ERROR: duplicate key value violates unique constraint "pg_type_typname_nsp_index"
- postgres_1 | 2019-04-09 09:06:06.929 UTC [63] DETAIL: Key (typname, typnamespace)=(status_type, 2200) already exists.
- postgres_1 | 2019-04-09 09:06:06.929 UTC [63] STATEMENT: CREATE TYPE status_type AS ENUM ('pending', 'running', 'completed', 'reported', 'recovered', 'failed_analysis', 'failed_processing', 'failed_reporting')
- cuckoo_1 | ===> Use default ports and hosts if not specified...
- cuckoo_1 | ES_HOST=elasticsearch
- cuckoo_1 | ES_PORT=9200
- cuckoo_1 | MONGO_HOST=mongo
- cuckoo_1 | MONGO_TCP_PORT=27017
- cuckoo_1 | POSTGRES_HOST=postgres
- cuckoo_1 | POSTGRES_TCP_PORT=5432
- cuckoo_1 | RESULTSERVER=0.0.0.0
- cuckoo_1 | RESULTSERVER_HOST=0.0.0.0
- cuckoo_1 | RESULTSERVER_PORT=2042
- cuckoo_1 |
- cuckoo_1 | ===> Update /cuckoo/conf/reporting.conf if needed...
- cuckoo_1 |
- cuckoo_1 | ===> Waiting on elasticsearch(http://elasticsearch:9200) to start......
- cuckoo_1 | Elasticsearch is ready!
- cuckoo_1 |
- cuckoo_1 | ===> Waiting for MongoDB(mongo:27017) to start...MongoDB is ready!
- cuckoo_1 |
- cuckoo_1 | ===> Waiting for Postgres(postgres:5432) to start....Postgres is ready!
- cuckoo_1 | [ASCII-art Cuckoo banner]
- cuckoo_1 | it's Cuckoo!
- cuckoo_1 |
- cuckoo_1 | Cuckoo Sandbox 2.0.5
- cuckoo_1 | www.cuckoosandbox.org
- cuckoo_1 | Copyright (c) 2010-2017
- cuckoo_1 |
- cuckoo_1 | 2019-04-09 09:06:06,929 [cuckoo] CRITICAL: CuckooDatabaseError: Unable to create or connect to database: (psycopg2.IntegrityError) duplicate key value violates unique constraint "pg_type_typname_nsp_index"
- cuckoo_1 | DETAIL: Key (typname, typnamespace)=(status_type, 2200) already exists.
- cuckoo_1 | [SQL: "CREATE TYPE status_type AS ENUM ('pending', 'running', 'completed', 'reported', 'recovered', 'failed_analysis', 'failed_processing', 'failed_reporting')"]
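
Note on the CRITICAL above: it is the client-side view of the two duplicate-key errors postgres_1 logged at 09:06:06.929. cuckoo_1 and web_1 both run Cuckoo's first-start schema creation at the same moment, and both issue the same CREATE TYPE status_type statement, so whichever arrives second trips the unique index on pg_type. Below is a minimal guard-and-retry sketch for that statement; the DSN, database name, and helper function are illustrative assumptions, not code shipped in the cuckoo images.

    # Illustrative sketch (not Cuckoo's own code): tolerate two containers racing
    # to create the status_type enum. Connection values are assumptions based on
    # the POSTGRES_HOST / POSTGRES_TCP_PORT variables printed above.
    import time
    import psycopg2

    DSN = "host=postgres port=5432 dbname=cuckoo user=postgres"  # assumed DSN

    STATUS_TYPE_SQL = (
        "CREATE TYPE status_type AS ENUM ("
        "'pending', 'running', 'completed', 'reported', 'recovered', "
        "'failed_analysis', 'failed_processing', 'failed_reporting')"
    )

    def create_status_type(retries=3):
        for _ in range(retries):
            conn = psycopg2.connect(DSN)
            conn.autocommit = True
            cur = conn.cursor()
            try:
                # Skip creation if another container already made the type.
                cur.execute("SELECT 1 FROM pg_type WHERE typname = 'status_type'")
                if cur.fetchone() is None:
                    cur.execute(STATUS_TYPE_SQL)
                return True
            except psycopg2.IntegrityError:
                # Lost the race between the check and the CREATE; back off, re-check.
                time.sleep(1)
            finally:
                cur.close()
                conn.close()
        return False

The check-then-create still leaves a small window, which is why the IntegrityError handler re-checks on the next loop instead of treating the duplicate as fatal.
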
- elasticsearch_1 | [2019-04-09T09:03:59,561][INFO ][o.e.n.Node ] [] initializing ...
- elasticsearch_1 | [2019-04-09T09:03:59,856][INFO ][o.e.e.NodeEnvironment ] [1nKTJhI] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda3)]], net usable_space [102.1gb], net total_space [112.2gb], spins? [possibly], types [ext4]
- elasticsearch_1 | [2019-04-09T09:03:59,857][INFO ][o.e.e.NodeEnvironment ] [1nKTJhI] heap size [1.9gb], compressed ordinary object pointers [true]
- elasticsearch_1 | [2019-04-09T09:03:59,858][INFO ][o.e.n.Node ] node name [1nKTJhI] derived from node ID [1nKTJhIPTpmheAQwrj4q1A]; set [node.name] to override
- elasticsearch_1 | [2019-04-09T09:03:59,858][INFO ][o.e.n.Node ] version[5.6.15], pid[1], build[fe7575a/2019-02-13T16:21:45.880Z], OS[Linux/4.15.0-45-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_201/25.201-b08]
- elasticsearch_1 | [2019-04-09T09:03:59,858][INFO ][o.e.n.Node ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.cgroups.hierarchy.override=/, -Des.path.home=/usr/share/elasticsearch]
- elasticsearch_1 | [2019-04-09T09:04:00,670][INFO ][o.e.p.PluginsService ] [1nKTJhI] loaded module [aggs-matrix-stats]
- elasticsearch_1 | [2019-04-09T09:04:00,670][INFO ][o.e.p.PluginsService ] [1nKTJhI] loaded module [ingest-common]
- elasticsearch_1 | [2019-04-09T09:04:00,670][INFO ][o.e.p.PluginsService ] [1nKTJhI] loaded module [lang-expression]
- elasticsearch_1 | [2019-04-09T09:04:00,670][INFO ][o.e.p.PluginsService ] [1nKTJhI] loaded module [lang-groovy]
- elasticsearch_1 | [2019-04-09T09:04:00,670][INFO ][o.e.p.PluginsService ] [1nKTJhI] loaded module [lang-mustache]
- elasticsearch_1 | [2019-04-09T09:04:00,670][INFO ][o.e.p.PluginsService ] [1nKTJhI] loaded module [lang-painless]
- elasticsearch_1 | [2019-04-09T09:04:00,670][INFO ][o.e.p.PluginsService ] [1nKTJhI] loaded module [parent-join]
- elasticsearch_1 | [2019-04-09T09:04:00,670][INFO ][o.e.p.PluginsService ] [1nKTJhI] loaded module [percolator]
- elasticsearch_1 | [2019-04-09T09:04:00,670][INFO ][o.e.p.PluginsService ] [1nKTJhI] loaded module [reindex]
- elasticsearch_1 | [2019-04-09T09:04:00,670][INFO ][o.e.p.PluginsService ] [1nKTJhI] loaded module [transport-netty3]
- elasticsearch_1 | [2019-04-09T09:04:00,670][INFO ][o.e.p.PluginsService ] [1nKTJhI] loaded module [transport-netty4]
- elasticsearch_1 | [2019-04-09T09:04:00,671][INFO ][o.e.p.PluginsService ] [1nKTJhI] no plugins loaded
- elasticsearch_1 | [2019-04-09T09:04:01,808][INFO ][o.e.d.DiscoveryModule ] [1nKTJhI] using discovery type [zen]
- elasticsearch_1 | [2019-04-09T09:04:02,243][INFO ][o.e.n.Node ] initialized
- elasticsearch_1 | [2019-04-09T09:04:02,243][INFO ][o.e.n.Node ] [1nKTJhI] starting ...
- elasticsearch_1 | [2019-04-09T09:04:02,389][INFO ][o.e.t.TransportService ] [1nKTJhI] publish_address {172.21.0.2:9300}, bound_addresses {0.0.0.0:9300}
- elasticsearch_1 | [2019-04-09T09:04:02,398][INFO ][o.e.b.BootstrapChecks ] [1nKTJhI] bound or publishing to a non-loopback address, enforcing bootstrap checks
- elasticsearch_1 | [2019-04-09T09:04:05,442][INFO ][o.e.c.s.ClusterService ] [1nKTJhI] new_master {1nKTJhI}{1nKTJhIPTpmheAQwrj4q1A}{jq23aeIfTki9tmy72eavkA}{172.21.0.2}{172.21.0.2:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
- elasticsearch_1 | [2019-04-09T09:04:05,456][INFO ][o.e.h.n.Netty4HttpServerTransport] [1nKTJhI] publish_address {172.21.0.2:9200}, bound_addresses {0.0.0.0:9200}
- elasticsearch_1 | [2019-04-09T09:04:05,456][INFO ][o.e.n.Node ] [1nKTJhI] started
- elasticsearch_1 | [2019-04-09T09:04:05,666][INFO ][o.e.g.GatewayService ] [1nKTJhI] recovered [0] indices into cluster_state
- web_1 | ===> Use default ports and hosts if not specified...
- web_1 | ES_HOST=elasticsearch
- web_1 | ES_PORT=9200
- web_1 | MONGO_HOST=mongo
- web_1 | MONGO_TCP_PORT=27017
- web_1 | POSTGRES_HOST=postgres
- web_1 | POSTGRES_TCP_PORT=5432
- web_1 | RESULTSERVER_HOST=0.0.0.0
- web_1 | RESULTSERVER_PORT=2042
- web_1 |
- web_1 | ===> Update /cuckoo/conf/reporting.conf if needed...
- web_1 |
- web_1 | ===> Waiting on elasticsearch(http://elasticsearch:9200) to start...
- web_1 | Elasticsearch is ready!
- web_1 |
- web_1 | ===> Waiting for MongoDB(mongo:27017) to start...MongoDB is ready!
- web_1 |
- web_1 | ===> Waiting for Postgres(postgres:5432) to start....Postgres is ready!
- web_1 | Traceback (most recent call last):
- web_1 | File "/usr/bin/cuckoo", line 11, in <module>
- web_1 | load_entry_point('Cuckoo==2.0.5.3', 'console_scripts', 'cuckoo')()
- web_1 | File "/usr/lib/python2.7/site-packages/click/core.py", line 716, in __call__
- web_1 | return self.main(*args, **kwargs)
- web_1 | File "/usr/lib/python2.7/site-packages/click/core.py", line 696, in main
- web_1 | rv = self.invoke(ctx)
- web_1 | File "/usr/lib/python2.7/site-packages/click/core.py", line 1060, in invoke
- web_1 | return _process_result(sub_ctx.command.invoke(sub_ctx))
- web_1 | File "/usr/lib/python2.7/site-packages/click/core.py", line 889, in invoke
- web_1 | return ctx.invoke(self.callback, **ctx.params)
- web_1 | File "/usr/lib/python2.7/site-packages/click/core.py", line 534, in invoke
- web_1 | return callback(*args, **kwargs)
- web_1 | File "/usr/lib/python2.7/site-packages/click/decorators.py", line 17, in new_func
- web_1 | return f(get_current_context(), *args, **kwargs)
- web_1 | File "/usr/lib/python2.7/site-packages/cuckoo/main.py", line 570, in web
- web_1 | Database().connect()
- web_1 | File "/usr/lib/python2.7/site-packages/cuckoo/core/database.py", line 444, in connect
- web_1 | self._create_tables()
- web_1 | File "/usr/lib/python2.7/site-packages/cuckoo/core/database.py", line 452, in _create_tables
- web_1 | "Unable to create or connect to database: %s" % e
- web_1 | cuckoo.common.exceptions.CuckooDatabaseError: Unable to create or connect to database: (psycopg2.IntegrityError) duplicate key value violates unique constraint "pg_type_typname_nsp_index"
- web_1 | DETAIL: Key (typname, typnamespace)=(status_type, 2200) already exists.
- web_1 | [SQL: "CREATE TYPE status_type AS ENUM ('pending', 'running', 'completed', 'reported', 'recovered', 'failed_analysis', 'failed_processing', 'failed_reporting')"]
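
web_1 fails on the same race, reaching it through Database().connect() -> _create_tables() as the traceback shows. Another way to avoid the collision, assuming the entrypoint scripts can be extended, is to let a single container own schema creation and have the others wait until the schema is visible before starting; the gate below is an illustrative sketch with an assumed DSN, not something shipped with the stack.

    # Illustrative wait-for-schema gate (an assumption, not part of the images):
    # run it before starting the web/api services so only cuckoo_1 performs the
    # initial CREATE TYPE / CREATE TABLE work.
    import sys
    import time
    import psycopg2

    DSN = "host=postgres port=5432 dbname=cuckoo user=postgres"  # assumed DSN

    def schema_ready():
        try:
            conn = psycopg2.connect(DSN)
            cur = conn.cursor()
            cur.execute("SELECT 1 FROM pg_type WHERE typname = 'status_type'")
            ready = cur.fetchone() is not None
            cur.close()
            conn.close()
            return ready
        except psycopg2.OperationalError:
            return False  # Postgres not reachable yet

    def wait_for_schema(timeout=120, interval=2):
        deadline = time.time() + timeout
        while time.time() < deadline:
            if schema_ready():
                return True
            time.sleep(interval)
        return False

    if __name__ == "__main__":
        sys.exit(0 if wait_for_schema() else 1)

Used as a gate in the web/api entrypoints (e.g. a hypothetical wait_for_schema.py invoked before the cuckoo command), it keeps web_1 and api_1 from racing cuckoo_1 during first-time initialisation.
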
- mongo_1 | 2019-04-09T09:04:00.669+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
- mongo_1 | 2019-04-09T09:04:00.672+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=48442f748050
- mongo_1 | 2019-04-09T09:04:00.672+0000 I CONTROL [initandlisten] db version v4.0.8
- mongo_1 | 2019-04-09T09:04:00.672+0000 I CONTROL [initandlisten] git version: 9b00696ed75f65e1ebc8d635593bed79b290cfbb
- mongo_1 | 2019-04-09T09:04:00.672+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
- mongo_1 | 2019-04-09T09:04:00.672+0000 I CONTROL [initandlisten] allocator: tcmalloc
- mongo_1 | 2019-04-09T09:04:00.672+0000 I CONTROL [initandlisten] modules: none
- mongo_1 | 2019-04-09T09:04:00.672+0000 I CONTROL [initandlisten] build environment:
- mongo_1 | 2019-04-09T09:04:00.672+0000 I CONTROL [initandlisten] distmod: ubuntu1604
- mongo_1 | 2019-04-09T09:04:00.673+0000 I CONTROL [initandlisten] distarch: x86_64
- mongo_1 | 2019-04-09T09:04:00.673+0000 I CONTROL [initandlisten] target_arch: x86_64
- mongo_1 | 2019-04-09T09:04:00.673+0000 I CONTROL [initandlisten] options: { net: { bindIpAll: true } }
- mongo_1 | 2019-04-09T09:04:00.673+0000 I STORAGE [initandlisten]
- mongo_1 | 2019-04-09T09:04:00.673+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
- mongo_1 | 2019-04-09T09:04:00.673+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
- mongo_1 | 2019-04-09T09:04:00.673+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=3316M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
- mongo_1 | 2019-04-09T09:04:02.229+0000 I STORAGE [initandlisten] WiredTiger message [1554800642:229491][1:0x7f36aaff4a40], txn-recover: Set global recovery timestamp: 0
- mongo_1 | 2019-04-09T09:04:02.610+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
- mongo_1 | 2019-04-09T09:04:03.915+0000 I CONTROL [initandlisten]
- mongo_1 | 2019-04-09T09:04:03.915+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
- mongo_1 | 2019-04-09T09:04:03.915+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
- mongo_1 | 2019-04-09T09:04:03.915+0000 I CONTROL [initandlisten]
- mongo_1 | 2019-04-09T09:04:03.916+0000 I STORAGE [initandlisten] createCollection: admin.system.version with provided UUID: eb94170c-a6d9-47da-9e45-29cb2c2b289f
- mongo_1 | 2019-04-09T09:04:04.293+0000 I COMMAND [initandlisten] setting featureCompatibilityVersion to 4.0
- mongo_1 | 2019-04-09T09:04:04.296+0000 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: ac03661b-0ffa-49f3-8922-98412bd56ccf
- mongo_1 | 2019-04-09T09:04:04.734+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
- mongo_1 | 2019-04-09T09:04:04.735+0000 I NETWORK [initandlisten] waiting for connections on port 27017
- mongo_1 | 2019-04-09T09:04:04.736+0000 I STORAGE [LogicalSessionCacheRefresh] createCollection: config.system.sessions with generated UUID: e9764bc6-f747-4892-88bd-6e3180235a24
- mongo_1 | 2019-04-09T09:04:05.792+0000 I INDEX [LogicalSessionCacheRefresh] build index on: config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", ns: "config.system.sessions", expireAfterSeconds: 1800 }
- mongo_1 | 2019-04-09T09:04:05.792+0000 I INDEX [LogicalSessionCacheRefresh] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
- mongo_1 | 2019-04-09T09:04:05.793+0000 I INDEX [LogicalSessionCacheRefresh] build index done. scanned 0 total records. 0 secs
- mongo_1 | 2019-04-09T09:04:05.793+0000 I COMMAND [LogicalSessionCacheRefresh] command config.$cmd command: createIndexes { createIndexes: "system.sessions", indexes: [ { key: { lastUse: 1 }, name: "lsidTTLIndex", expireAfterSeconds: 1800 } ], $db: "config" } numYields:0 reslen:114 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2, W: 1 } }, Collection: { acquireCount: { w: 2 } } } protocol:op_msg 1057ms
- mongo_1 | 2019-04-09T09:04:06.066+0000 I NETWORK [listener] connection accepted from 172.21.0.6:54300 #1 (1 connection now open)
- mongo_1 | 2019-04-09T09:04:06.066+0000 I NETWORK [conn1] end connection 172.21.0.6:54300 (0 connections now open)
- mongo_1 | 2019-04-09T09:04:06.074+0000 I NETWORK [listener] connection accepted from 172.21.0.5:51698 #2 (1 connection now open)
- mongo_1 | 2019-04-09T09:04:06.074+0000 I NETWORK [conn2] end connection 172.21.0.5:51698 (0 connections now open)
- mongo_1 | 2019-04-09T09:04:06.551+0000 I NETWORK [listener] connection accepted from 172.21.0.7:42016 #3 (1 connection now open)
- mongo_1 | 2019-04-09T09:04:06.552+0000 I NETWORK [conn3] end connection 172.21.0.7:42016 (0 connections now open)