- anton@velociti:/datasets$ mkdir mlperf-bert
- anton@velociti:/datasets$ cd !$
- cd mlperf-bert
- anton@velociti:/datasets/mlperf-bert$ l
- anton@velociti:/datasets/mlperf-bert$ git clone https://github.com/mlperf/inference
- Cloning into 'inference'...
- remote: Enumerating objects: 68, done.
- remote: Counting objects: 100% (68/68), done.
- remote: Compressing objects: 100% (60/60), done.
- remote: Total 5473 (delta 10), reused 26 (delta 3), pack-reused 5405
- Receiving objects: 100% (5473/5473), 424.58 MiB | 25.73 MiB/s, done.
- Resolving deltas: 100% (3147/3147), done.
- Checking connectivity... done.
- anton@velociti:/datasets/mlperf-bert$ ls
- inference
- anton@velociti:/datasets/mlperf-bert$ cd inference/
- anton@velociti:/datasets/mlperf-bert/inference$ l
- build/ build_overrides/ CONTRIBUTING.md LICENSE.md loadgen_pymodule_setup_lib.py Makefile README.md third_party/ v0.7/
- BUILD.gn calibration/ DEPS loadgen/ loadgen_pymodule_setup_src.py others/ SubmissionExample.ipynb* v0.5/
- anton@velociti:/datasets/mlperf-bert/inference$ cd ..
- anton@velociti:/datasets/mlperf-bert$ ln -s inference/v0.7/
- language/ mlperf.conf speech_recognition/
- anton@velociti:/datasets/mlperf-bert$ ln -s inference/v0.7/language/bert/ .
- anton@velociti:/datasets/mlperf-bert$ l
- bert@ inference/
- anton@velociti:/datasets/mlperf-bert$ cd bert
- anton@velociti:/datasets/mlperf-bert/bert$ l
- bert_config.json DeepLearningExamples/ Makefile onnxruntime_SUT.py README.md squad_eval.py tf_SUT.py
- bert_tf_to_pytorch.py Dockerfile MLPerf INT8 BERT Finetuning.pdf pytorch_SUT.py run.py squad_QSL.py user.conf
- anton@velociti:/datasets/mlperf-bert/bert$ make setup
- make[1]: Entering directory '/datasets/mlperf-bert/inference/v0.7/language/bert'
- Submodule 'v0.7/language/bert/DeepLearningExamples' (https://github.com/NVIDIA/DeepLearningExamples.git) registered for path 'DeepLearningExamples'
- Cloning into 'v0.7/language/bert/DeepLearningExamples'...
- remote: Enumerating objects: 68, done.
- remote: Counting objects: 100% (68/68), done.
- remote: Compressing objects: 100% (56/56), done.
- remote: Total 6589 (delta 20), reused 21 (delta 7), pack-reused 6521
- Receiving objects: 100% (6589/6589), 37.84 MiB | 18.31 MiB/s, done.
- Resolving deltas: 100% (3191/3191), done.
- Checking connectivity... done.
- Submodule path 'DeepLearningExamples': checked out 'b03375bd6c2c5233130e61a3be49e26d1a20ac7c'
- make[1]: Leaving directory '/datasets/mlperf-bert/inference/v0.7/language/bert'
- make[1]: Entering directory '/datasets/mlperf-bert/inference/v0.7/language/bert'
- --2020-04-30 10:24:17-- https://github.com/rajpurkar/SQuAD-explorer/blob/master/dataset/dev-v1.1.json?raw=true
- Resolving github.com (github.com)... 140.82.118.3
- Connecting to github.com (github.com)|140.82.118.3|:443... connected.
- HTTP request sent, awaiting response... 302 Found
- Location: https://github.com/rajpurkar/SQuAD-explorer/raw/master/dataset/dev-v1.1.json [following]
- --2020-04-30 10:24:18-- https://github.com/rajpurkar/SQuAD-explorer/raw/master/dataset/dev-v1.1.json
- Reusing existing connection to github.com:443.
- HTTP request sent, awaiting response... 302 Found
- Location: https://raw.githubusercontent.com/rajpurkar/SQuAD-explorer/master/dataset/dev-v1.1.json [following]
- --2020-04-30 10:24:19-- https://raw.githubusercontent.com/rajpurkar/SQuAD-explorer/master/dataset/dev-v1.1.json
- Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.60.133
- Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.60.133|:443... connected.
- HTTP request sent, awaiting response... 200 OK
- Length: 4854279 (4.6M) [text/plain]
- Saving to: ‘build/data/dev-v1.1.json’
- build/data/dev-v1.1.json 100%[================================================================================================================>] 4.63M 4.14MB/s in 1.1s
- 2020-04-30 10:24:20 (4.14 MB/s) - ‘build/data/dev-v1.1.json’ saved [4854279/4854279]
- --2020-04-30 10:24:20-- https://github.com/allenai/bi-att-flow/raw/master/squad/evaluate-v1.1.py
- Resolving github.com (github.com)... 140.82.118.3
- Connecting to github.com (github.com)|140.82.118.3|:443... connected.
- HTTP request sent, awaiting response... 302 Found
- Location: https://raw.githubusercontent.com/allenai/bi-att-flow/master/squad/evaluate-v1.1.py [following]
- --2020-04-30 10:24:21-- https://raw.githubusercontent.com/allenai/bi-att-flow/master/squad/evaluate-v1.1.py
- Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.60.133
- Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.60.133|:443... connected.
- HTTP request sent, awaiting response... 200 OK
- Length: 3419 (3.3K) [text/plain]
- Saving to: ‘build/data/evaluate-v1.1.py’
- build/data/evaluate-v1.1.py 100%[================================================================================================================>] 3.34K --.-KB/s in 0s
- 2020-04-30 10:24:21 (31.4 MB/s) - ‘build/data/evaluate-v1.1.py’ saved [3419/3419]
- make[1]: Leaving directory '/datasets/mlperf-bert/inference/v0.7/language/bert'
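The two files fetched so far by make setup are the SQuAD v1.1 dev set and the reference evaluation script. A minimal sketch for sanity-checking the download, assuming only the standard SQuAD v1.1 JSON layout (data -> paragraphs -> qas):

import json

# Path taken from the wget output above.
with open("build/data/dev-v1.1.json") as f:
    squad = json.load(f)

num_questions = sum(
    len(paragraph["qas"])
    for article in squad["data"]
    for paragraph in article["paragraphs"])
print("articles:", len(squad["data"]), "questions:", num_questions)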
- make[1]: Entering directory '/datasets/mlperf-bert/inference/v0.7/language/bert'
- make[2]: Entering directory '/datasets/mlperf-bert/inference/v0.7/language/bert'
- --2020-04-30 10:24:21-- https://zenodo.org/record/3733868/files/model.ckpt-5474.data-00000-of-00001?download=1
- Resolving zenodo.org (zenodo.org)... 188.184.117.155
- Connecting to zenodo.org (zenodo.org)|188.184.117.155|:443... connected.
- HTTP request sent, awaiting response... 200 OK
- Length: 4013330464 (3.7G) [application/octet-stream]
- Saving to: ‘build/data/bert_tf_v1_1_large_fp32_384_v2/model.ckpt-5474.data-00000-of-00001’
- build/data/bert_tf_v1_1_large_fp32_384_v2/model.ck 100%[================================================================================================================>] 3.74G 25.3MB/s in 2m 32s
- 2020-04-30 10:26:53 (25.3 MB/s) - ‘build/data/bert_tf_v1_1_large_fp32_384_v2/model.ckpt-5474.data-00000-of-00001’ saved [4013330464/4013330464]
- --2020-04-30 10:26:53-- https://zenodo.org/record/3733868/files/model.ckpt-5474.index?download=1
- Resolving zenodo.org (zenodo.org)... 188.184.117.155
- Connecting to zenodo.org (zenodo.org)|188.184.117.155|:443... connected.
- HTTP request sent, awaiting response... 200 OK
- Length: 44414 (43K) [application/octet-stream]
- Saving to: ‘build/data/bert_tf_v1_1_large_fp32_384_v2/model.ckpt-5474.index’
- build/data/bert_tf_v1_1_large_fp32_384_v2/model.ck 100%[================================================================================================================>] 43.37K --.-KB/s in 0.08s
- 2020-04-30 10:26:54 (578 KB/s) - ‘build/data/bert_tf_v1_1_large_fp32_384_v2/model.ckpt-5474.index’ saved [44414/44414]
- --2020-04-30 10:26:54-- https://zenodo.org/record/3733868/files/model.ckpt-5474.meta?download=1
- Resolving zenodo.org (zenodo.org)... 188.184.117.155
- Connecting to zenodo.org (zenodo.org)|188.184.117.155|:443... connected.
- HTTP request sent, awaiting response... 200 OK
- Length: 10629131 (10M) [application/octet-stream]
- Saving to: ‘build/data/bert_tf_v1_1_large_fp32_384_v2/model.ckpt-5474.meta’
- build/data/bert_tf_v1_1_large_fp32_384_v2/model.ck 100%[================================================================================================================>] 10.14M 18.0MB/s in 0.6s
- 2020-04-30 10:26:55 (18.0 MB/s) - ‘build/data/bert_tf_v1_1_large_fp32_384_v2/model.ckpt-5474.meta’ saved [10629131/10629131]
- --2020-04-30 10:26:55-- https://zenodo.org/record/3733868/files/vocab.txt?download=1
- Resolving zenodo.org (zenodo.org)... 188.184.117.155
- Connecting to zenodo.org (zenodo.org)|188.184.117.155|:443... connected.
- HTTP request sent, awaiting response... 200 OK
- Length: 231508 (226K) [text/plain]
- Saving to: ‘build/data/bert_tf_v1_1_large_fp32_384_v2/vocab.txt’
- build/data/bert_tf_v1_1_large_fp32_384_v2/vocab.tx 100%[================================================================================================================>] 226.08K 1.11MB/s in 0.2s
- 2020-04-30 10:26:56 (1.11 MB/s) - ‘build/data/bert_tf_v1_1_large_fp32_384_v2/vocab.txt’ saved [231508/231508]
- make[2]: Leaving directory '/datasets/mlperf-bert/inference/v0.7/language/bert'
- make[2]: Entering directory '/datasets/mlperf-bert/inference/v0.7/language/bert'
- --2020-04-30 10:26:56-- https://zenodo.org/record/3733896/files/model.pytorch?download=1
- Resolving zenodo.org (zenodo.org)... 188.184.117.155
- Connecting to zenodo.org (zenodo.org)|188.184.117.155|:443... connected.
- HTTP request sent, awaiting response... 200 OK
- Length: 1340665723 (1.2G) [application/octet-stream]
- Saving to: ‘build/data/bert_tf_v1_1_large_fp32_384_v2/model.pytorch’
- build/data/bert_tf_v1_1_large_fp32_384_v2/model.py 100%[================================================================================================================>] 1.25G 26.0MB/s in 52s
- 2020-04-30 10:27:49 (24.6 MB/s) - ‘build/data/bert_tf_v1_1_large_fp32_384_v2/model.pytorch’ saved [1340665723/1340665723]
- make[2]: Leaving directory '/datasets/mlperf-bert/inference/v0.7/language/bert'
- make[2]: Entering directory '/datasets/mlperf-bert/inference/v0.7/language/bert'
- --2020-04-30 10:27:49-- https://zenodo.org/record/3733910/files/model.onnx?download=1
- Resolving zenodo.org (zenodo.org)... 188.184.117.155
- Connecting to zenodo.org (zenodo.org)|188.184.117.155|:443... connected.
- HTTP request sent, awaiting response... 200 OK
- Length: 1340711828 (1.2G) [application/octet-stream]
- Saving to: ‘build/data/bert_tf_v1_1_large_fp32_384_v2/model.onnx’
- build/data/bert_tf_v1_1_large_fp32_384_v2/model.on 100%[================================================================================================================>] 1.25G 25.8MB/s in 53s
- 2020-04-30 10:28:43 (24.1 MB/s) - ‘build/data/bert_tf_v1_1_large_fp32_384_v2/model.onnx’ saved [1340711828/1340711828]
- --2020-04-30 10:28:43-- https://zenodo.org/record/3750364/files/bert_large_v1_1_fake_quant.onnx?download=1
- Resolving zenodo.org (zenodo.org)... 188.184.117.155
- Connecting to zenodo.org (zenodo.org)|188.184.117.155|:443... connected.
- HTTP request sent, awaiting response... 200 OK
- Length: 1340870170 (1.2G) [application/octet-stream]
- Saving to: ‘build/data/bert_tf_v1_1_large_fp32_384_v2/bert_large_v1_1_fake_quant.onnx’
- build/data/bert_tf_v1_1_large_fp32_384_v2/bert_lar 100%[================================================================================================================>] 1.25G 25.9MB/s in 57s
- 2020-04-30 10:29:40 (22.6 MB/s) - ‘build/data/bert_tf_v1_1_large_fp32_384_v2/bert_large_v1_1_fake_quant.onnx’ saved [1340870170/1340870170]
- make[2]: Leaving directory '/datasets/mlperf-bert/inference/v0.7/language/bert'
- make[1]: Leaving directory '/datasets/mlperf-bert/inference/v0.7/language/bert'
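At this point make setup has pulled the TF checkpoint, the PyTorch weights, and two ONNX exports (FP32 and fake-quant INT8) into build/data/bert_tf_v1_1_large_fp32_384_v2/. A quick, hedged sanity check that the FP32 ONNX export loads and what inputs it expects, using the stock onnxruntime API (the same onnxruntime 1.2.0 is installed inside the container later):

import onnxruntime as ort

# Path taken from the download log above.
sess = ort.InferenceSession("build/data/bert_tf_v1_1_large_fp32_384_v2/model.onnx")

for inp in sess.get_inputs():
    print("input:", inp.name, inp.shape, inp.type)
for out in sess.get_outputs():
    print("output:", out.name, out.shape)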
- anton@velociti:/datasets/mlperf-bert/bert$ cd build/
- anton@velociti:/datasets/mlperf-bert/bert/build$ l
- data/ mlperf.conf result/
- anton@velociti:/datasets/mlperf-bert/bert/build$ du -hs data/
- 7.5G data/
- anton@velociti:/datasets/mlperf-bert/bert/build$ du -hs result/
- 4.0K result/
- anton@velociti:/datasets/mlperf-bert/bert/build$ vim mlperf.conf
- anton@velociti:/datasets/mlperf-bert/bert/build$ cd ..
- anton@velociti:/datasets/mlperf-bert/bert$ ls
- bert_config.json build Dockerfile MLPerf INT8 BERT Finetuning.pdf pytorch_SUT.py run.py squad_QSL.py user.conf
- bert_tf_to_pytorch.py DeepLearningExamples Makefile onnxruntime_SUT.py README.md squad_eval.py tf_SUT.py
- anton@velociti:/datasets/mlperf-bert/bert$ make build_docker
- 19.08-py3: Pulling from nvidia/tensorrtserver
- 7413c47ba209: Pulling fs layer
- 0fe7e7cbb2e8: Pulling fs layer
- 1d425c982345: Pulling fs layer
- 344da5c95cec: Waiting
- ae62549b429d: Waiting
- e275e0ef6c20: Waiting
- 4090c4d315fe: Waiting
- 00a11b299176: Waiting
- 74a29ca83919: Pulling fs layer
- a1abd2d74110: Waiting
- 90d7249fe09b: Waiting
- 5db1b1a35ea4: Pulling fs layer
- b160969adc93: Pull complete
- 0179f14b1047: Pull complete
- a58b5dcd3fa6: Pull complete
- e7af950e37dd: Pull complete
- e880be2d991d: Pull complete
- b7c0ae26dc75: Pull complete
- 423736729fa4: Pull complete
- 9595d4b4fa6d: Pull complete
- d18ab9b3cee4: Pull complete
- d13f74634ff4: Pull complete
- 6465f099eaee: Pull complete
- 1d25a5143caf: Pull complete
- 1488e34e1ef6: Pull complete
- c0b9035f7b0d: Pull complete
- e12a027580b2: Pull complete
- 2195a5a8e51b: Pull complete
- 68d9a4bdc44b: Pull complete
- 79ac09aadede: Pull complete
- 4dfca455860d: Pull complete
- 8031f1622bfe: Pull complete
- d70a9aeed337: Pull complete
- 60cf0e9e8c63: Pull complete
- 8aafc5cadacf: Pull complete
- 6d9af9715b5a: Pull complete
- a55309038303: Pull complete
- Digest: sha256:438b6c2ddfd095faf3453f348c8639ea5be0c28a687a604d6f691f07469076c6
- Status: Downloaded newer image for nvcr.io/nvidia/tensorrtserver:19.08-py3
- nvcr.io/nvidia/tensorrtserver:19.08-py3
- Sending build context to Docker daemon 1.544MB
- Step 1/19 : ARG FROM_IMAGE_NAME=nvcr.io/nvidia/tensorflow:19.10-py3
- Step 2/19 : FROM ${FROM_IMAGE_NAME}
- 19.10-py3: Pulling from nvidia/tensorflow
- 5667fdb72017: Already exists
- d83811f270d5: Already exists
- ee671aafb583: Already exists
- 7fc152dfb3a6: Already exists
- dbc57626691b: Already exists
- e20092842144: Already exists
- d64c76da70d5: Already exists
- 429f0b34bf97: Already exists
- 39d853a0098c: Already exists
- dc9dfc23df66: Already exists
- 1a32524cb863: Already exists
- d3d394313ced: Already exists
- 857b6050fd78: Already exists
- 3a51649b9b50: Already exists
- 885e286ed6cc: Already exists
- 62be33d17790: Already exists
- 6a7d05a28b83: Already exists
- 11ff4c1b1e9b: Already exists
- 252fb308c785: Already exists
- 4749ee710260: Already exists
- 47668c0cb079: Already exists
- 4f9ec6b1521d: Already exists
- 292b425b68e8: Already exists
- 93e46b746825: Already exists
- 9334b2469b1a: Pulling fs layer
- a9d3427ef8f1: Pulling fs layer
- 0a91c68ff9a1: Pulling fs layer
- f5f626660a65: Waiting
- e16685627c50: Waiting
- 99dfb0f50bad: Waiting
- 5e0430538e53: Waiting
- c3c3189112dc: Waiting
- 0a9b551dba88: Waiting
- dd23726b6281: Waiting
- bcf5e26ee78d: Waiting
- 57897222b520: Waiting
- 35cf7ceb758a: Pulling fs layer
- 9fe8816ccbaf: Waiting
- 568529d84601: Waiting
- feac13e821f0: Waiting
- 63bf01aa2e10: Pulling fs layer
- 8237dc0aa519: Waiting
- e08f7bfdba39: Pull complete
- 5dbebdcd9dd4: Pull complete
- 10b59eb6c0e7: Pull complete
- 4321391484e3: Pull complete
- e34075c0c812: Pull complete
- 5c3fd4b3c64e: Pull complete
- 5773b85768cb: Pull complete
- 8a463a607b0d: Pull complete
- 5c41a81fbb6e: Pull complete
- e36b20174218: Pull complete
- d6bb84f1169f: Pull complete
- 6bb825deb8a9: Pull complete
- 01ee13459f0e: Pull complete
- 4fa013da199b: Pull complete
- a62afce2d344: Pull complete
- 480282b992b1: Pull complete
- 2f94baa14ffe: Pull complete
- eb272cd761dd: Pull complete
- a8494a349af6: Pull complete
- a11b5605d0d6: Pull complete
- 7a5d03ee7ce5: Pull complete
- b4c087b35334: Pull complete
- 913dbe33ddb0: Pull complete
- 5cd9e9488409: Pull complete
- b3ef5d01df03: Pull complete
- Digest: sha256:0d35081931b0a77b3688132366d0d28595d937236e7265e1ea26e0e1e66ab349
- Status: Downloaded newer image for nvcr.io/nvidia/tensorflow:19.10-py3
- ---> 2f18fb3723f5
- Step 3/19 : RUN apt-get update && apt-get install -y pbzip2 pv bzip2 libcurl4 curl
- ---> Running in c25ff5836ec8
- Get:1 http://archive.ubuntu.com/ubuntu bionic InRelease [242 kB]
- Get:2 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
- Get:3 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
- Get:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
- Get:5 http://archive.ubuntu.com/ubuntu bionic/multiverse amd64 Packages [186 kB]
- Get:6 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages [11.3 MB]
- Get:7 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [843 kB]
- Get:8 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages [1344 kB]
- Get:9 http://archive.ubuntu.com/ubuntu bionic/restricted amd64 Packages [13.5 kB]
- Get:10 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [19.8 kB]
- Get:11 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [1205 kB]
- Get:12 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [66.8 kB]
- Get:13 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [1376 kB]
- Get:14 http://archive.ubuntu.com/ubuntu bionic-backports/main amd64 Packages [8286 B]
- Get:15 http://archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [7671 B]
- Get:16 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [52.4 kB]
- Get:17 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [8505 B]
- Get:18 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [908 kB]
- Fetched 17.9 MB in 2s (7453 kB/s)
- Reading package lists...
- Reading package lists...
- Building dependency tree...
- Reading state information...
- bzip2 is already the newest version (1.0.6-8.1ubuntu0.2).
- curl is already the newest version (7.58.0-2ubuntu3.8).
- libcurl4 is already the newest version (7.58.0-2ubuntu3.8).
- libcurl4 set to manually installed.
- Suggested packages:
- doc-base
- The following NEW packages will be installed:
- pbzip2 pv
- 0 upgraded, 2 newly installed, 0 to remove and 72 not upgraded.
- Need to get 85.4 kB of archives.
- After this operation, 218 kB of additional disk space will be used.
- Get:1 http://archive.ubuntu.com/ubuntu bionic/universe amd64 pbzip2 amd64 1.1.9-1build1 [37.1 kB]
- Get:2 http://archive.ubuntu.com/ubuntu bionic/main amd64 pv amd64 1.6.6-1 [48.3 kB]
- debconf: unable to initialize frontend: Dialog
- debconf: (TERM is not set, so the dialog frontend is not usable.)
- debconf: falling back to frontend: Readline
- debconf: unable to initialize frontend: Readline
- debconf: (This frontend requires a controlling tty.)
- debconf: falling back to frontend: Teletype
- dpkg-preconfigure: unable to re-open stdin:
- Fetched 85.4 kB in 0s (664 kB/s)
- Selecting previously unselected package pbzip2.
- (Reading database ... 34678 files and directories currently installed.)
- Preparing to unpack .../pbzip2_1.1.9-1build1_amd64.deb ...
- Unpacking pbzip2 (1.1.9-1build1) ...
- Selecting previously unselected package pv.
- Preparing to unpack .../archives/pv_1.6.6-1_amd64.deb ...
- Unpacking pv (1.6.6-1) ...
- Setting up pv (1.6.6-1) ...
- Setting up pbzip2 (1.1.9-1build1) ...
- Removing intermediate container c25ff5836ec8
- ---> 1b75bfee983c
- Step 4/19 : RUN pip install toposort networkx pytest nltk tqdm html2text progressbar
- ---> Running in a2bbc1fb4484
- Collecting toposort
- Downloading https://files.pythonhosted.org/packages/e9/8a/321cd8ea5f4a22a06e3ba30ef31ec33bea11a3443eeb1d89807640ee6ed4/toposort-1.5-py2.py3-none-any.whl
- Collecting networkx
- Downloading https://files.pythonhosted.org/packages/41/8f/dd6a8e85946def36e4f2c69c84219af0fa5e832b018c970e92f2ad337e45/networkx-2.4-py3-none-any.whl (1.6MB)
- Collecting pytest
- Downloading https://files.pythonhosted.org/packages/c7/e2/c19c667f42f72716a7d03e8dd4d6f63f47d39feadd44cc1ee7ca3089862c/pytest-5.4.1-py3-none-any.whl (246kB)
- Requirement already satisfied: nltk in /usr/local/lib/python3.6/dist-packages (3.2.5)
- Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (4.36.1)
- Collecting html2text
- Downloading https://files.pythonhosted.org/packages/ae/88/14655f727f66b3e3199f4467bafcc88283e6c31b562686bf606264e09181/html2text-2020.1.16-py3-none-any.whl
- Collecting progressbar
- Downloading https://files.pythonhosted.org/packages/a3/a6/b8e451f6cff1c99b4747a2f7235aa904d2d49e8e1464e0b798272aa84358/progressbar-2.5.tar.gz
- Requirement already satisfied: decorator>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from networkx) (4.4.0)
- Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.6/dist-packages (from pytest) (19.1.0)
- Collecting pluggy<1.0,>=0.12 (from pytest)
- Downloading https://files.pythonhosted.org/packages/a0/28/85c7aa31b80d150b772fbe4a229487bc6644da9ccb7e427dd8cc60cb8a62/pluggy-0.13.1-py2.py3-none-any.whl
- Collecting more-itertools>=4.0.0 (from pytest)
- Downloading https://files.pythonhosted.org/packages/72/96/4297306cc270eef1e3461da034a3bebe7c84eff052326b130824e98fc3fb/more_itertools-8.2.0-py3-none-any.whl (43kB)
- Collecting packaging (from pytest)
- Downloading https://files.pythonhosted.org/packages/62/0a/34641d2bf5c917c96db0ded85ae4da25b6cd922d6b794648d4e7e07c88e5/packaging-20.3-py2.py3-none-any.whl
- Collecting py>=1.5.0 (from pytest)
- Downloading https://files.pythonhosted.org/packages/99/8d/21e1767c009211a62a8e3067280bfce76e89c9f876180308515942304d2d/py-1.8.1-py2.py3-none-any.whl (83kB)
- Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from pytest) (0.1.7)
- Collecting importlib-metadata>=0.12; python_version < "3.8" (from pytest)
- Downloading https://files.pythonhosted.org/packages/ad/e4/891bfcaf868ccabc619942f27940c77a8a4b45fd8367098955bb7e152fb1/importlib_metadata-1.6.0-py2.py3-none-any.whl
- Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from nltk) (1.12.0)
- Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->pytest) (2.4.2)
- Collecting zipp>=0.5 (from importlib-metadata>=0.12; python_version < "3.8"->pytest)
- Downloading https://files.pythonhosted.org/packages/b2/34/bfcb43cc0ba81f527bc4f40ef41ba2ff4080e047acb0586b56b3d017ace4/zipp-3.1.0-py3-none-any.whl
- Building wheels for collected packages: progressbar
- Building wheel for progressbar (setup.py): started
- Building wheel for progressbar (setup.py): finished with status 'done'
- Created wheel for progressbar: filename=progressbar-2.5-cp36-none-any.whl size=12073 sha256=4fd1797e364c2a30fbf88de45494fbc1b1529776530bbb40c5e9ddc9779e6d17
- Stored in directory: /root/.cache/pip/wheels/c0/e9/6b/ea01090205e285175842339aa3b491adeb4015206cda272ff0
- Successfully built progressbar
- Installing collected packages: toposort, networkx, zipp, importlib-metadata, pluggy, more-itertools, packaging, py, pytest, html2text, progressbar
- Successfully installed html2text-2020.1.16 importlib-metadata-1.6.0 more-itertools-8.2.0 networkx-2.4 packaging-20.3 pluggy-0.13.1 progressbar-2.5 py-1.8.1 pytest-5.4.1 toposort-1.5 zipp-3.1.0
- WARNING: You are using pip version 19.2.3, however version 20.1 is available.
- You should consider upgrading via the 'pip install --upgrade pip' command.
- Removing intermediate container a2bbc1fb4484
- ---> 6c39c4c2bb58
- Step 5/19 : WORKDIR /workspace
- ---> Running in ca05f28bfd56
- Removing intermediate container ca05f28bfd56
- ---> dcd94b9bf861
- Step 6/19 : RUN git clone https://github.com/openai/gradient-checkpointing.git
- ---> Running in 26187d1329f7
- Cloning into 'gradient-checkpointing'...
- Removing intermediate container 26187d1329f7
- ---> 6fd85362ed90
- Step 7/19 : RUN git clone https://github.com/attardi/wikiextractor.git
- ---> Running in 35bea095240a
- Cloning into 'wikiextractor'...
- Removing intermediate container 35bea095240a
- ---> ee0c21bf7627
- Step 8/19 : RUN git clone https://github.com/soskek/bookcorpus.git
- ---> Running in 4f14a7b4bb8a
- Cloning into 'bookcorpus'...
- Removing intermediate container 4f14a7b4bb8a
- ---> 9ddcf0a8e9a8
- Step 9/19 : RUN git clone https://github.com/titipata/pubmed_parser
- ---> Running in 5ab9a66a48f4
- Cloning into 'pubmed_parser'...
- Removing intermediate container 5ab9a66a48f4
- ---> e10cae650300
- Step 10/19 : RUN pip3 install /workspace/pubmed_parser
- ---> Running in 5ea6a9fd6133
- Processing ./pubmed_parser
- Collecting lxml (from pubmed-parser==0.2.2)
- Downloading https://files.pythonhosted.org/packages/dd/ba/a0e6866057fc0bbd17192925c1d63a3b85cf522965de9bc02364d08e5b84/lxml-4.5.0-cp36-cp36m-manylinux1_x86_64.whl (5.8MB)
- Collecting unidecode (from pubmed-parser==0.2.2)
- Downloading https://files.pythonhosted.org/packages/d0/42/d9edfed04228bacea2d824904cae367ee9efd05e6cce7ceaaedd0b0ad964/Unidecode-1.1.1-py2.py3-none-any.whl (238kB)
- Collecting requests (from pubmed-parser==0.2.2)
- Downloading https://files.pythonhosted.org/packages/1a/70/1935c770cb3be6e3a8b78ced23d7e0f3b187f5cbfab4749523ed65d7c9b1/requests-2.23.0-py2.py3-none-any.whl (58kB)
- Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from pubmed-parser==0.2.2) (1.12.0)
- Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from pubmed-parser==0.2.2) (1.14.5)
- Requirement already satisfied: pytest in /usr/local/lib/python3.6/dist-packages (from pubmed-parser==0.2.2) (5.4.1)
- Collecting pytest-cov (from pubmed-parser==0.2.2)
- Downloading https://files.pythonhosted.org/packages/b9/54/3673ee8be482f81527678ac894276223b9814bb7262e4f730469bb7bf70e/pytest_cov-2.8.1-py2.py3-none-any.whl
- Collecting chardet<4,>=3.0.2 (from requests->pubmed-parser==0.2.2)
- Downloading https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl (133kB)
- Collecting idna<3,>=2.5 (from requests->pubmed-parser==0.2.2)
- Downloading https://files.pythonhosted.org/packages/89/e3/afebe61c546d18fb1709a61bee788254b40e736cff7271c7de5de2dc4128/idna-2.9-py2.py3-none-any.whl (58kB)
- Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 (from requests->pubmed-parser==0.2.2)
- Downloading https://files.pythonhosted.org/packages/e1/e5/df302e8017440f111c11cc41a6b432838672f5a70aa29227bf58149dc72f/urllib3-1.25.9-py2.py3-none-any.whl (126kB)
- Collecting certifi>=2017.4.17 (from requests->pubmed-parser==0.2.2)
- Downloading https://files.pythonhosted.org/packages/57/2b/26e37a4b034800c960a00c4e1b3d9ca5d7014e983e6e729e33ea2f36426c/certifi-2020.4.5.1-py2.py3-none-any.whl (157kB)
- Requirement already satisfied: importlib-metadata>=0.12; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from pytest->pubmed-parser==0.2.2) (1.6.0)
- Requirement already satisfied: pluggy<1.0,>=0.12 in /usr/local/lib/python3.6/dist-packages (from pytest->pubmed-parser==0.2.2) (0.13.1)
- Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from pytest->pubmed-parser==0.2.2) (0.1.7)
- Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.6/dist-packages (from pytest->pubmed-parser==0.2.2) (19.1.0)
- Requirement already satisfied: py>=1.5.0 in /usr/local/lib/python3.6/dist-packages (from pytest->pubmed-parser==0.2.2) (1.8.1)
- Requirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.6/dist-packages (from pytest->pubmed-parser==0.2.2) (8.2.0)
- Requirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from pytest->pubmed-parser==0.2.2) (20.3)
- Collecting coverage>=4.4 (from pytest-cov->pubmed-parser==0.2.2)
- Downloading https://files.pythonhosted.org/packages/2a/3e/fc18ecef69f174c13493576f46966053c1da07fd8721962530dc1a10b1ca/coverage-5.1-cp36-cp36m-manylinux1_x86_64.whl (227kB)
- Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata>=0.12; python_version < "3.8"->pytest->pubmed-parser==0.2.2) (3.1.0)
- Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->pytest->pubmed-parser==0.2.2) (2.4.2)
- Building wheels for collected packages: pubmed-parser
- Building wheel for pubmed-parser (setup.py): started
- Building wheel for pubmed-parser (setup.py): finished with status 'done'
- Created wheel for pubmed-parser: filename=pubmed_parser-0.2.2-cp36-none-any.whl size=18164 sha256=2c0168cc7c9233ddc6d8965f40c7a4665d41fc4d79bda01d04fd57862a9ede48
- Stored in directory: /tmp/pip-ephem-wheel-cache-zqs3szp0/wheels/70/0e/94/406257b015fc1ba650bee2b5e3fd979b281504f67008d482f3
- Successfully built pubmed-parser
- Installing collected packages: lxml, unidecode, chardet, idna, urllib3, certifi, requests, coverage, pytest-cov, pubmed-parser
- Successfully installed certifi-2020.4.5.1 chardet-3.0.4 coverage-5.1 idna-2.9 lxml-4.5.0 pubmed-parser-0.2.2 pytest-cov-2.8.1 requests-2.23.0 unidecode-1.1.1 urllib3-1.25.9
- WARNING: You are using pip version 19.2.3, however version 20.1 is available.
- You should consider upgrading via the 'pip install --upgrade pip' command.
- Removing intermediate container 5ea6a9fd6133
- ---> 9ddd63ede005
- Step 11/19 : ARG TRTIS_CLIENTS_URL=https://github.com/NVIDIA/tensorrt-inference-server/releases/download/v1.5.0/v1.5.0_ubuntu1804.clients.tar.gz
- ---> Running in 23b23f074b25
- Removing intermediate container 23b23f074b25
- ---> 8ef021ca840b
- Step 12/19 : RUN mkdir -p /workspace/install && curl -L ${TRTIS_CLIENTS_URL} | tar xvz -C /workspace/install
- ---> Running in 70d395341e91
- % Total % Received % Xferd Average Speed Time Time Time Current
- Dload Upload Total Spent Left Speed
- 100 173 100 173 0 0 542 0 --:--:-- --:--:-- --:--:-- 542
- 100 645 100 645 0 0 907 0 --:--:-- --:--:-- --:--:-- 907
- 0 4081k 0 16957 0 0 12875 0 0:05:24 0:00:01 0:05:23 12875bin/
- bin/ensemble_image_client
- bin/simple_client
- bin/simple_sequence_client
- bin/image_client
- bin/perf_client
- bin/simple_string_client
- bin/simple_callback_client
- include/
- include/model_config.pb.h
- include/request_status.pb.h
- include/api.pb.h
- include/request_http.h
- include/request.h
- include/request_grpc.h
- include/server_status.pb.h
- lib/
- lib/librequest.so
- python/
- python/simple_string_client.py
- python/simple_callback_client.py
- python/simple_sequence_client.py
- python/grpc_image_client.py
- python/tensorrtserver-1.5.0-py2.py3-none-linux_x86_64.whl
- 100 4081k 100 4081k 0 0 2068k 0 0:00:01 0:00:01 --:--:-- 6197k
- python/simple_client.py
- python/image_client.py
- python/ensemble_image_client.py
- Removing intermediate container 70d395341e91
- ---> 41f339d2b113
- Step 13/19 : RUN pip install /workspace/install/python/tensorrtserver*.whl
- ---> Running in 0922ea83f892
- Processing ./install/python/tensorrtserver-1.5.0-py2.py3-none-linux_x86_64.whl
- Requirement already satisfied: protobuf>=3.5.0 in /usr/local/lib/python3.6/dist-packages (from tensorrtserver==1.5.0) (3.9.2)
- Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from tensorrtserver==1.5.0) (0.17.1)
- Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from tensorrtserver==1.5.0) (1.14.5)
- Requirement already satisfied: grpcio in /usr/local/lib/python3.6/dist-packages (from tensorrtserver==1.5.0) (1.24.0)
- Requirement already satisfied: six>=1.9 in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.5.0->tensorrtserver==1.5.0) (1.12.0)
- Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.5.0->tensorrtserver==1.5.0) (41.2.0)
- Installing collected packages: tensorrtserver
- Successfully installed tensorrtserver-1.5.0
- WARNING: You are using pip version 19.2.3, however version 20.1 is available.
- You should consider upgrading via the 'pip install --upgrade pip' command.
- Removing intermediate container 0922ea83f892
- ---> 5439d5041948
- Step 14/19 : WORKDIR /workspace/bert
- ---> Running in cdc10a1c4ccc
- Removing intermediate container cdc10a1c4ccc
- ---> 896a10f80ca3
- Step 15/19 : COPY . .
- ---> 2fc7a82e9b87
- Step 16/19 : ENV PYTHONPATH /workspace/bert
- ---> Running in 09ea32b5d7c0
- Removing intermediate container 09ea32b5d7c0
- ---> 93d6de700a43
- Step 17/19 : ENV BERT_PREP_WORKING_DIR /workspace/bert/data
- ---> Running in 27170a169f51
- Removing intermediate container 27170a169f51
- ---> 27bad162ca42
- Step 18/19 : ENV PATH //workspace/install/bin:${PATH}
- ---> Running in d378567dfb83
- Removing intermediate container d378567dfb83
- ---> 43355dd7e85a
- Step 19/19 : ENV LD_LIBRARY_PATH /workspace/install/lib:${LD_LIBRARY_PATH}
- ---> Running in 494de2d4fd94
- Removing intermediate container 494de2d4fd94
- ---> 54f70b651e42
- Successfully built 54f70b651e42
- Successfully tagged mlperf-inference-bert:latest
- Sending build context to Docker daemon 3.072kB
- Step 1/9 : ARG BASE_IMAGE
- Step 2/9 : FROM ${BASE_IMAGE}
- ---> 54f70b651e42
- Step 3/9 : RUN cd /tmp && git clone https://github.com/mlperf/inference.git && cd inference && git submodule update --init third_party/pybind && cd loadgen && python3 setup.py install && cd /tmp && rm -rf inference
- ---> Running in fb8831a23980
- Cloning into 'inference'...
- Submodule 'third_party/pybind' (https://github.com/pybind/pybind11.git) registered for path 'third_party/pybind'
- Cloning into '/tmp/inference/third_party/pybind'...
- Submodule path 'third_party/pybind': checked out '25abf7efba0b2990f5a6dfb0a31bc65c0f2f4d17'
- running install
- running bdist_egg
- running egg_info
- creating mlperf_loadgen.egg-info
- writing mlperf_loadgen.egg-info/PKG-INFO
- writing dependency_links to mlperf_loadgen.egg-info/dependency_links.txt
- writing top-level names to mlperf_loadgen.egg-info/top_level.txt
- writing manifest file 'mlperf_loadgen.egg-info/SOURCES.txt'
- reading manifest file 'mlperf_loadgen.egg-info/SOURCES.txt'
- writing manifest file 'mlperf_loadgen.egg-info/SOURCES.txt'
- installing library code to build/bdist.linux-x86_64/egg
- running install_lib
- running build_ext
- building 'mlperf_loadgen' extension
- creating build
- creating build/temp.linux-x86_64-3.6
- creating build/temp.linux-x86_64-3.6/bindings
- creating build/temp.linux-x86_64-3.6/generated
- x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DMAJOR_VERSION=0 -DMINOR_VERSION=5 -I. -I../third_party/pybind/include -I/usr/include/python3.6m -c loadgen.cc -o build/temp.linux-x86_64-3.6/loadgen.o
- x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DMAJOR_VERSION=0 -DMINOR_VERSION=5 -I. -I../third_party/pybind/include -I/usr/include/python3.6m -c logging.cc -o build/temp.linux-x86_64-3.6/logging.o
- x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DMAJOR_VERSION=0 -DMINOR_VERSION=5 -I. -I../third_party/pybind/include -I/usr/include/python3.6m -c test_settings_internal.cc -o build/temp.linux-x86_64-3.6/test_settings_internal.o
- x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DMAJOR_VERSION=0 -DMINOR_VERSION=5 -I. -I../third_party/pybind/include -I/usr/include/python3.6m -c utils.cc -o build/temp.linux-x86_64-3.6/utils.o
- x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DMAJOR_VERSION=0 -DMINOR_VERSION=5 -I. -I../third_party/pybind/include -I/usr/include/python3.6m -c version.cc -o build/temp.linux-x86_64-3.6/version.o
- x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DMAJOR_VERSION=0 -DMINOR_VERSION=5 -I. -I../third_party/pybind/include -I/usr/include/python3.6m -c bindings/python_api.cc -o build/temp.linux-x86_64-3.6/bindings/python_api.o
- x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DMAJOR_VERSION=0 -DMINOR_VERSION=5 -I. -I../third_party/pybind/include -I/usr/include/python3.6m -c generated/version_generated.cc -o build/temp.linux-x86_64-3.6/generated/version_generated.o
- creating build/lib.linux-x86_64-3.6
- x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/loadgen.o build/temp.linux-x86_64-3.6/logging.o build/temp.linux-x86_64-3.6/test_settings_internal.o build/temp.linux-x86_64-3.6/utils.o build/temp.linux-x86_64-3.6/version.o build/temp.linux-x86_64-3.6/bindings/python_api.o build/temp.linux-x86_64-3.6/generated/version_generated.o -o build/lib.linux-x86_64-3.6/mlperf_loadgen.cpython-36m-x86_64-linux-gnu.so
- creating build/bdist.linux-x86_64
- creating build/bdist.linux-x86_64/egg
- copying build/lib.linux-x86_64-3.6/mlperf_loadgen.cpython-36m-x86_64-linux-gnu.so -> build/bdist.linux-x86_64/egg
- creating stub loader for mlperf_loadgen.cpython-36m-x86_64-linux-gnu.so
- byte-compiling build/bdist.linux-x86_64/egg/mlperf_loadgen.py to mlperf_loadgen.cpython-36.pyc
- creating build/bdist.linux-x86_64/egg/EGG-INFO
- copying mlperf_loadgen.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
- copying mlperf_loadgen.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
- copying mlperf_loadgen.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
- copying mlperf_loadgen.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
- writing build/bdist.linux-x86_64/egg/EGG-INFO/native_libs.txt
- zip_safe flag not set; analyzing archive contents...
- __pycache__.mlperf_loadgen.cpython-36: module references __file__
- creating dist
- creating 'dist/mlperf_loadgen-0.5a0-py3.6-linux-x86_64.egg' and adding 'build/bdist.linux-x86_64/egg' to it
- removing 'build/bdist.linux-x86_64/egg' (and everything under it)
- Processing mlperf_loadgen-0.5a0-py3.6-linux-x86_64.egg
- creating /usr/local/lib/python3.6/dist-packages/mlperf_loadgen-0.5a0-py3.6-linux-x86_64.egg
- Extracting mlperf_loadgen-0.5a0-py3.6-linux-x86_64.egg to /usr/local/lib/python3.6/dist-packages
- Adding mlperf-loadgen 0.5a0 to easy-install.pth file
- Installed /usr/local/lib/python3.6/dist-packages/mlperf_loadgen-0.5a0-py3.6-linux-x86_64.egg
- Processing dependencies for mlperf-loadgen==0.5a0
- Finished processing dependencies for mlperf-loadgen==0.5a0
- Removing intermediate container fb8831a23980
- ---> b9aa6a3a5bc4
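The mlperf_loadgen 0.5a0 egg installed above is what run.py drives through the *_SUT.py and squad_QSL.py wrappers. As a rough sketch of that version's loadgen Python API (the callback signatures are an assumption here; the authoritative definitions live in loadgen/bindings/python_api.cc):

import array
import mlperf_loadgen as lg

def issue_queries(query_samples):
    # Run inference per sample and report a (placeholder) result back to loadgen.
    responses, buffers = [], []
    for qs in query_samples:
        buf = array.array("B", b"\x00")   # dummy output buffer, kept alive in `buffers`
        buffers.append(buf)
        addr, _ = buf.buffer_info()
        responses.append(lg.QuerySampleResponse(qs.id, addr, len(buf)))
    lg.QuerySamplesComplete(responses)

def flush_queries():
    pass

def process_latencies(latencies_ns):
    pass

def load_samples(indices):
    pass   # a real QSL would load features into RAM here

def unload_samples(indices):
    pass

num_samples = 10570   # placeholder sample count

settings = lg.TestSettings()
settings.scenario = lg.TestScenario.Offline
settings.mode = lg.TestMode.PerformanceOnly

sut = lg.ConstructSUT(issue_queries, flush_queries, process_latencies)
qsl = lg.ConstructQSL(num_samples, num_samples, load_samples, unload_samples)
lg.StartTest(sut, qsl, settings)
lg.DestroyQSL(qsl)
lg.DestroySUT(sut)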
- Step 4/9 : RUN python3 -m pip install torch==1.4.0 onnx==1.6.0 transformers==2.4.0 onnxruntime==1.2.0 numpy==1.18.0
- ---> Running in d197a0f5bba1
- Collecting torch==1.4.0
- Downloading https://files.pythonhosted.org/packages/24/19/4804aea17cd136f1705a5e98a00618cb8f6ccc375ad8bfa437408e09d058/torch-1.4.0-cp36-cp36m-manylinux1_x86_64.whl (753.4MB)
- Collecting onnx==1.6.0
- Downloading https://files.pythonhosted.org/packages/f5/f4/e126b60d109ad1e80020071484b935980b7cce1e4796073aab086a2d6902/onnx-1.6.0-cp36-cp36m-manylinux1_x86_64.whl (4.8MB)
- Collecting transformers==2.4.0
- Downloading https://files.pythonhosted.org/packages/c6/38/c30b6a4b86705311c428a234ef752f6c4c4ffdd75422a829f1f2766136c3/transformers-2.4.0-py3-none-any.whl (475kB)
- Collecting onnxruntime==1.2.0
- Downloading https://files.pythonhosted.org/packages/69/39/404df5ee608c548dacde43a17faf0248b183fa6163cf9c06aca6a511d760/onnxruntime-1.2.0-cp36-cp36m-manylinux1_x86_64.whl (3.7MB)
- Collecting numpy==1.18.0
- Downloading https://files.pythonhosted.org/packages/92/e6/45f71bd24f4e37629e9db5fb75caab919507deae6a5a257f9e4685a5f931/numpy-1.18.0-cp36-cp36m-manylinux1_x86_64.whl (20.1MB)
- Collecting typing-extensions>=3.6.2.1 (from onnx==1.6.0)
- Downloading https://files.pythonhosted.org/packages/0c/0e/3f026d0645d699e7320b59952146d56ad7c374e9cd72cd16e7c74e657a0f/typing_extensions-3.7.4.2-py3-none-any.whl
- Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from onnx==1.6.0) (1.12.0)
- Requirement already satisfied: protobuf in /usr/local/lib/python3.6/dist-packages (from onnx==1.6.0) (3.9.2)
- Collecting regex!=2019.12.17 (from transformers==2.4.0)
- Downloading https://files.pythonhosted.org/packages/ac/46/ba38a04bfe4db6177ea89cde0bb7814ae677eafab8e51335729c5387ecdd/regex-2020.4.4-cp36-cp36m-manylinux2010_x86_64.whl (679kB)
- Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.6/dist-packages (from transformers==2.4.0) (4.36.1)
- Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from transformers==2.4.0) (2.23.0)
- Collecting filelock (from transformers==2.4.0)
- Downloading https://files.pythonhosted.org/packages/93/83/71a2ee6158bb9f39a90c0dea1637f81d5eef866e188e1971a1b1ab01a35a/filelock-3.0.12-py3-none-any.whl
- Requirement already satisfied: sentencepiece in /usr/local/lib/python3.6/dist-packages (from transformers==2.4.0) (0.1.82)
- Collecting tokenizers==0.0.11 (from transformers==2.4.0)
- Downloading https://files.pythonhosted.org/packages/5e/36/7af38d572c935f8e0462ec7b4f7a46d73a2b3b1a938f50a5e8132d5b2dc5/tokenizers-0.0.11-cp36-cp36m-manylinux1_x86_64.whl (3.1MB)
- Collecting sacremoses (from transformers==2.4.0)
- Downloading https://files.pythonhosted.org/packages/99/50/93509f906a40bffd7d175f97fd75ea328ad9bd91f48f59c4bd084c94a25e/sacremoses-0.0.41.tar.gz (883kB)
- Collecting boto3 (from transformers==2.4.0)
- Downloading https://files.pythonhosted.org/packages/52/f6/20eee4b17af26e40a528cd4769d325bc48b0a3e614b6a92cb329417df31c/boto3-1.12.49-py2.py3-none-any.whl (128kB)
- Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf->onnx==1.6.0) (41.2.0)
- Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->transformers==2.4.0) (2020.4.5.1)
- Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->transformers==2.4.0) (1.25.9)
- Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->transformers==2.4.0) (3.0.4)
- Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->transformers==2.4.0) (2.9)
- Collecting click (from sacremoses->transformers==2.4.0)
- Downloading https://files.pythonhosted.org/packages/d2/3d/fa76db83bf75c4f8d338c2fd15c8d33fdd7ad23a9b5e57eb6c5de26b430e/click-7.1.2-py2.py3-none-any.whl (82kB)
- Requirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers==2.4.0) (0.13.2)
- Collecting jmespath<1.0.0,>=0.7.1 (from boto3->transformers==2.4.0)
- Downloading https://files.pythonhosted.org/packages/a3/43/1e939e1fcd87b827fe192d0c9fc25b48c5b3368902bfb913de7754b0dc03/jmespath-0.9.5-py2.py3-none-any.whl
- Collecting botocore<1.16.0,>=1.15.49 (from boto3->transformers==2.4.0)
- Downloading https://files.pythonhosted.org/packages/0f/2c/80605ef2f2800815dd2b0be0000295481b8e965b94541a3ed245c683c6f2/botocore-1.15.49-py2.py3-none-any.whl (6.2MB)
- Collecting s3transfer<0.4.0,>=0.3.0 (from boto3->transformers==2.4.0)
- Downloading https://files.pythonhosted.org/packages/69/79/e6afb3d8b0b4e96cefbdc690f741d7dd24547ff1f94240c997a26fa908d3/s3transfer-0.3.3-py2.py3-none-any.whl (69kB)
- Collecting docutils<0.16,>=0.10 (from botocore<1.16.0,>=1.15.49->boto3->transformers==2.4.0)
- Downloading https://files.pythonhosted.org/packages/22/cd/a6aa959dca619918ccb55023b4cb151949c64d4d5d55b3f4ffd7eee0c6e8/docutils-0.15.2-py3-none-any.whl (547kB)
- Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /usr/local/lib/python3.6/dist-packages (from botocore<1.16.0,>=1.15.49->boto3->transformers==2.4.0) (2.8.0)
- Building wheels for collected packages: sacremoses
- Building wheel for sacremoses (setup.py): started
- Building wheel for sacremoses (setup.py): finished with status 'done'
- Created wheel for sacremoses: filename=sacremoses-0.0.41-cp36-none-any.whl size=893334 sha256=3f7db266227f9b40cf8a05054d8fd4c711c54050f266b9e42e44b0f8d6c319e1
- Stored in directory: /root/.cache/pip/wheels/22/5a/d4/b020a81249de7dc63758a34222feaa668dbe8ebfe9170cc9b1
- Successfully built sacremoses
- Installing collected packages: torch, typing-extensions, numpy, onnx, regex, filelock, tokenizers, click, sacremoses, jmespath, docutils, botocore, s3transfer, boto3, transformers, onnxruntime
- Found existing installation: numpy 1.14.5
- Uninstalling numpy-1.14.5:
- Successfully uninstalled numpy-1.14.5
- Successfully installed boto3-1.12.49 botocore-1.15.49 click-7.1.2 docutils-0.15.2 filelock-3.0.12 jmespath-0.9.5 numpy-1.18.0 onnx-1.6.0 onnxruntime-1.2.0 regex-2020.4.4 s3transfer-0.3.3 sacremoses-0.0.41 tokenizers-0.0.11 torch-1.4.0 transformers-2.4.0 typing-extensions-3.7.4.2
- WARNING: You are using pip version 19.2.3, however version 20.1 is available.
- You should consider upgrading via the 'pip install --upgrade pip' command.
- Removing intermediate container d197a0f5bba1
- ---> 182535c8a9db
- Step 5/9 : ARG GID
- ---> Running in 059e557f9cc7
- Removing intermediate container 059e557f9cc7
- ---> dd6decf92785
- Step 6/9 : ARG UID
- ---> Running in 2b5cce4052d3
- Removing intermediate container 2b5cce4052d3
- ---> 54ce96025122
- Step 7/9 : ARG GROUP
- ---> Running in 2b2c700261e1
- Removing intermediate container 2b2c700261e1
- ---> cc40d1231097
- Step 8/9 : ARG USER
- ---> Running in 5ccbef29685e
- Removing intermediate container 5ccbef29685e
- ---> 411e812cfa95
- Step 9/9 : RUN echo root:root | chpasswd && groupadd -f -g ${GID} ${GROUP} && useradd -G sudo -g ${GID} -u ${UID} -m ${USER} && echo ${USER}:${USER} | chpasswd && echo -e "\nexport PS1=\"(mlperf) \\u@\\h:\\w\\$ \"" | tee -a /home/${USER}/.bashrc && echo -e "\n%sudo ALL=(ALL:ALL) NOPASSWD:ALL\n" | tee -a /etc/sudoers
- ---> Running in 98cab9fa1667
- export PS1="(mlperf) \u@\h:\w\$ "
- %sudo ALL=(ALL:ALL) NOPASSWD:ALL
- Removing intermediate container 98cab9fa1667
- ---> ba8010f97488
- Successfully built ba8010f97488
- Successfully tagged mlperf-inference-bert:latest
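With transformers 2.4.0 baked into the image, WordPiece tokenization against the vocab.txt fetched earlier can be sketched as below. This only illustrates the tokenizer API; the real conversion of question/context pairs into fixed-length 384-token features is done by the reference squad_QSL.py via the DeepLearningExamples helpers:

from transformers import BertTokenizer

# vocab.txt path taken from the make setup download above.
tokenizer = BertTokenizer(
    "build/data/bert_tf_v1_1_large_fp32_384_v2/vocab.txt",
    do_lower_case=True)

question = "Who wrote Hamlet?"   # toy example
context = "Hamlet is a tragedy written by William Shakespeare."

tokens = ["[CLS]"] + tokenizer.tokenize(question) + ["[SEP]"] \
         + tokenizer.tokenize(context) + ["[SEP]"]
input_ids = tokenizer.convert_tokens_to_ids(tokens)
print(len(input_ids), input_ids[:12])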
- anton@velociti:/datasets/mlperf-bert/bert$ ls
- bert_config.json build Dockerfile MLPerf INT8 BERT Finetuning.pdf pytorch_SUT.py run.py squad_QSL.py user.conf
- bert_tf_to_pytorch.py DeepLearningExamples Makefile onnxruntime_SUT.py README.md squad_eval.py tf_SUT.py
- anton@velociti:/datasets/mlperf-bert/bert$ vim README.md
- anton@velociti:/datasets/mlperf-bert/bert$ make launch_docker
- ================
- == TensorFlow ==
- ================
- NVIDIA Release 19.10 (build 8471601)
- TensorFlow Version 1.14.0
- Container image Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
- Copyright 2017-2019 The TensorFlow Authors. All rights reserved.
- Various files include modifications (c) NVIDIA CORPORATION. All rights reserved.
- NVIDIA modifications are covered by the license terms that apply to the underlying project or file.
- NOTE: MOFED driver for multi-node communication was not detected.
- Multi-node communication performance may be reduced.
- (mlperf) anton@mlperf-inference-bert-anton:/workspace$ ^C
- (mlperf) anton@mlperf-inference-bert-anton:/workspace$ python3 run.py --backend=tf --scenario=Offline
- 2020-04-30 11:41:37.317871: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.1
- /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
- _np_qint8 = np.dtype([("qint8", np.int8, 1)])
- /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
- _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
- /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
- _np_qint16 = np.dtype([("qint16", np.int16, 1)])
- /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
- _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
- /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
- _np_qint32 = np.dtype([("qint32", np.int32, 1)])
- /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
- np_resource = np.dtype([("resource", np.ubyte, 1)])
- /usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
- _np_qint8 = np.dtype([("qint8", np.int8, 1)])
- /usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
- _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
- /usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
- _np_qint16 = np.dtype([("qint16", np.int16, 1)])
- /usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
- _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
- /usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
- _np_qint32 = np.dtype([("qint32", np.int32, 1)])
- /usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
- np_resource = np.dtype([("resource", np.ubyte, 1)])
- WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/horovod-0.18.1-py3.6-linux-x86_64.egg/horovod/tensorflow/__init__.py:117: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
- WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/horovod-0.18.1-py3.6-linux-x86_64.egg/horovod/tensorflow/__init__.py:143: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
- Loading TF model...
- WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmphvhdgfd0
- Constructing SUT...
- Creating tokenizer...
- Reading examples...
- WARNING:tensorflow:From /workspace/DeepLearningExamples/TensorFlow/LanguageModeling/BERT/utils/create_squad_data.py:154: The name tf.gfile.Open is deprecated. Please use tf.io.gfile.GFile instead.
- Converting examples to features...
- Constructing QSL...
- Finished constructing QSL.
- Finished constructing SUT.
- Running Loadgen test...
- WARNING:tensorflow:From /workspace/DeepLearningExamples/TensorFlow/LanguageModeling/BERT/modeling.py:176: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.
- WARNING:tensorflow:From /workspace/DeepLearningExamples/TensorFlow/LanguageModeling/BERT/modeling.py:427: The name tf.get_variable is deprecated. Please use tf.compat.v1.get_variable instead.
- WARNING:tensorflow:
- The TensorFlow contrib module will not be included in TensorFlow 2.0.
- For more information, please see:
- * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
- * https://github.com/tensorflow/addons
- * https://github.com/tensorflow/io (for I/O related ops)
- If you depend on functionality not listed there, please file an issue.
- WARNING:tensorflow:From /workspace/DeepLearningExamples/TensorFlow/LanguageModeling/BERT/modeling.py:683: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
- Instructions for updating:
- Use keras.layers.dense instead.
- WARNING:tensorflow:From /workspace/tf_SUT.py:138: The name tf.trainable_variables is deprecated. Please use tf.compat.v1.trainable_variables instead.
- WARNING:tensorflow:From /workspace/tf_SUT.py:143: The name tf.train.init_from_checkpoint is deprecated. Please use tf.compat.v1.train.init_from_checkpoint instead.
- WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py:1354: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
- Instructions for updating:
- Use tf.where in 2.0, which has the same broadcast rule as np.where
- 2020-04-30 11:42:43.462104: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2294835000 Hz
- 2020-04-30 11:42:43.465558: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2270440 executing computations on platform Host. Devices:
- 2020-04-30 11:42:43.465583: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): <undefined>, <undefined>
- 2020-04-30 11:42:43.468060: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
- 2020-04-30 11:42:43.911451: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
- 2020-04-30 11:42:43.912302: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x194df2b0 executing computations on platform CUDA. Devices:
- 2020-04-30 11:42:43.912325: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): GeForce GTX 1080, Compute Capability 6.1
- 2020-04-30 11:42:43.912765: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
- 2020-04-30 11:42:43.913442: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
- name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
- pciBusID: 0000:02:00.0
- 2020-04-30 11:42:43.913493: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.1
- 2020-04-30 11:42:44.556544: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10
- 2020-04-30 11:42:44.901690: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10
- 2020-04-30 11:42:45.008312: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10
- 2020-04-30 11:42:45.742467: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10
- 2020-04-30 11:42:45.776540: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10
- 2020-04-30 11:42:46.953738: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
- 2020-04-30 11:42:46.954059: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
- 2020-04-30 11:42:46.954858: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
- 2020-04-30 11:42:46.955454: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
- 2020-04-30 11:42:46.955510: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.1
- 2020-04-30 11:42:47.383389: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
- 2020-04-30 11:42:47.383437: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
- 2020-04-30 11:42:47.383449: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
- 2020-04-30 11:42:47.383873: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
- 2020-04-30 11:42:47.385120: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
- 2020-04-30 11:42:47.386193: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7533 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:02:00.0, compute capability: 6.1)
- 2020-04-30 11:42:57.015515: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
- 2020-04-30 11:42:58.438500: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10
- 2020-04-30 11:43:01.548433: E tensorflow/stream_executor/cuda/cuda_blas.cc:250] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
- [the line above repeats 145 more times with timestamps through 11:43:01.882681; identical output trimmed]
- 2020-04-30 11:43:01.882717: W tensorflow/stream_executor/stream.cc:1916] attempting to perform BLAS operation using StreamExecutor without BLAS support
- Traceback (most recent call last):
- File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1356, in _do_call
- return fn(*args)
- File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1341, in _run_fn
- options, feed_dict, fetch_list, target_list, run_metadata)
- File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1429, in _call_tf_sessionrun
- run_metadata)
- tensorflow.python.framework.errors_impl.InternalError: 2 root error(s) found.
- (0) Internal: Blas GEMM launch failed : a.shape=(384, 1024), b.shape=(1024, 1024), m=384, n=1024, k=1024
- [[{{node bert/encoder/layer_0/attention/self/value/MatMul}}]]
- [[Reshape_1/_2373]]
- (1) Internal: Blas GEMM launch failed : a.shape=(384, 1024), b.shape=(1024, 1024), m=384, n=1024, k=1024
- [[{{node bert/encoder/layer_0/attention/self/value/MatMul}}]]
- 0 successful operations.
- 0 derived errors ignored.
- During handling of the above exception, another exception occurred:
- Traceback (most recent call last):
- File "run.py", line 89, in <module>
- main()
- File "run.py", line 80, in main
- lg.StartTestWithLogSettings(sut.sut, sut.qsl.qsl, settings, log_settings)
- File "/workspace/tf_SUT.py", line 65, in issue_queries
- for i, result in enumerate(self.estimator.predict(input_fn)):
- File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 637, in predict
- preds_evaluated = mon_sess.run(predictions)
- File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/monitored_session.py", line 754, in run
- run_metadata=run_metadata)
- File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/monitored_session.py", line 1252, in run
- run_metadata=run_metadata)
- File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/monitored_session.py", line 1353, in run
- raise six.reraise(*original_exc_info)
- File "/usr/local/lib/python3.6/dist-packages/six.py", line 693, in reraise
- raise value
- File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/monitored_session.py", line 1338, in run
- return self._sess.run(*args, **kwargs)
- File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/monitored_session.py", line 1411, in run
- run_metadata=run_metadata)
- File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/monitored_session.py", line 1169, in run
- return self._sess.run(*args, **kwargs)
- File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 950, in run
- run_metadata_ptr)
- File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1173, in _run
- feed_dict_tensor, options, run_metadata)
- File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1350, in _do_run
- run_metadata)
- File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1370, in _do_call
- raise type(e)(node_def, op, message)
- tensorflow.python.framework.errors_impl.InternalError: 2 root error(s) found.
- (0) Internal: Blas GEMM launch failed : a.shape=(384, 1024), b.shape=(1024, 1024), m=384, n=1024, k=1024
- [[node bert/encoder/layer_0/attention/self/value/MatMul (defined at /tmp/tmpbh_34cn1.py:61) ]]
- [[Reshape_1/_2373]]
- (1) Internal: Blas GEMM launch failed : a.shape=(384, 1024), b.shape=(1024, 1024), m=384, n=1024, k=1024
- [[node bert/encoder/layer_0/attention/self/value/MatMul (defined at /tmp/tmpbh_34cn1.py:61) ]]
- 0 successful operations.
- 0 derived errors ignored.
- Errors may have originated from an input operation.
- Input Source operations connected to node bert/encoder/layer_0/attention/self/value/MatMul:
- bert/encoder/layer_0/attention/self/value/kernel/read (defined at /workspace/DeepLearningExamples/TensorFlow/LanguageModeling/BERT/modeling.py:699)
- bert/encoder/Reshape_1 (defined at /workspace/DeepLearningExamples/TensorFlow/LanguageModeling/BERT/modeling.py:954)
- Input Source operations connected to node bert/encoder/layer_0/attention/self/value/MatMul:
- bert/encoder/layer_0/attention/self/value/kernel/read (defined at /workspace/DeepLearningExamples/TensorFlow/LanguageModeling/BERT/modeling.py:699)
- bert/encoder/Reshape_1 (defined at /workspace/DeepLearningExamples/TensorFlow/LanguageModeling/BERT/modeling.py:954)
- Original stack trace for 'bert/encoder/layer_0/attention/self/value/MatMul':
- File "run.py", line 89, in <module>
- main()
- File "run.py", line 80, in main
- lg.StartTestWithLogSettings(sut.sut, sut.qsl.qsl, settings, log_settings)
- File "/workspace/tf_SUT.py", line 65, in issue_queries
- for i, result in enumerate(self.estimator.predict(input_fn)):
- File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 619, in predict
- features, None, ModeKeys.PREDICT, self.config)
- File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1146, in _call_model_fn
- model_fn_results = self._model_fn(features=featu
- Segmentation fault (core dumped)
- (mlperf) anton@mlperf-inference-bert-anton:/workspace$
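The repeated CUBLAS_STATUS_NOT_INITIALIZED errors, the "Blas GEMM launch failed" root error and the final segfault usually point at a GPU memory problem rather than a broken CUDA install: TensorFlow 1.x reserves almost the entire 8 GB of the GTX 1080 for its own allocator at startup, so when cuBLAS later tries to create its handle there is no free device memory left for its workspace. The same symptom appears if another process is already holding the GPU (nvidia-smi on the host shows that). One possible workaround is sketched below; it assumes TF 1.14/1.15 and that the Estimator used by tf_SUT.py (visible in the traceback) can be handed a RunConfig, which may not match exactly where this repo constructs it.

    # Hypothetical sketch, not the repo's code: let TensorFlow grow GPU memory
    # on demand instead of grabbing it all, leaving room for cuBLAS/cuDNN handles.
    import tensorflow as tf

    def make_run_config():
        gpu_options = tf.compat.v1.GPUOptions(allow_growth=True)      # allocate only what is needed
        session_config = tf.compat.v1.ConfigProto(gpu_options=gpu_options)
        return tf.estimator.RunConfig(session_config=session_config)  # hand the config to the Estimator

    # estimator = tf.estimator.Estimator(model_fn=model_fn, config=make_run_config())

If memory growth alone is not enough for BERT-Large at sequence length 384 (the a.shape=(384, 1024) GEMMs in the error above) on an 8 GB card, capping per_process_gpu_memory_fraction in the same GPUOptions or reducing the batch size the SUT feeds to the model are the usual next steps.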