lamiastella

Total3DUnderstanding installation log

Dec 9th, 2020 (edited)
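
Summary: fresh clone of Total3DUnderstanding, conda environment created from environment.yml (PyTorch 1.1.0 + cudatoolkit 9.0, Python 3.6), then the demo (python main.py configs/total3d.yaml --mode demo --demo_path demo/inputs/1) fails with a cuBLAS runtime error on a GeForce RTX 2070. Version checks (torch, nvcc, nvidia-smi) follow the traceback.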
  1. (base) mona@mona:~/research$ git clone https://github.com/yinyunie/Total3DUnderstanding.git
  2. Cloning into 'Total3DUnderstanding'...
  3. remote: Enumerating objects: 206, done.
  4. remote: Counting objects: 100% (206/206), done.
  5. remote: Compressing objects: 100% (181/181), done.
  6. remote: Total 206 (delta 31), reused 192 (delta 20), pack-reused 0
  7. Receiving objects: 100% (206/206), 4.23 MiB | 19.59 MiB/s, done.
  8. Resolving deltas: 100% (31/31), done.
  9. (base) mona@mona:~/research$ cd Total3DUnderstanding/
  10. (base) mona@mona:~/research/Total3DUnderstanding$ conda env create -f environment.yml
  11. Collecting package metadata (repodata.json): done
  12. Solving environment: done
  13.  
  14. Downloading and Extracting Packages
  15. numpy-1.18.1 | 5 KB | ################################################# | 100%
  16. torchvision-0.3.0 | 3.7 MB | ################################################# | 100%
  17. cffi-1.14.0 | 225 KB | ################################################# | 100%
  18. cudatoolkit-9.0 | 237.0 MB | ################################################# | 100%
  19. libxml2-2.9.10 | 1.2 MB | ################################################# | 100%
  20. six-1.15.0 | 27 KB | ################################################# | 100%
  21. libnetcdf-4.6.1 | 833 KB | ################################################# | 100%
  22. pillow-7.1.2 | 604 KB | ################################################# | 100%
  23. jsoncpp-1.8.4 | 132 KB | ################################################# | 100%
  24. python-dateutil-2.8. | 215 KB | ################################################# | 100%
  25. ninja-1.9.0 | 1.2 MB | ################################################# | 100%
  26. future-0.18.2 | 639 KB | ################################################# | 100%
  27. hdf4-4.2.13 | 714 KB | ################################################# | 100%
  28. pytorch-1.1.0 | 377.0 MB | ################################################# | 100%
  29. pyyaml-5.3.1 | 180 KB | ################################################# | 100%
  30. mkl_random-1.1.1 | 327 KB | ################################################# | 100%
  31. pandas-1.0.5 | 7.8 MB | ################################################# | 100%
  32. olefile-0.46 | 48 KB | ################################################# | 100%
  33. shapely-1.7.0 | 394 KB | ################################################# | 100%
  34. libcurl-7.69.1 | 431 KB | ################################################# | 100%
  35. vtk-8.2.0 | 28.4 MB | ################################################# | 100%
  36. libogg-1.3.2 | 194 KB | ################################################# | 100%
  37. sqlite-3.31.1 | 1.1 MB | ################################################# | 100%
  38. mkl-service-2.3.0 | 52 KB | ################################################# | 100%
  39. libtheora-1.1.1 | 330 KB | ################################################# | 100%
  40. libvorbis-1.3.6 | 389 KB | ################################################# | 100%
  41. mkl_fft-1.0.15 | 155 KB | ################################################# | 100%
  42. numpy-base-1.18.1 | 4.2 MB | ################################################# | 100%
  43. scipy-1.4.1 | 14.6 MB | ################################################# | 100%
  44. curl-7.69.1 | 137 KB | ################################################# | 100%
  45. setuptools-47.1.1 | 514 KB | ################################################# | 100%
  46. pip-20.0.2 | 1.7 MB | ################################################# | 100%
  47. certifi-2020.6.20 | 155 KB | ################################################# | 100%
  48. Preparing transaction: done
  49. Verifying transaction: done
  50. Executing transaction: done
  51. Installing pip dependencies: / Ran pip subprocess with arguments:
  52. ['/home/mona/anaconda3/envs/Total3D/bin/python', '-m', 'pip', 'install', '-U', '-r', '/home/mona/research/Total3DUnderstanding/condaenv.usbz06he.requirements.txt']
  53. Pip subprocess output:
  54. Collecting cycler==0.10.0
  55. Using cached cycler-0.10.0-py2.py3-none-any.whl (6.5 kB)
  56. Collecting jellyfish==0.8.2
  57. Downloading jellyfish-0.8.2-cp36-cp36m-manylinux2014_x86_64.whl (93 kB)
  58. Collecting kiwisolver==1.2.0
  59. Using cached kiwisolver-1.2.0-cp36-cp36m-manylinux1_x86_64.whl (88 kB)
  60. Collecting matplotlib==3.2.2
  61. Downloading matplotlib-3.2.2-cp36-cp36m-manylinux1_x86_64.whl (12.4 MB)
  62. Collecting opencv-python==4.2.0.34
  63. Downloading opencv_python-4.2.0.34-cp36-cp36m-manylinux1_x86_64.whl (28.2 MB)
  64. Collecting pyparsing==2.4.7
  65. Using cached pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
  66. Collecting seaborn==0.10.1
  67. Using cached seaborn-0.10.1-py3-none-any.whl (215 kB)
  68. Requirement already satisfied, skipping upgrade: six in /home/mona/anaconda3/envs/Total3D/lib/python3.6/site-packages (from cycler==0.10.0->-r /home/mona/research/Total3DUnderstanding/condaenv.usbz06he.requirements.txt (line 1)) (1.15.0)
  69. Requirement already satisfied, skipping upgrade: numpy>=1.11 in /home/mona/anaconda3/envs/Total3D/lib/python3.6/site-packages (from matplotlib==3.2.2->-r /home/mona/research/Total3DUnderstanding/condaenv.usbz06he.requirements.txt (line 4)) (1.18.1)
  70. Requirement already satisfied, skipping upgrade: python-dateutil>=2.1 in /home/mona/anaconda3/envs/Total3D/lib/python3.6/site-packages (from matplotlib==3.2.2->-r /home/mona/research/Total3DUnderstanding/condaenv.usbz06he.requirements.txt (line 4)) (2.8.1)
  71. Requirement already satisfied, skipping upgrade: scipy>=1.0.1 in /home/mona/anaconda3/envs/Total3D/lib/python3.6/site-packages (from seaborn==0.10.1->-r /home/mona/research/Total3DUnderstanding/condaenv.usbz06he.requirements.txt (line 7)) (1.4.1)
  72. Requirement already satisfied, skipping upgrade: pandas>=0.22.0 in /home/mona/anaconda3/envs/Total3D/lib/python3.6/site-packages (from seaborn==0.10.1->-r /home/mona/research/Total3DUnderstanding/condaenv.usbz06he.requirements.txt (line 7)) (1.0.5)
  73. Requirement already satisfied, skipping upgrade: pytz>=2017.2 in /home/mona/anaconda3/envs/Total3D/lib/python3.6/site-packages (from pandas>=0.22.0->seaborn==0.10.1->-r /home/mona/research/Total3DUnderstanding/condaenv.usbz06he.requirements.txt (line 7)) (2020.1)
  74. Installing collected packages: cycler, jellyfish, kiwisolver, pyparsing, matplotlib, opencv-python, seaborn
  75. Successfully installed cycler-0.10.0 jellyfish-0.8.2 kiwisolver-1.2.0 matplotlib-3.2.2 opencv-python-4.2.0.34 pyparsing-2.4.7 seaborn-0.10.1
  76.  
  77. done
  78. #
  79. # To activate this environment, use
  80. #
  81. # $ conda activate Total3D
  82. #
  83. # To deactivate an active environment, use
  84. #
  85. # $ conda deactivate
  86.  
  87. (base) mona@mona:~/research/Total3DUnderstanding$ conda activate Total3D
  88. (Total3D) mona@mona:~/research/Total3DUnderstanding$ python main.py configs/total3d.yaml --mode demo --demo_path demo/inputs/1
  89. Loading configurations.
  90. {'method': 'TOTAL3D', 'resume': False, 'finetune': True, 'weight': ['out/pretrained_models/pretrained_model.pth'], 'seed': 123, 'device': {'use_gpu': True, 'gpu_ids': '0', 'num_workers': 2}, 'data': {'dataset': 'sunrgbd', 'split': 'data/sunrgbd/splits', 'tmn_subnetworks': 2, 'face_samples': 1, 'with_edge_classifier': True}, 'model': {'layout_estimation': {'method': 'PoseNet', 'loss': 'PoseLoss'}, 'object_detection': {'method': 'Bdb3DNet', 'loss': 'DetLoss'}, 'mesh_reconstruction': {'method': 'DensTMNet', 'loss': 'ReconLoss'}}, 'optimizer': {'method': 'Adam', 'lr': '1e-4', 'betas': [0.9, 0.999], 'eps': '1e-08', 'weight_decay': '1e-04'}, 'scheduler': {'patience': 5, 'factor': 0.5, 'threshold': 0.01}, 'train': {'epochs': 400, 'phase': 'joint', 'freeze': ['mesh_reconstruction'], 'batch_size': 2}, 'test': {'phase': 'joint', 'batch_size': 2}, 'demo': {'phase': 'joint'}, 'log': {'vis_path': 'out/total3d/2020-12-09T15:00:36.822598/visualization', 'save_results': True, 'vis_step': 100, 'print_step': 50, 'path': 'out/total3d/2020-12-09T15:00:36.822598'}, 'config': 'configs/total3d.yaml', 'mode': 'demo', 'demo_path': 'demo/inputs/1'}
  91. Data save path: out/total3d/2020-12-09T15:00:36.822598
  92. Loading device settings.
  93. GPU mode is on.
  94. GPU Ids: 0 used.
  95. Loading model.
  96. Downloading: "https://download.pytorch.org/models/resnet34-333f7ec4.pth" to /home/mona/.cache/torch/checkpoints/resnet34-333f7ec4.pth
  97. 100.0%
  98. Downloading: "https://download.pytorch.org/models/resnet18-5c106cde.pth" to /home/mona/.cache/torch/checkpoints/resnet18-5c106cde.pth
  99. 100.0%
  100. TOTAL3D(
  101. (layout_estimation): DataParallel(
  102. (module): PoseNet(
  103. (resnet): ResNet(
  104. (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  105. (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  106. (relu): ReLU(inplace)
  107. (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  108. (layer1): Sequential(
  109. (0): BasicBlock(
  110. (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  111. (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  112. (relu): ReLU(inplace)
  113. (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  114. (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  115. )
  116. (1): BasicBlock(
  117. (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  118. (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  119. (relu): ReLU(inplace)
  120. (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  121. (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  122. )
  123. (2): BasicBlock(
  124. (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  125. (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  126. (relu): ReLU(inplace)
  127. (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  128. (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  129. )
  130. )
  131. (layer2): Sequential(
  132. (0): BasicBlock(
  133. (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
  134. (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  135. (relu): ReLU(inplace)
  136. (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  137. (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  138. (downsample): Sequential(
  139. (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
  140. (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  141. )
  142. )
  143. (1): BasicBlock(
  144. (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  145. (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  146. (relu): ReLU(inplace)
  147. (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  148. (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  149. )
  150. (2): BasicBlock(
  151. (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  152. (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  153. (relu): ReLU(inplace)
  154. (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  155. (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  156. )
  157. (3): BasicBlock(
  158. (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  159. (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  160. (relu): ReLU(inplace)
  161. (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  162. (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  163. )
  164. )
  165. (layer3): Sequential(
  166. (0): BasicBlock(
  167. (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
  168. (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  169. (relu): ReLU(inplace)
  170. (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  171. (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  172. (downsample): Sequential(
  173. (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
  174. (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  175. )
  176. )
  177. (1): BasicBlock(
  178. (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  179. (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  180. (relu): ReLU(inplace)
  181. (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  182. (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  183. )
  184. (2): BasicBlock(
  185. (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  186. (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  187. (relu): ReLU(inplace)
  188. (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  189. (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  190. )
  191. (3): BasicBlock(
  192. (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  193. (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  194. (relu): ReLU(inplace)
  195. (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  196. (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  197. )
  198. (4): BasicBlock(
  199. (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  200. (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  201. (relu): ReLU(inplace)
  202. (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  203. (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  204. )
  205. (5): BasicBlock(
  206. (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  207. (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  208. (relu): ReLU(inplace)
  209. (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  210. (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  211. )
  212. )
  213. (layer4): Sequential(
  214. (0): BasicBlock(
  215. (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
  216. (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  217. (relu): ReLU(inplace)
  218. (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  219. (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  220. (downsample): Sequential(
  221. (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
  222. (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  223. )
  224. )
  225. (1): BasicBlock(
  226. (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  227. (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  228. (relu): ReLU(inplace)
  229. (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  230. (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  231. )
  232. (2): BasicBlock(
  233. (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  234. (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  235. (relu): ReLU(inplace)
  236. (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  237. (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  238. )
  239. )
  240. (avgpool): AvgPool2d(kernel_size=7, stride=1, padding=0)
  241. )
  242. (fc_1): Linear(in_features=2048, out_features=1024, bias=True)
  243. (fc_2): Linear(in_features=1024, out_features=8, bias=True)
  244. (fc_layout): Linear(in_features=2048, out_features=2048, bias=True)
  245. (fc_3): Linear(in_features=2048, out_features=1024, bias=True)
  246. (fc_4): Linear(in_features=1024, out_features=4, bias=True)
  247. (fc_5): Linear(in_features=2048, out_features=1024, bias=True)
  248. (fc_6): Linear(in_features=1024, out_features=6, bias=True)
  249. (relu_1): LeakyReLU(negative_slope=0.2, inplace)
  250. (dropout_1): Dropout(p=0.5)
  251. )
  252. )
  253. (object_detection): Bdb3DNet(
  254. (resnet): DataParallel(
  255. (module): ResNet(
  256. (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  257. (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  258. (relu): ReLU(inplace)
  259. (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  260. (layer1): Sequential(
  261. (0): BasicBlock(
  262. (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  263. (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  264. (relu): ReLU(inplace)
  265. (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  266. (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  267. )
  268. (1): BasicBlock(
  269. (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  270. (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  271. (relu): ReLU(inplace)
  272. (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  273. (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  274. )
  275. (2): BasicBlock(
  276. (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  277. (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  278. (relu): ReLU(inplace)
  279. (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  280. (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  281. )
  282. )
  283. (layer2): Sequential(
  284. (0): BasicBlock(
  285. (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
  286. (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  287. (relu): ReLU(inplace)
  288. (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  289. (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  290. (downsample): Sequential(
  291. (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
  292. (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  293. )
  294. )
  295. (1): BasicBlock(
  296. (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  297. (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  298. (relu): ReLU(inplace)
  299. (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  300. (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  301. )
  302. (2): BasicBlock(
  303. (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  304. (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  305. (relu): ReLU(inplace)
  306. (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  307. (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  308. )
  309. (3): BasicBlock(
  310. (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  311. (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  312. (relu): ReLU(inplace)
  313. (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  314. (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  315. )
  316. )
  317. (layer3): Sequential(
  318. (0): BasicBlock(
  319. (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
  320. (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  321. (relu): ReLU(inplace)
  322. (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  323. (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  324. (downsample): Sequential(
  325. (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
  326. (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  327. )
  328. )
  329. (1): BasicBlock(
  330. (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  331. (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  332. (relu): ReLU(inplace)
  333. (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  334. (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  335. )
  336. (2): BasicBlock(
  337. (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  338. (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  339. (relu): ReLU(inplace)
  340. (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  341. (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  342. )
  343. (3): BasicBlock(
  344. (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  345. (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  346. (relu): ReLU(inplace)
  347. (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  348. (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  349. )
  350. (4): BasicBlock(
  351. (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  352. (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  353. (relu): ReLU(inplace)
  354. (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  355. (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  356. )
  357. (5): BasicBlock(
  358. (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  359. (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  360. (relu): ReLU(inplace)
  361. (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  362. (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  363. )
  364. )
  365. (layer4): Sequential(
  366. (0): BasicBlock(
  367. (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
  368. (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  369. (relu): ReLU(inplace)
  370. (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  371. (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  372. (downsample): Sequential(
  373. (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
  374. (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  375. )
  376. )
  377. (1): BasicBlock(
  378. (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  379. (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  380. (relu): ReLU(inplace)
  381. (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  382. (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  383. )
  384. (2): BasicBlock(
  385. (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  386. (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  387. (relu): ReLU(inplace)
  388. (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  389. (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  390. )
  391. )
  392. (avgpool): AvgPool2d(kernel_size=7, stride=1, padding=0)
  393. )
  394. )
  395. (relnet): RelationNet(
  396. (fc_g): Linear(in_features=64, out_features=16, bias=True)
  397. (threshold): Threshold(threshold=1e-06, value=1e-06)
  398. (softmax): Softmax()
  399. (fc_K): Linear(in_features=2048, out_features=1024, bias=True)
  400. (fc_Q): Linear(in_features=2048, out_features=1024, bias=True)
  401. (conv_s): Conv1d(1, 1, kernel_size=(1,), stride=(1,))
  402. )
  403. (fc1): Linear(in_features=2089, out_features=128, bias=True)
  404. (fc2): Linear(in_features=128, out_features=3, bias=True)
  405. (fc3): Linear(in_features=2089, out_features=128, bias=True)
  406. (fc4): Linear(in_features=128, out_features=12, bias=True)
  407. (fc5): Linear(in_features=2089, out_features=128, bias=True)
  408. (fc_centroid): Linear(in_features=128, out_features=12, bias=True)
  409. (fc_off_1): Linear(in_features=2089, out_features=128, bias=True)
  410. (fc_off_2): Linear(in_features=128, out_features=2, bias=True)
  411. (relu_1): LeakyReLU(negative_slope=0.2)
  412. (dropout_1): Dropout(p=0.5)
  413. )
  414. (mesh_reconstruction): DataParallel(
  415. (module): DensTMNet(
  416. (encoder): ResNet_Full(
  417. (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  418. (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  419. (relu): ReLU(inplace)
  420. (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  421. (layer1): Sequential(
  422. (0): BasicBlock(
  423. (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  424. (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  425. (relu): ReLU(inplace)
  426. (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  427. (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  428. )
  429. (1): BasicBlock(
  430. (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  431. (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  432. (relu): ReLU(inplace)
  433. (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  434. (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  435. )
  436. )
  437. (layer2): Sequential(
  438. (0): BasicBlock(
  439. (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
  440. (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  441. (relu): ReLU(inplace)
  442. (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  443. (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  444. (downsample): Sequential(
  445. (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
  446. (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  447. )
  448. )
  449. (1): BasicBlock(
  450. (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  451. (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  452. (relu): ReLU(inplace)
  453. (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  454. (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  455. )
  456. )
  457. (layer3): Sequential(
  458. (0): BasicBlock(
  459. (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
  460. (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  461. (relu): ReLU(inplace)
  462. (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  463. (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  464. (downsample): Sequential(
  465. (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
  466. (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  467. )
  468. )
  469. (1): BasicBlock(
  470. (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  471. (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  472. (relu): ReLU(inplace)
  473. (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  474. (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  475. )
  476. )
  477. (layer4): Sequential(
  478. (0): BasicBlock(
  479. (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
  480. (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  481. (relu): ReLU(inplace)
  482. (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  483. (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  484. (downsample): Sequential(
  485. (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
  486. (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  487. )
  488. )
  489. (1): BasicBlock(
  490. (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  491. (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  492. (relu): ReLU(inplace)
  493. (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  494. (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  495. )
  496. )
  497. (avgpool): AvgPool2d(kernel_size=7, stride=7, padding=0)
  498. (fc): Linear(in_features=512, out_features=1024, bias=True)
  499. )
  500. (decoders): ModuleList(
  501. (0): PointGenCon(
  502. (conv1): Conv1d(1036, 1036, kernel_size=(1,), stride=(1,))
  503. (conv2): Conv1d(1036, 518, kernel_size=(1,), stride=(1,))
  504. (conv3): Conv1d(518, 259, kernel_size=(1,), stride=(1,))
  505. (conv4): Conv1d(259, 3, kernel_size=(1,), stride=(1,))
  506. (th): Tanh()
  507. (bn1): BatchNorm1d(1036, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  508. (bn2): BatchNorm1d(518, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  509. (bn3): BatchNorm1d(259, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  510. )
  511. (1): PointGenCon(
  512. (conv1): Conv1d(1036, 1036, kernel_size=(1,), stride=(1,))
  513. (conv2): Conv1d(1036, 518, kernel_size=(1,), stride=(1,))
  514. (conv3): Conv1d(518, 259, kernel_size=(1,), stride=(1,))
  515. (conv4): Conv1d(259, 3, kernel_size=(1,), stride=(1,))
  516. (th): Tanh()
  517. (bn1): BatchNorm1d(1036, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  518. (bn2): BatchNorm1d(518, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  519. (bn3): BatchNorm1d(259, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  520. )
  521. )
  522. (error_estimators): ModuleList(
  523. (0): EREstimate(
  524. (conv1): Conv1d(1036, 1036, kernel_size=(1,), stride=(1,))
  525. (conv2): Conv1d(1036, 518, kernel_size=(1,), stride=(1,))
  526. (conv3): Conv1d(518, 259, kernel_size=(1,), stride=(1,))
  527. (conv4): Conv1d(259, 1, kernel_size=(1,), stride=(1,))
  528. (bn1): BatchNorm1d(1036, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  529. (bn2): BatchNorm1d(518, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  530. (bn3): BatchNorm1d(259, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  531. )
  532. )
  533. )
  534. )
  535. )
  536. Begin to finetune from the existing weight.
  537. Loading checkpoint from out/pretrained_models/pretrained_model.pth.
  538. set() subnet missed.
  539. Weights for finetuning loaded.
  540. ----------------------------------------------------------------------------------------------------
  541. Loading data.
  542. Traceback (most recent call last):
  543. File "main.py", line 38, in <module>
  544. demo.run(cfg)
  545. File "/home/mona/research/Total3DUnderstanding/demo.py", line 147, in run
  546. est_data = net(data)
  547. File "/home/mona/anaconda3/envs/Total3D/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
  548. result = self.forward(*input, **kwargs)
  549. File "/home/mona/research/Total3DUnderstanding/models/total3d/modules/network.py", line 67, in forward
  550. lo_centroid_result, lo_coeffs_result = self.layout_estimation(data['image'])
  551. File "/home/mona/anaconda3/envs/Total3D/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
  552. result = self.forward(*input, **kwargs)
  553. File "/home/mona/anaconda3/envs/Total3D/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
  554. return self.module(*inputs[0], **kwargs[0])
  555. File "/home/mona/anaconda3/envs/Total3D/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
  556. result = self.forward(*input, **kwargs)
  557. File "/home/mona/research/Total3DUnderstanding/models/total3d/modules/layout_estimation.py", line 63, in forward
  558. cam = self.fc_1(x)
  559. File "/home/mona/anaconda3/envs/Total3D/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
  560. result = self.forward(*input, **kwargs)
  561. File "/home/mona/anaconda3/envs/Total3D/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 92, in forward
  562. return F.linear(input, self.weight, self.bias)
  563. File "/home/mona/anaconda3/envs/Total3D/lib/python3.6/site-packages/torch/nn/functional.py", line 1406, in linear
  564. ret = torch.addmm(bias, input, weight.t())
  565. RuntimeError: cublas runtime error : the GPU program failed to execute at /opt/conda/conda-bld/pytorch_1556653183467/work/aten/src/THC/THCBlas.cu:259
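
Note: this cublas failure is most likely a CUDA build mismatch rather than a bug in Total3DUnderstanding. The environment pins pytorch=1.1.0 with cudatoolkit=9.0, but the GeForce RTX 2070 is a Turing GPU (compute capability 7.5), and CUDA 9.0 binaries carry no kernels for that architecture, so the first GPU GEMM (torch.addmm above) dies inside cuBLAS. A minimal check of what torch was built against versus what the GPU reports (hypothetical session, not part of the original run):

(Total3D) $ python
>>> import torch
>>> torch.version.cuda                    # CUDA toolkit this torch build was compiled with; expected '9.0.176' here
>>> torch.cuda.get_device_name(0)         # 'GeForce RTX 2070'
>>> torch.cuda.get_device_capability(0)   # (7, 5) = Turing; CUDA 9.0 builds do not target sm_75

If torch.version.cuda reports 9.x while the device capability is (7, 5), this error is expected for essentially any matmul on the GPU.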
  566. (Total3D) mona@mona:~/research/Total3DUnderstanding$ python
  567. Python 3.6.10 |Anaconda, Inc.| (default, May 8 2020, 02:54:21)
  568. [GCC 7.3.0] on linux
  569. Type "help", "copyright", "credits" or "license" for more information.
  570. >>> import torch
  571. >>> torch.cuda.is_available()
  572. True
  573. >>> torch.__version__
  574. '1.1.0'
  575. >>> quit()
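
Note that torch.cuda.is_available() only confirms the driver can see a CUDA device; it says nothing about whether the installed binaries contain kernels for that device. A tiny GPU matmul is a better smoke test and, on this setup, would be expected to reproduce the same cublas error (hypothetical session, not run above):

>>> import torch
>>> a = torch.randn(4, 4).cuda()
>>> b = torch.randn(4, 4).cuda()
>>> torch.mm(a, b)    # expected to raise "cublas runtime error" with a cudatoolkit 9.0 build on an RTX 2070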
  576. (Total3D) mona@mona:~/research/Total3DUnderstanding$ ls
  577. total 104K
  578. drwxrwxr-x 30 mona mona 4.0K Dec 9 14:42 ..
  579. -rw-rw-r-- 1 mona mona 7.6K Dec 9 14:42 README.md
  580. -rw-rw-r-- 1 mona mona 1.1K Dec 9 14:42 LICENSE
  581. drwxrwxr-x 4 mona mona 4.0K Dec 9 14:42 data
  582. -rwxrwxr-x 1 mona mona 9.4K Dec 9 14:42 demo.py
  583. drwxrwxr-x 4 mona mona 4.0K Dec 9 14:42 demo
  584. drwxrwxr-x 4 mona mona 4.0K Dec 9 14:42 external
  585. -rw-rw-r-- 1 mona mona 1.5K Dec 9 14:42 environment.yml
  586. drwxrwxr-x 2 mona mona 4.0K Dec 9 14:42 utils
  587. -rwxrwxr-x 1 mona mona 2.0K Dec 9 14:42 train.py
  588. -rwxrwxr-x 1 mona mona 3.4K Dec 9 14:42 train_epoch.py
  589. -rwxrwxr-x 1 mona mona 1.4K Dec 9 14:42 test.py
  590. -rwxrwxr-x 1 mona mona 1.6K Dec 9 14:42 test_epoch.py
  591. -rw-rw-r-- 1 mona mona 430 Dec 9 14:42 requirements.txt
  592. -rwxrwxr-x 1 mona mona 1.2K Dec 9 14:42 main.py
  593. drwxrwxr-x 8 mona mona 4.0K Dec 9 14:42 .git
  594. drwxrwxr-x 3 mona mona 4.0K Dec 9 15:00 configs
  595. drwxrwxr-x 3 mona mona 4.0K Dec 9 15:00 net_utils
  596. drwxrwxr-x 4 mona mona 4.0K Dec 9 15:00 out
  597. drwxrwxr-x 6 mona mona 4.0K Dec 9 15:00 models
  598. drwxrwxr-x 3 mona mona 4.0K Dec 9 15:00 libs
  599. drwxrwxr-x 2 mona mona 4.0K Dec 9 15:00 __pycache__
  600. drwxrwxr-x 13 mona mona 4.0K Dec 9 15:00 .
  601. (Total3D) mona@mona:~/research/Total3DUnderstanding$ bat environment.yml
  602. ───────┬───────────────────────────────────────────────────────────────────────────────────────────────────────
  603. │ File: environment.yml
  604. ───────┼───────────────────────────────────────────────────────────────────────────────────────────────────────
  605. 1 │ name: Total3D
  606. 2 │ channels:
  607. 3 │ - pytorch
  608. 4 │ - defaults
  609. 5 │ dependencies:
  610. 6 │ - _libgcc_mutex=0.1
  611. 7 │ - blas=1.0
  612. 8 │ - bzip2=1.0.8
  613. 9 │ - ca-certificates=2020.1.1
  614. 10 │ - certifi=2020.6.20
  615. 11 │ - cffi=1.14.0
  616. 12 │ - cudatoolkit=9.0
  617. 13 │ - curl=7.69.1
  618. 14 │ - expat=2.2.6
  619. 15 │ - freetype=2.9.1
  620. 16 │ - future=0.18.2
  621. 17 │ - geos=3.8.0
  622. 18 │ - hdf4=4.2.13
  623. 19 │ - hdf5=1.10.4
  624. 20 │ - icu=58.2
  625. 21 │ - intel-openmp=2020.1
  626. 22 │ - jpeg=9b
  627. 23 │ - jsoncpp=1.8.4
  628. 24 │ - krb5=1.17.1
  629. 25 │ - ld_impl_linux-64=2.33.1
  630. 26 │ - libcurl=7.69.1
  631. 27 │ - libedit=3.1.20181209
  632. 28 │ - libffi=3.3
  633. 29 │ - libgcc-ng=9.1.0
  634. 30 │ - libgfortran-ng=7.3.0
  635. 31 │ - libnetcdf=4.6.1
  636. 32 │ - libogg=1.3.2
  637. 33 │ - libpng=1.6.37
  638. 34 │ - libssh2=1.9.0
  639. 35 │ - libstdcxx-ng=9.1.0
  640. 36 │ - libtheora=1.1.1
  641. 37 │ - libtiff=4.1.0
  642. 38 │ - libvorbis=1.3.6
  643. 39 │ - libxml2=2.9.10
  644. 40 │ - lz4-c=1.8.1.2
  645. 41 │ - mkl=2020.1
  646. 42 │ - mkl-service=2.3.0
  647. 43 │ - mkl_fft=1.0.15
  648. 44 │ - mkl_random=1.1.1
  649. 45 │ - ncurses=6.2
  650. 46 │ - ninja=1.9.0
  651. 47 │ - numpy=1.18.1
  652. 48 │ - numpy-base=1.18.1
  653. 49 │ - olefile=0.46
  654. 50 │ - openssl=1.1.1g
  655. 51 │ - pandas=1.0.5
  656. 52 │ - pillow=7.1.2
  657. 53 │ - pip=20.0.2
  658. 54 │ - pycparser=2.20
  659. 55 │ - python=3.6.10
  660. 56 │ - python-dateutil=2.8.1
  661. 57 │ - pytorch=1.1.0
  662. 58 │ - pytz=2020.1
  663. 59 │ - pyyaml=5.3.1
  664. 60 │ - readline=8.0
  665. 61 │ - scipy=1.4.1
  666. 62 │ - setuptools=47.1.1
  667. 63 │ - shapely=1.7.0
  668. 64 │ - six=1.15.0
  669. 65 │ - sqlite=3.31.1
  670. 66 │ - tbb=2020.0
  671. 67 │ - tk=8.6.8
  672. 68 │ - torchvision=0.3.0
  673. 69 │ - vtk=8.2.0
  674. 70 │ - wheel=0.34.2
  675. 71 │ - xz=5.2.5
  676. 72 │ - yaml=0.1.7
  677. 73 │ - zlib=1.2.11
  678. 74 │ - zstd=1.3.7
  679. 75 │ - pip:
  680. 76 │ - cycler==0.10.0
  681. 77 │ - jellyfish==0.8.2
  682. 78 │ - kiwisolver==1.2.0
  683. 79 │ - matplotlib==3.2.2
  684. 80 │ - opencv-python==4.2.0.34
  685. 81 │ - pyparsing==2.4.7
  686. 82 │ - seaborn==0.10.1
  687. 83 │
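
The cudatoolkit=9.0 pin (line 12 of environment.yml) is the suspect entry for an RTX 2070. One possible fix, assuming the pytorch channel still hosts a CUDA 10.0 build of the same versions, is to swap only the toolkit while keeping pytorch/torchvision pinned (not verified in this log):

(Total3D) $ conda install -n Total3D pytorch=1.1.0 torchvision=0.3.0 cudatoolkit=10.0 -c pytorch

or, equivalently, edit environment.yml to read "- cudatoolkit=10.0" and recreate the environment with conda env create -f environment.yml.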
  688.  
  689.  
  690.  
  691. $ nvcc --version
  692. nvcc: NVIDIA (R) Cuda compiler driver
  693. Copyright (c) 2005-2019 NVIDIA Corporation
  694. Built on Sun_Jul_28_19:07:16_PDT_2019
  695. Cuda compilation tools, release 10.1, V10.1.243
  696.  
  697.  
  698.  
  699. $ nvidia-smi
  700. Wed Dec 9 15:14:27 2020
  701. +-----------------------------------------------------------------------------+
  702. | NVIDIA-SMI 450.80.02 Driver Version: 450.80.02 CUDA Version: 11.0 |
  703. |-------------------------------+----------------------+----------------------+
  704. | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
  705. | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
  706. | | | MIG M. |
  707. |===============================+======================+======================|
  708. | 0 GeForce RTX 2070 Off | 00000000:01:00.0 Off | N/A |
  709. | N/A 49C P8 10W / N/A | 3121MiB / 7982MiB | 11% Default |
  710. | | | N/A |
  711. +-------------------------------+----------------------+----------------------+
  712.  
  713. +-----------------------------------------------------------------------------+
  714. | Processes: |
  715. | GPU GI CI PID Type Process name GPU Memory |
  716. | ID ID Usage |
  717. |=============================================================================|
  718. | 0 N/A N/A 1364 G /usr/lib/xorg/Xorg 816MiB |
  719. | 0 N/A N/A 1797 G /usr/bin/gnome-shell 516MiB |
  720. | 0 N/A N/A 3284 G /usr/lib/firefox/firefox 2MiB |
  721. | 0 N/A N/A 3506 G /usr/lib/firefox/firefox 2MiB |
  722. | 0 N/A N/A 4545 G /usr/lib/firefox/firefox 2MiB |
  723. | 0 N/A N/A 7443 G /usr/lib/firefox/firefox 2MiB |
  724. | 0 N/A N/A 37638 G /usr/lib/firefox/firefox 2MiB |
  725. | 0 N/A N/A 37787 G /usr/lib/firefox/firefox 2MiB |
  726. | 0 N/A N/A 69220 G /usr/lib/firefox/firefox 2MiB |
  727. | 0 N/A N/A 74559 G /usr/lib/firefox/firefox 2MiB |
  728. | 0 N/A N/A 77168 G ...AAAAAAAAA= --shared-files 136MiB |
  729. | 0 N/A N/A 77506 C ...mona/anaconda3/bin/python 1621MiB |
  730. +-----------------------------------------------------------------------------+
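
Three different CUDA versions show up in this log, and they measure different things: the conda env installs cudatoolkit 9.0 (the runtime the prebuilt PyTorch binary actually uses), nvcc reports the system-wide toolkit 10.1 (irrelevant to a prebuilt conda PyTorch; it only matters when compiling extensions), and the "CUDA Version: 11.0" shown by nvidia-smi is just the newest runtime the 450.80.02 driver can support. A quick way to print all three side by side (convenience commands, not part of the original session):

$ python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.get_device_capability(0))"
$ nvcc --version | tail -n 1
$ nvidia-smi --query-gpu=name,driver_version --format=csv,noheader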