2023-03-27 05:19:43.377
Skip rewriting leaf module
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\tracer\acc_tracer\acc_tracer.py:584: UserWarning: acc_tracer does not support currently support models for training. Calling eval on model before tracing.
warnings.warn(
Skip rewriting leaf module
== Log pass before/after graph to C:\Users\Selur\AppData\Local\Temp\tmpjmqoifyb, before/after are the same = False
== Log pass before/after graph to C:\Users\Selur\AppData\Local\Temp\tmpu2_5tp_s, before/after are the same = True
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_319
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_326
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_329
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_331
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_332
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_339
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_342
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_344
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_345
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_352
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_355
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_357
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_358
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_365
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_368
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_370
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_371
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_378
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_381
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_383
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_384
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_391
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_394
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_396
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_397
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_398
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_405
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_408
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_410
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_411
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_418
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_421
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_423
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_424
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_431
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_434
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_436
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_437
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_444
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_447
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_449
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_450
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_457
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_460
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_462
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_463
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_470
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_473
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_475
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_476
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_477
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_484
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_487
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_489
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_490
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_497
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_500
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_502
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_503
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_510
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_513
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_515
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_516
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_523
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_526
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_528
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_529
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_536
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_539
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_541
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_542
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_549
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_552
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_554
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_555
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_556
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_563
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_566
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_568
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_569
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_576
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_579
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_581
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_582
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_589
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_592
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_594
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_595
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_602
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_605
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_607
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_608
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_615
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_618
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_620
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_621
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_628
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_631
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_633
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_634
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_635
Now lowering submodule _run_on_acc_0
split_name=_run_on_acc_0, input_specs=[InputTensorSpec(shape=torch.Size([1, 3, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:19:52.711
TRT INetwork construction elapsed time: 0:00:00.007916
Build TRT engine elapsed time: 0:00:01.467011
Lowering submodule _run_on_acc_0 elapsed time 0:00:06.490400
Now lowering submodule _run_on_acc_2
split_name=_run_on_acc_2, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.002721
2023-03-27 05:20:13.327
Build TRT engine elapsed time: 0:00:19.097724
Lowering submodule _run_on_acc_2 elapsed time 0:00:19.127441
Now lowering submodule _run_on_acc_4
split_name=_run_on_acc_4, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.000997
Build TRT engine elapsed time: 0:00:01.871680
Lowering submodule _run_on_acc_4 elapsed time 0:00:01.900656
Now lowering submodule _run_on_acc_6
split_name=_run_on_acc_6, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.000997
Build TRT engine elapsed time: 0:00:01.830022
Lowering submodule _run_on_acc_6 elapsed time 0:00:01.859147
Now lowering submodule _run_on_acc_8
split_name=_run_on_acc_8, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.003000
2023-03-27 05:20:19.471
Build TRT engine elapsed time: 0:00:02.317220
Lowering submodule _run_on_acc_8 elapsed time 0:00:02.347223
Now lowering submodule _run_on_acc_10
split_name=_run_on_acc_10, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.000999
Build TRT engine elapsed time: 0:00:01.354626
Lowering submodule _run_on_acc_10 elapsed time 0:00:01.382135
Now lowering submodule _run_on_acc_12
split_name=_run_on_acc_12, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.001000
Build TRT engine elapsed time: 0:00:01.378106
Lowering submodule _run_on_acc_12 elapsed time 0:00:01.407108
Now lowering submodule _run_on_acc_14
split_name=_run_on_acc_14, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.000999
Build TRT engine elapsed time: 0:00:01.380241
Lowering submodule _run_on_acc_14 elapsed time 0:00:01.410777
Now lowering submodule _run_on_acc_16
split_name=_run_on_acc_16, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.003001
Build TRT engine elapsed time: 0:00:01.396178
Lowering submodule _run_on_acc_16 elapsed time 0:00:01.426690
Now lowering submodule _run_on_acc_18
split_name=_run_on_acc_18, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 3840]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:20:25.185
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:20:25.191
TRT INetwork construction elapsed time: 0:00:00.010184
2023-03-27 05:20:43.247
Build TRT engine elapsed time: 0:00:18.049577
Lowering submodule _run_on_acc_18 elapsed time 0:00:18.085758
Now lowering submodule _run_on_acc_20
split_name=_run_on_acc_20, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_144 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_145 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_146 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_147 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.033131
Build TRT engine elapsed time: 0:00:00.472221
Lowering submodule _run_on_acc_20 elapsed time 0:00:00.534362
Now lowering submodule _run_on_acc_22
split_name=_run_on_acc_22, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_148 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_149 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.021358
2023-03-27 05:20:48.278
Build TRT engine elapsed time: 0:00:04.422826
Lowering submodule _run_on_acc_22 elapsed time 0:00:04.474202
Now lowering submodule _run_on_acc_24
split_name=_run_on_acc_24, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:20:48.317
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:20:48.335
TRT INetwork construction elapsed time: 0:00:00.019704
2023-03-27 05:20:58.364
Build TRT engine elapsed time: 0:00:10.021728
Lowering submodule _run_on_acc_24 elapsed time 0:00:10.068910
Now lowering submodule _run_on_acc_26
split_name=_run_on_acc_26, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_150 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_151 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_152 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_153 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.031840
Build TRT engine elapsed time: 0:00:00.403447
Lowering submodule _run_on_acc_26 elapsed time 0:00:00.465066
Now lowering submodule _run_on_acc_28
split_name=_run_on_acc_28, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_154 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_155 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.020004
2023-03-27 05:21:03.309
Build TRT engine elapsed time: 0:00:04.404170
Lowering submodule _run_on_acc_28 elapsed time 0:00:04.457413
Now lowering submodule _run_on_acc_30
split_name=_run_on_acc_30, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:21:03.352
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:21:03.370
TRT INetwork construction elapsed time: 0:00:00.019124
2023-03-27 05:21:13.399
Build TRT engine elapsed time: 0:00:10.022734
Lowering submodule _run_on_acc_30 elapsed time 0:00:10.072859
Now lowering submodule _run_on_acc_32
split_name=_run_on_acc_32, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_156 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_157 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_158 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_159 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.033510
Build TRT engine elapsed time: 0:00:00.410851
Lowering submodule _run_on_acc_32 elapsed time 0:00:00.474359
Now lowering submodule _run_on_acc_34
split_name=_run_on_acc_34, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_160 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_161 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.021186
2023-03-27 05:21:18.137
Build TRT engine elapsed time: 0:00:04.182155
Lowering submodule _run_on_acc_34 elapsed time 0:00:04.237904
Now lowering submodule _run_on_acc_36
split_name=_run_on_acc_36, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:21:18.181
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:21:18.199
TRT INetwork construction elapsed time: 0:00:00.018492
2023-03-27 05:21:28.173
Build TRT engine elapsed time: 0:00:09.965197
Lowering submodule _run_on_acc_36 elapsed time 0:00:10.016874
  1059. Now lowering submodule _run_on_acc_38
  1060.  
  1061. Now lowering submodule _run_on_acc_38
  1062.  
  1063. split_name=_run_on_acc_38, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1064.  
  1065. split_name=_run_on_acc_38, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1066.  
  1067. Timing cache is used!
  1068.  
  1069. Timing cache is used!
  1070.  
  1071. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_162 are constant. In this case, please consider constant fold the model first.
  1072. warnings.warn(
  1073.  
  1074. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_162 are constant. In this case, please consider constant fold the model first.
  1075. warnings.warn(
  1076.  
  1077. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_163 are constant. In this case, please consider constant fold the model first.
  1078. warnings.warn(
  1079.  
  1080. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_163 are constant. In this case, please consider constant fold the model first.
  1081. warnings.warn(
  1082.  
  1083. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_164 are constant. In this case, please consider constant fold the model first.
  1084. warnings.warn(
  1085.  
  1089. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_165 are constant. In this case, please consider constant fold the model first.
  1090. warnings.warn(
  1091.  
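These floordiv warnings mean the traced graph still contains integer shape arithmetic whose operands are both compile-time constants; lowering proceeds anyway, but the library suggests constant-folding such subgraphs first. A hedged sketch using torch.fx's experimental const_fold helper (assuming it is available in this PyTorch build; the Toy module is purely illustrative):

    import torch
    from torch.fx.experimental import const_fold

    class Toy(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.scale = torch.nn.Parameter(torch.ones(4))

        def forward(self, x):
            # (self.scale * 2) depends only on parameters, so it can be folded.
            return x * (self.scale * 2)

    traced = torch.fx.symbolic_trace(Toy())
    folded = const_fold.split_const_subgraphs(traced)
    folded.run_folding()  # bakes the constant subgraph's result into an attribute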
  1095. TRT INetwork construction elapsed time: 0:00:00.035506
  1096.  
  1099. Build TRT engine elapsed time: 0:00:00.409342
  1100.  
  1103. Lowering submodule _run_on_acc_38 elapsed time 0:00:00.477063
  1104.  
  1107. Now lowering submodule _run_on_acc_40
  1108.  
  1111. split_name=_run_on_acc_40, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1112.  
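The shapes in these specs are consistent with windowed attention over a 48x80 feature map: 48*80 = 3840 tokens for the [1, 3840, 256] inputs, and with 8x8 windows there are 60 windows of 64 tokens each, matching [60, 64, 256], [60, 64, 64] and the [1, 60, 1, 64, 64] mask. A quick arithmetic check (the 8x8 window size is inferred from the shapes):

    h, w, win = 48, 80, 8
    print(h * w)                    # 3840 tokens
    print((h // win) * (w // win))  # 60 windows
    print(win * win)                # 64 tokens per window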
  1115. Timing cache is used!
  1116.  
  1119. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_166 are constant. In this case, please consider constant fold the model first.
  1120. warnings.warn(
  1121.  
  1125. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_167 are constant. In this case, please consider constant fold the model first.
  1126. warnings.warn(
  1127.  
  1131. TRT INetwork construction elapsed time: 0:00:00.021001
  1132.  
  1135. 2023-03-27 05:21:32.882
  1136.  
  1137. Build TRT engine elapsed time: 0:00:04.148708
  1138.  
  1141. Lowering submodule _run_on_acc_40 elapsed time 0:00:04.206795
  1142.  
  1145. Now lowering submodule _run_on_acc_42
  1146.  
  1149. split_name=_run_on_acc_42, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1150.  
  1153. Timing cache is used!
  1154.  
  1157. 2023-03-27 05:21:32.927
  1158.  
  1159. Unable to find layer norm plugin, fall back to TensorRT implementation.
  1160.  
  1161. Unable to find layer norm plugin, fall back to TensorRT implementation.
  1162.  
  1167. 2023-03-27 05:21:32.946
  1168.  
  1169. TRT INetwork construction elapsed time: 0:00:00.019506
  1170.  
  1173. 2023-03-27 05:21:42.970
  1174.  
  1175. Build TRT engine elapsed time: 0:00:10.017006
  1176.  
  1179. Lowering submodule _run_on_acc_42 elapsed time 0:00:10.069570
  1180.  
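From here on the same three-submodule pattern repeats up through _run_on_acc_84: per block, roughly a 0.4-0.5 s engine for the [1, 48, 80, 256] submodule, a ~4.2 s engine for the windowed-attention submodule and a ~10 s engine for the [1, 3840, 256] submodule, all sharing the timing cache. For context, a hedged sketch of how a model is typically sent down this FX lowering path as a whole (treat the exact signature as an assumption for this torch_tensorrt version; the stand-in model and input shape are illustrative):

    import torch
    import torch_tensorrt
    from torch_tensorrt.fx.utils import LowerPrecision

    model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1)).eval().half().cuda()

    trt_model = torch_tensorrt.fx.compile(
        model,
        [torch.randn(1, 3, 384, 640, dtype=torch.half, device="cuda")],
        explicit_batch_dimension=True,
        lower_precision=LowerPrecision.FP16,
        timing_cache_prefix="timing.cache",  # enables the "Timing cache is used!" path
        save_timing_cache=True,
        verbose_log=False,
    )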
  1183. Now lowering submodule _run_on_acc_44
  1184.  
  1187. split_name=_run_on_acc_44, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1188.  
  1191. Timing cache is used!
  1192.  
  1195. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_168 are constant. In this case, please consider constant fold the model first.
  1196. warnings.warn(
  1197.  
  1201. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_169 are constant. In this case, please consider constant fold the model first.
  1202. warnings.warn(
  1203.  
  1207. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_170 are constant. In this case, please consider constant fold the model first.
  1208. warnings.warn(
  1209.  
  1213. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_171 are constant. In this case, please consider constant fold the model first.
  1214. warnings.warn(
  1215.  
  1219. TRT INetwork construction elapsed time: 0:00:00.034506
  1220.  
  1223. Build TRT engine elapsed time: 0:00:00.427618
  1224.  
  1227. Lowering submodule _run_on_acc_44 elapsed time 0:00:00.496209
  1228.  
  1231. Now lowering submodule _run_on_acc_46
  1232.  
  1235. split_name=_run_on_acc_46, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1236.  
  1239. Timing cache is used!
  1240.  
  1243. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_172 are constant. In this case, please consider constant fold the model first.
  1244. warnings.warn(
  1245.  
  1249. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_173 are constant. In this case, please consider constant fold the model first.
  1250. warnings.warn(
  1251.  
  1255. TRT INetwork construction elapsed time: 0:00:00.021894
  1256.  
  1259. 2023-03-27 05:21:47.799
  1260.  
  1261. Build TRT engine elapsed time: 0:00:04.246928
  1262.  
  1265. Lowering submodule _run_on_acc_46 elapsed time 0:00:04.305896
  1266.  
  1269. Now lowering submodule _run_on_acc_48
  1270.  
  1273. split_name=_run_on_acc_48, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1274.  
  1277. Timing cache is used!
  1278.  
  1281. 2023-03-27 05:21:47.845
  1282.  
  1283. Unable to find layer norm plugin, fall back to TensorRT implementation.
  1284.  
  1285. Unable to find layer norm plugin, fall back to TensorRT implementation.
  1286.  
  1291. 2023-03-27 05:21:47.865
  1292.  
  1293. TRT INetwork construction elapsed time: 0:00:00.020405
  1294.  
  1297. 2023-03-27 05:21:58.114
  1298.  
  1299. Build TRT engine elapsed time: 0:00:10.241350
  1300.  
  1303. Lowering submodule _run_on_acc_48 elapsed time 0:00:10.294995
  1304.  
  1307. Now lowering submodule _run_on_acc_50
  1308.  
  1311. split_name=_run_on_acc_50, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1312.  
  1315. Timing cache is used!
  1316.  
  1319. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_174 are constant. In this case, please consider constant fold the model first.
  1320. warnings.warn(
  1321.  
  1325. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_175 are constant. In this case, please consider constant fold the model first.
  1326. warnings.warn(
  1327.  
  1331. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_176 are constant. In this case, please consider constant fold the model first.
  1332. warnings.warn(
  1333.  
  1337. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_177 are constant. In this case, please consider constant fold the model first.
  1338. warnings.warn(
  1339.  
  1343. TRT INetwork construction elapsed time: 0:00:00.035008
  1344.  
  1347. Build TRT engine elapsed time: 0:00:00.426671
  1348.  
  1351. Lowering submodule _run_on_acc_50 elapsed time 0:00:00.496741
  1352.  
  1355. Now lowering submodule _run_on_acc_52
  1356.  
  1359. split_name=_run_on_acc_52, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1360.  
  1363. Timing cache is used!
  1364.  
  1367. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_178 are constant. In this case, please consider constant fold the model first.
  1368. warnings.warn(
  1369.  
  1373. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_179 are constant. In this case, please consider constant fold the model first.
  1374. warnings.warn(
  1375.  
  1379. TRT INetwork construction elapsed time: 0:00:00.022005
  1380.  
  1383. 2023-03-27 05:22:02.946
  1384.  
  1385. Build TRT engine elapsed time: 0:00:04.249548
  1386.  
  1389. Lowering submodule _run_on_acc_52 elapsed time 0:00:04.307900
  1390.  
  1393. Now lowering submodule _run_on_acc_54
  1394.  
  1397. split_name=_run_on_acc_54, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1398.  
  1401. Timing cache is used!
  1402.  
  1405. 2023-03-27 05:22:02.991
  1406.  
  1407. Unable to find layer norm plugin, fall back to TensorRT implementation.
  1408.  
  1411. 2023-03-27 05:22:03.002
  1412.  
  1413. TRT INetwork construction elapsed time: 0:00:00.011003
  1414.  
  1417. 2023-03-27 05:22:13.196
  1418.  
  1419. Build TRT engine elapsed time: 0:00:10.185831
  1420.  
  1423. Lowering submodule _run_on_acc_54 elapsed time 0:00:10.230870
  1424.  
  1427. Now lowering submodule _run_on_acc_56
  1428.  
  1431. split_name=_run_on_acc_56, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1432.  
  1435. Timing cache is used!
  1436.  
  1439. 2023-03-27 05:22:13.238
  1440.  
  1441. Unable to find layer norm plugin, fall back to TensorRT implementation.
  1442.  
  1445. 2023-03-27 05:22:13.247
  1446.  
  1447. TRT INetwork construction elapsed time: 0:00:00.010002
  1448.  
  1451. 2023-03-27 05:22:16.968
  1452.  
  1453. Build TRT engine elapsed time: 0:00:03.713161
  1454.  
  1457. Lowering submodule _run_on_acc_56 elapsed time 0:00:03.757133
  1458.  
  1461. Now lowering submodule _run_on_acc_58
  1462.  
  1465. split_name=_run_on_acc_58, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1466.  
  1469. Timing cache is used!
  1470.  
  1473. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_180 are constant. In this case, please consider constant fold the model first.
  1474. warnings.warn(
  1475.  
  1479. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_181 are constant. In this case, please consider constant fold the model first.
  1480. warnings.warn(
  1481.  
  1485. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_182 are constant. In this case, please consider constant fold the model first.
  1486. warnings.warn(
  1487.  
  1491. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_183 are constant. In this case, please consider constant fold the model first.
  1492. warnings.warn(
  1493.  
  1497. TRT INetwork construction elapsed time: 0:00:00.036506
  1498.  
  1501. Build TRT engine elapsed time: 0:00:00.424042
  1502.  
  1505. Lowering submodule _run_on_acc_58 elapsed time 0:00:00.494137
  1506.  
  1509. Now lowering submodule _run_on_acc_60
  1510.  
  1513. split_name=_run_on_acc_60, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1514.  
  1517. Timing cache is used!
  1518.  
  1521. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_184 are constant. In this case, please consider constant fold the model first.
  1522. warnings.warn(
  1523.  
  1527. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_185 are constant. In this case, please consider constant fold the model first.
  1528. warnings.warn(
  1529.  
  1533. TRT INetwork construction elapsed time: 0:00:00.021507
  1534.  
  1537. 2023-03-27 05:22:21.761
  1538.  
  1539. Build TRT engine elapsed time: 0:00:04.211298
  1540.  
  1543. Lowering submodule _run_on_acc_60 elapsed time 0:00:04.271451
  1544.  
  1547. Now lowering submodule _run_on_acc_62
  1548.  
  1551. split_name=_run_on_acc_62, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1552.  
  1555. Timing cache is used!
  1556.  
  1559. 2023-03-27 05:22:21.808
  1560.  
  1561. Unable to find layer norm plugin, fall back to TensorRT implementation.
  1562.  
  1563. Unable to find layer norm plugin, fall back to TensorRT implementation.
  1564.  
  1569. 2023-03-27 05:22:21.828
  1570.  
  1571. TRT INetwork construction elapsed time: 0:00:00.020001
  1572.  
  1575. 2023-03-27 05:22:31.858
  1576.  
  1577. Build TRT engine elapsed time: 0:00:10.021450
  1578.  
  1581. Lowering submodule _run_on_acc_62 elapsed time 0:00:10.077401
  1582.  
  1585. Now lowering submodule _run_on_acc_64
  1586.  
  1589. split_name=_run_on_acc_64, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1590.  
  1593. Timing cache is used!
  1594.  
  1597. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_186 are constant. In this case, please consider constant fold the model first.
  1598. warnings.warn(
  1599.  
  1603. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_187 are constant. In this case, please consider constant fold the model first.
  1604. warnings.warn(
  1605.  
  1609. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_188 are constant. In this case, please consider constant fold the model first.
  1610. warnings.warn(
  1611.  
  1615. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_189 are constant. In this case, please consider constant fold the model first.
  1616. warnings.warn(
  1617.  
  1621. TRT INetwork construction elapsed time: 0:00:00.036505
  1622.  
  1625. Build TRT engine elapsed time: 0:00:00.421742
  1626.  
  1629. Lowering submodule _run_on_acc_64 elapsed time 0:00:00.493351
  1630.  
  1633. Now lowering submodule _run_on_acc_66
  1634.  
  1637. split_name=_run_on_acc_66, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1638.  
  1641. Timing cache is used!
  1642.  
  1645. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_190 are constant. In this case, please consider constant fold the model first.
  1646. warnings.warn(
  1647.  
  1651. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_191 are constant. In this case, please consider constant fold the model first.
  1652. warnings.warn(
  1653.  
  1657. TRT INetwork construction elapsed time: 0:00:00.021504
  1658.  
  1661. 2023-03-27 05:22:36.642
  1662.  
  1663. Build TRT engine elapsed time: 0:00:04.201128
  1664.  
  1667. Lowering submodule _run_on_acc_66 elapsed time 0:00:04.260797
  1668.  
  1671. Now lowering submodule _run_on_acc_68
  1672.  
  1675. split_name=_run_on_acc_68, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1676.  
  1679. Timing cache is used!
  1680.  
  1683. 2023-03-27 05:22:36.689
  1684.  
  1685. Unable to find layer norm plugin, fall back to TensorRT implementation.
  1686.  
  1687. Unable to find layer norm plugin, fall back to TensorRT implementation.
  1688.  
  1693. 2023-03-27 05:22:36.708
  1694.  
  1695. TRT INetwork construction elapsed time: 0:00:00.020005
  1696.  
  1699. 2023-03-27 05:22:46.710
  1700.  
  1701. Build TRT engine elapsed time: 0:00:09.994147
  1702.  
  1705. Lowering submodule _run_on_acc_68 elapsed time 0:00:10.048280
  1706.  
  1709. Now lowering submodule _run_on_acc_70
  1710.  
  1713. split_name=_run_on_acc_70, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1714.  
  1717. Timing cache is used!
  1718.  
  1721. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_192 are constant. In this case, please consider constant fold the model first.
  1722. warnings.warn(
  1723.  
  1727. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_193 are constant. In this case, please consider constant fold the model first.
  1728. warnings.warn(
  1729.  
  1733. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_194 are constant. In this case, please consider constant fold the model first.
  1734. warnings.warn(
  1735.  
  1739. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_195 are constant. In this case, please consider constant fold the model first.
  1740. warnings.warn(
  1741.  
  1745. TRT INetwork construction elapsed time: 0:00:00.037005
  1746.  
  1749. Build TRT engine elapsed time: 0:00:00.435881
  1750.  
  1753. Lowering submodule _run_on_acc_70 elapsed time 0:00:00.507967
  1754.  
  1757. Now lowering submodule _run_on_acc_72
  1758.  
  1761. split_name=_run_on_acc_72, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1762.  
  1765. Timing cache is used!
  1766.  
  1769. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_196 are constant. In this case, please consider constant fold the model first.
  1770. warnings.warn(
  1771.  
  1775. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_197 are constant. In this case, please consider constant fold the model first.
  1776. warnings.warn(
  1777.  
  1781. TRT INetwork construction elapsed time: 0:00:00.022005
  1782.  
  1785. 2023-03-27 05:22:51.571
  1786.  
  1787. Build TRT engine elapsed time: 0:00:04.264076
  1788.  
  1791. Lowering submodule _run_on_acc_72 elapsed time 0:00:04.324360
  1792.  
  1795. Now lowering submodule _run_on_acc_74
  1796.  
  1799. split_name=_run_on_acc_74, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1800.  
  1803. Timing cache is used!
  1804.  
  1807. 2023-03-27 05:22:51.618
  1808.  
  1809. Unable to find layer norm plugin, fall back to TensorRT implementation.
  1810.  
  1811. Unable to find layer norm plugin, fall back to TensorRT implementation.
  1812.  
  1817. 2023-03-27 05:22:51.637
  1818.  
  1819. TRT INetwork construction elapsed time: 0:00:00.020490
  1820.  
  1823. 2023-03-27 05:23:01.792
  1824.  
  1825. Build TRT engine elapsed time: 0:00:10.146359
  1826.  
  1829. Lowering submodule _run_on_acc_74 elapsed time 0:00:10.201391
  1830.  
  1833. Now lowering submodule _run_on_acc_76
  1834.  
  1837. split_name=_run_on_acc_76, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1838.  
  1841. Timing cache is used!
  1842.  
  1845. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_198 are constant. In this case, please consider constant fold the model first.
  1846. warnings.warn(
  1847.  
  1851. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_199 are constant. In this case, please consider constant fold the model first.
  1852. warnings.warn(
  1853.  
  1857. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_200 are constant. In this case, please consider constant fold the model first.
  1858. warnings.warn(
  1859.  
  1863. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_201 are constant. In this case, please consider constant fold the model first.
  1864. warnings.warn(
  1865.  
  1869. TRT INetwork construction elapsed time: 0:00:00.037009
  1870.  
  1873. Build TRT engine elapsed time: 0:00:00.419433
  1874.  
  1877. Lowering submodule _run_on_acc_76 elapsed time 0:00:00.491929
  1878.  
  1881. Now lowering submodule _run_on_acc_78
  1882.  
  1885. split_name=_run_on_acc_78, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1886.  
  1889. Timing cache is used!
  1890.  
  1893. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_202 are constant. In this case, please consider constant fold the model first.
  1894. warnings.warn(
  1895.  
  1899. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_203 are constant. In this case, please consider constant fold the model first.
  1900. warnings.warn(
  1901.  
  1905. TRT INetwork construction elapsed time: 0:00:00.022006
  1906.  
  1909. 2023-03-27 05:23:06.598
  1910.  
  1911. Build TRT engine elapsed time: 0:00:04.224634
  1912.  
  1915. Lowering submodule _run_on_acc_78 elapsed time 0:00:04.284197
  1916.  
  1919. Now lowering submodule _run_on_acc_80
  1920.  
  1923. split_name=_run_on_acc_80, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1924.  
  1927. Timing cache is used!
  1928.  
  1931. 2023-03-27 05:23:06.644
  1932.  
  1933. Unable to find layer norm plugin, fall back to TensorRT implementation.
  1934.  
  1935. Unable to find layer norm plugin, fall back to TensorRT implementation.
  1936.  
  1941. 2023-03-27 05:23:06.664
  1942.  
  1943. TRT INetwork construction elapsed time: 0:00:00.019939
  1944.  
  1947. 2023-03-27 05:23:16.660
  1948.  
  1949. Build TRT engine elapsed time: 0:00:09.987888
  1950.  
  1953. Lowering submodule _run_on_acc_80 elapsed time 0:00:10.041882
  1954.  
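Each "Lowering submodule ... elapsed time" figure is essentially the INetwork construction time plus the engine build time plus a little splitter bookkeeping; for _run_on_acc_80 just above:

    construct, build, total = 0.019939, 9.987888, 10.041882
    print(total - (construct + build))  # ~0.034 s of per-submodule overhead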
  1957. Now lowering submodule _run_on_acc_82
  1958.  
  1961. split_name=_run_on_acc_82, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  1962.  
  1965. Timing cache is used!
  1966.  
  1967. Timing cache is used!
  1968.  
  1969. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_204 are constant. In this case, please consider constant fold the model first.
  1970. warnings.warn(
  1971.  
  1972. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_204 are constant. In this case, please consider constant fold the model first.
  1973. warnings.warn(
  1974.  
  1975. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_205 are constant. In this case, please consider constant fold the model first.
  1976. warnings.warn(
  1977.  
  1978. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_205 are constant. In this case, please consider constant fold the model first.
  1979. warnings.warn(
  1980.  
  1981. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_206 are constant. In this case, please consider constant fold the model first.
  1982. warnings.warn(
  1983.  
  1984. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_206 are constant. In this case, please consider constant fold the model first.
  1985. warnings.warn(
  1986.  
  1987. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_207 are constant. In this case, please consider constant fold the model first.
  1988. warnings.warn(
  1989.  
  1990. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_207 are constant. In this case, please consider constant fold the model first.
  1991. warnings.warn(
  1992.  
  1993. TRT INetwork construction elapsed time: 0:00:00.036506
  1994.  
  1995. TRT INetwork construction elapsed time: 0:00:00.036506
  1996.  
  1997. Build TRT engine elapsed time: 0:00:00.433769
  1998.  
  1999. Build TRT engine elapsed time: 0:00:00.433769
  2000.  
  2001. Lowering submodule _run_on_acc_82 elapsed time 0:00:00.504658
  2002.  
  2003. Lowering submodule _run_on_acc_82 elapsed time 0:00:00.504658
  2004.  
Now lowering submodule _run_on_acc_84
split_name=_run_on_acc_84, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_208 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_209 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.022474
2023-03-27 05:23:21.530
Build TRT engine elapsed time: 0:00:04.274198
Lowering submodule _run_on_acc_84 elapsed time 0:00:04.336163
Now lowering submodule _run_on_acc_86
split_name=_run_on_acc_86, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:23:21.577
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:23:21.597
TRT INetwork construction elapsed time: 0:00:00.020005
2023-03-27 05:23:31.919
Build TRT engine elapsed time: 0:00:10.314745
Lowering submodule _run_on_acc_86 elapsed time 0:00:10.369284
Now lowering submodule _run_on_acc_88
split_name=_run_on_acc_88, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_210 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_211 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_212 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_213 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.038002
Build TRT engine elapsed time: 0:00:00.421442
Lowering submodule _run_on_acc_88 elapsed time 0:00:00.493725
Now lowering submodule _run_on_acc_90
split_name=_run_on_acc_90, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_214 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_215 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.022619
2023-03-27 05:23:36.797
Build TRT engine elapsed time: 0:00:04.292359
Lowering submodule _run_on_acc_90 elapsed time 0:00:04.353155
Now lowering submodule _run_on_acc_92
split_name=_run_on_acc_92, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:23:36.843
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:23:36.855
TRT INetwork construction elapsed time: 0:00:00.012449
2023-03-27 05:23:47.226
Build TRT engine elapsed time: 0:00:10.362828
Lowering submodule _run_on_acc_92 elapsed time 0:00:10.409648
Now lowering submodule _run_on_acc_94
split_name=_run_on_acc_94, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:23:47.270
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:23:47.280
TRT INetwork construction elapsed time: 0:00:00.012002
2023-03-27 05:23:51.052
Build TRT engine elapsed time: 0:00:03.763718
Lowering submodule _run_on_acc_94 elapsed time 0:00:03.808704
Now lowering submodule _run_on_acc_96
split_name=_run_on_acc_96, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_216 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_217 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_218 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_219 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.037003
Build TRT engine elapsed time: 0:00:00.428686
Lowering submodule _run_on_acc_96 elapsed time 0:00:00.500206
Now lowering submodule _run_on_acc_98
split_name=_run_on_acc_98, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_220 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_221 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.021504
2023-03-27 05:23:55.944
Build TRT engine elapsed time: 0:00:04.304132
Lowering submodule _run_on_acc_98 elapsed time 0:00:04.363832
Now lowering submodule _run_on_acc_100
split_name=_run_on_acc_100, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:23:55.990
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:23:56.010
TRT INetwork construction elapsed time: 0:00:00.019997
2023-03-27 05:24:06.075
Build TRT engine elapsed time: 0:00:10.056246
Lowering submodule _run_on_acc_100 elapsed time 0:00:10.110658
Now lowering submodule _run_on_acc_102
split_name=_run_on_acc_102, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_222 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_223 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_224 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_225 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.036510
Build TRT engine elapsed time: 0:00:00.433247
Lowering submodule _run_on_acc_102 elapsed time 0:00:00.505180
Now lowering submodule _run_on_acc_104
split_name=_run_on_acc_104, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_226 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_227 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.022015
2023-03-27 05:24:10.928
Build TRT engine elapsed time: 0:00:04.258981
Lowering submodule _run_on_acc_104 elapsed time 0:00:04.319536
Now lowering submodule _run_on_acc_106
split_name=_run_on_acc_106, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:24:10.974
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:24:10.994
TRT INetwork construction elapsed time: 0:00:00.020005
2023-03-27 05:24:21.030
Build TRT engine elapsed time: 0:00:10.027874
Lowering submodule _run_on_acc_106 elapsed time 0:00:10.082504
Now lowering submodule _run_on_acc_108
split_name=_run_on_acc_108, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_228 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_229 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_230 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_231 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.037008
Build TRT engine elapsed time: 0:00:00.423912
Lowering submodule _run_on_acc_108 elapsed time 0:00:00.495086
Now lowering submodule _run_on_acc_110
split_name=_run_on_acc_110, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_232 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_233 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.021300
2023-03-27 05:24:25.911
Build TRT engine elapsed time: 0:00:04.294626
Lowering submodule _run_on_acc_110 elapsed time 0:00:04.355888
Now lowering submodule _run_on_acc_112
split_name=_run_on_acc_112, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:24:25.958
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:24:25.978
TRT INetwork construction elapsed time: 0:00:00.021005
2023-03-27 05:24:36.012
Build TRT engine elapsed time: 0:00:10.025766
Lowering submodule _run_on_acc_112 elapsed time 0:00:10.080759
Now lowering submodule _run_on_acc_114
split_name=_run_on_acc_114, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_234 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_235 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_236 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_237 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.037007
Build TRT engine elapsed time: 0:00:00.416450
Lowering submodule _run_on_acc_114 elapsed time 0:00:00.487694
Now lowering submodule _run_on_acc_116
split_name=_run_on_acc_116, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_238 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_239 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.022006
2023-03-27 05:24:40.836
Build TRT engine elapsed time: 0:00:04.247610
Lowering submodule _run_on_acc_116 elapsed time 0:00:04.308578
Now lowering submodule _run_on_acc_118
split_name=_run_on_acc_118, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:24:40.884
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:24:40.904
TRT INetwork construction elapsed time: 0:00:00.020048
2023-03-27 05:24:50.877
Build TRT engine elapsed time: 0:00:09.964763
Lowering submodule _run_on_acc_118 elapsed time 0:00:10.021839
Now lowering submodule _run_on_acc_120
split_name=_run_on_acc_120, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_240 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_241 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_242 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_243 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.037499
Build TRT engine elapsed time: 0:00:00.431422
Lowering submodule _run_on_acc_120 elapsed time 0:00:00.503167
Now lowering submodule _run_on_acc_122
split_name=_run_on_acc_122, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_244 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_245 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.022005
2023-03-27 05:24:55.738
Build TRT engine elapsed time: 0:00:04.267363
Lowering submodule _run_on_acc_122 elapsed time 0:00:04.327424
Now lowering submodule _run_on_acc_124
split_name=_run_on_acc_124, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:24:55.784
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:24:55.804
TRT INetwork construction elapsed time: 0:00:00.020000
2023-03-27 05:25:05.842
Build TRT engine elapsed time: 0:00:10.029499
Lowering submodule _run_on_acc_124 elapsed time 0:00:10.084028
Now lowering submodule _run_on_acc_126
split_name=_run_on_acc_126, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_246 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_247 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_248 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_249 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.037984
Build TRT engine elapsed time: 0:00:00.428732
Lowering submodule _run_on_acc_126 elapsed time 0:00:00.502204

Now lowering submodule _run_on_acc_128
split_name=_run_on_acc_128, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_250 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_251 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.022002
2023-03-27 05:25:10.696
Build TRT engine elapsed time: 0:00:04.261908
Lowering submodule _run_on_acc_128 elapsed time 0:00:04.321579

Now lowering submodule _run_on_acc_130
split_name=_run_on_acc_130, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:25:10.742
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:25:10.753
TRT INetwork construction elapsed time: 0:00:00.012003
2023-03-27 05:25:20.927
Build TRT engine elapsed time: 0:00:10.165734
Lowering submodule _run_on_acc_130 elapsed time 0:00:10.211746

Now lowering submodule _run_on_acc_132
split_name=_run_on_acc_132, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:25:20.972
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:25:20.984
TRT INetwork construction elapsed time: 0:00:00.013004
2023-03-27 05:25:24.712
Build TRT engine elapsed time: 0:00:03.719135
Lowering submodule _run_on_acc_132 elapsed time 0:00:03.767137

Now lowering submodule _run_on_acc_134
split_name=_run_on_acc_134, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_252 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_253 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_254 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_255 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.038008
Build TRT engine elapsed time: 0:00:00.443596
Lowering submodule _run_on_acc_134 elapsed time 0:00:00.517405

Now lowering submodule _run_on_acc_136
split_name=_run_on_acc_136, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_256 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_257 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.022005
2023-03-27 05:25:29.560
Build TRT engine elapsed time: 0:00:04.241779
Lowering submodule _run_on_acc_136 elapsed time 0:00:04.302936

Now lowering submodule _run_on_acc_138
split_name=_run_on_acc_138, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:25:29.607
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:25:29.627
TRT INetwork construction elapsed time: 0:00:00.021001
2023-03-27 05:25:39.760
Build TRT engine elapsed time: 0:00:10.125096
Lowering submodule _run_on_acc_138 elapsed time 0:00:10.180046

Now lowering submodule _run_on_acc_140
split_name=_run_on_acc_140, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_258 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_259 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_260 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_261 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.036506
Build TRT engine elapsed time: 0:00:00.428635
Lowering submodule _run_on_acc_140 elapsed time 0:00:00.501459

Now lowering submodule _run_on_acc_142
split_name=_run_on_acc_142, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_262 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_263 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.022298
2023-03-27 05:25:44.719
Build TRT engine elapsed time: 0:00:04.366112
Lowering submodule _run_on_acc_142 elapsed time 0:00:04.427833

Now lowering submodule _run_on_acc_144
split_name=_run_on_acc_144, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:25:44.767
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:25:44.787
TRT INetwork construction elapsed time: 0:00:00.020982
2023-03-27 05:25:54.843
Build TRT engine elapsed time: 0:00:10.047955
Lowering submodule _run_on_acc_144 elapsed time 0:00:10.106318

Now lowering submodule _run_on_acc_146
split_name=_run_on_acc_146, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_264 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_265 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_266 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_267 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.037005
Build TRT engine elapsed time: 0:00:00.431744
Lowering submodule _run_on_acc_146 elapsed time 0:00:00.503690

Now lowering submodule _run_on_acc_148
split_name=_run_on_acc_148, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_268 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_269 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.022503
2023-03-27 05:25:59.737
Build TRT engine elapsed time: 0:00:04.296204
Lowering submodule _run_on_acc_148 elapsed time 0:00:04.356075

Now lowering submodule _run_on_acc_150
split_name=_run_on_acc_150, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:25:59.783
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:25:59.803
TRT INetwork construction elapsed time: 0:00:00.021011
2023-03-27 05:26:09.847
Build TRT engine elapsed time: 0:00:10.035541
Lowering submodule _run_on_acc_150 elapsed time 0:00:10.092231

Now lowering submodule _run_on_acc_152
split_name=_run_on_acc_152, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_270 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_271 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_272 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_273 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.038792
Build TRT engine elapsed time: 0:00:00.436945
Lowering submodule _run_on_acc_152 elapsed time 0:00:00.511085

Now lowering submodule _run_on_acc_154
split_name=_run_on_acc_154, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_274 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_275 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.023795
2023-03-27 05:26:14.778
Build TRT engine elapsed time: 0:00:04.324144
Lowering submodule _run_on_acc_154 elapsed time 0:00:04.387566

Now lowering submodule _run_on_acc_156
split_name=_run_on_acc_156, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:26:14.825
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:26:14.845
TRT INetwork construction elapsed time: 0:00:00.020507
2023-03-27 05:26:24.844
Build TRT engine elapsed time: 0:00:09.991012
Lowering submodule _run_on_acc_156 elapsed time 0:00:10.049578

Now lowering submodule _run_on_acc_158
split_name=_run_on_acc_158, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_276 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_277 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_278 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_279 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.037005
Build TRT engine elapsed time: 0:00:00.423171
Lowering submodule _run_on_acc_158 elapsed time 0:00:00.495677

Now lowering submodule _run_on_acc_160
split_name=_run_on_acc_160, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_280 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_281 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.021485
2023-03-27 05:26:29.713
Build TRT engine elapsed time: 0:00:04.278824
Lowering submodule _run_on_acc_160 elapsed time 0:00:04.340067

Now lowering submodule _run_on_acc_162
split_name=_run_on_acc_162, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:26:29.760
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:26:29.780
TRT INetwork construction elapsed time: 0:00:00.020991
2023-03-27 05:26:39.806
Build TRT engine elapsed time: 0:00:10.018291
Lowering submodule _run_on_acc_162 elapsed time 0:00:10.073605

Now lowering submodule _run_on_acc_164
split_name=_run_on_acc_164, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_282 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_283 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_284 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_285 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.038008
Build TRT engine elapsed time: 0:00:00.424398
Lowering submodule _run_on_acc_164 elapsed time 0:00:00.496390

  3677. Now lowering submodule _run_on_acc_166
  3678.  
  3679. Now lowering submodule _run_on_acc_166
  3680.  
  3681. split_name=_run_on_acc_166, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  3682.  
  3683. split_name=_run_on_acc_166, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  3684.  
  3685. Timing cache is used!
  3686.  
  3687. Timing cache is used!
  3688.  
  3689. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_286 are constant. In this case, please consider constant fold the model first.
  3690. warnings.warn(
  3691.  
  3695. I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_287 are constant. In this case, please consider constant fold the model first.
  3696. warnings.warn(
  3697.  
  3701. TRT INetwork construction elapsed time: 0:00:00.023005
  3702.  
  3705. 2023-03-27 05:26:44.701
  3706.  
  3707. Build TRT engine elapsed time: 0:00:04.307120
  3708.  
  3711. Lowering submodule _run_on_acc_166 elapsed time 0:00:04.368131
  3712.  
  3715. Now lowering submodule _run_on_acc_168
  3716.  
  3719. split_name=_run_on_acc_168, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  3720.  
  3723. Timing cache is used!
  3724.  
  3727. 2023-03-27 05:26:44.749
  3728.  
  3729. Unable to find layer norm plugin, fall back to TensorRT implementation.
  3730.  
  3733. 2023-03-27 05:26:44.760
  3734.  
  3735. TRT INetwork construction elapsed time: 0:00:00.012003
  3736.  
  3739. 2023-03-27 05:26:54.941
  3740.  
  3741. Build TRT engine elapsed time: 0:00:10.173015
  3742.  
  3745. Lowering submodule _run_on_acc_168 elapsed time 0:00:10.220081
  3746.  
  3749. Now lowering submodule _run_on_acc_170
  3750.  
  3753. split_name=_run_on_acc_170, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  3754.  
  3757. Timing cache is used!
  3758.  
  3761. TRT INetwork construction elapsed time: 0:00:00.000999
  3762.  
  3765. 2023-03-27 05:26:57.108
  3766.  
  3767. Build TRT engine elapsed time: 0:00:02.113329
  3768.  
  3771. Lowering submodule _run_on_acc_170 elapsed time 0:00:02.149360
  3772.  
  3775. Now lowering submodule _run_on_acc_172
  3776.  
  3779. split_name=_run_on_acc_172, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  3780.  
  3783. Timing cache is used!
  3784.  
  3787. TRT INetwork construction elapsed time: 0:00:00.002000
  3788.  
  3791. Build TRT engine elapsed time: 0:00:01.918650
  3792.  
  3795. Lowering submodule _run_on_acc_172 elapsed time 0:00:01.954914
  3796.  
  3799. Now lowering submodule _run_on_acc_174
  3800.  
  3803. split_name=_run_on_acc_174, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 512, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  3804.  
  3807. Timing cache is used!
  3808.  
  3811. TRT INetwork construction elapsed time: 0:00:00.003001
  3812.  
  3815. 2023-03-27 05:27:01.856
  3816.  
  3817. Build TRT engine elapsed time: 0:00:02.722756
  3818.  
  3821. Lowering submodule _run_on_acc_174 elapsed time 0:00:02.760268
  3822.  
  3825. Now lowering submodule _run_on_acc_176
  3826.  
  3829. split_name=_run_on_acc_176, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  3830.  
  3833. Timing cache is used!
  3834.  
  3837. TRT INetwork construction elapsed time: 0:00:00.004177
  3838.  
  3841. 2023-03-27 05:27:04.652
  3842.  
  3843. Build TRT engine elapsed time: 0:00:02.740045
  3844.  
  3847. Lowering submodule _run_on_acc_176 elapsed time 0:00:02.779477
  3848.  
  3851. Now lowering submodule _run_on_acc_178
  3852.  
  3855. split_name=_run_on_acc_178, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  3856.  
  3859. Timing cache is used!
  3860.  
  3863. TRT INetwork construction elapsed time: 0:00:00.003001
  3864.  
  3867. 2023-03-27 05:27:07.478
  3868.  
  3869. Build TRT engine elapsed time: 0:00:02.772769
  3870.  
  3873. Lowering submodule _run_on_acc_178 elapsed time 0:00:02.809798
  3874.  
  3877. Now lowering submodule _run_on_acc_180
  3878.  
  3881. split_name=_run_on_acc_180, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  3882.  
  3885. Timing cache is used!
  3886.  
  3889. TRT INetwork construction elapsed time: 0:00:00.003998
  3890.  
  3893. 2023-03-27 05:27:12.314
  3894.  
  3895. Build TRT engine elapsed time: 0:00:04.781494
  3896.  
  3899. Lowering submodule _run_on_acc_180 elapsed time 0:00:04.821787
  3900.  
  3903. Now lowering submodule _run_on_acc_182
  3904.  
  3907. split_name=_run_on_acc_182, input_specs=[InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  3908.  
  3911. Timing cache is used!
  3912.  
  3915. TRT INetwork construction elapsed time: 0:00:00.004128
  3916.  
  3919. 2023-03-27 05:27:40.716
  3920.  
  3921. Build TRT engine elapsed time: 0:00:28.343696
  3922.  
  3925. Lowering submodule _run_on_acc_182 elapsed time 0:00:28.385848
  3926.  
  3929. Now lowering submodule _run_on_acc_184
  3930.  
  3933. split_name=_run_on_acc_184, input_specs=[InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  3934.  
  3937. Timing cache is used!
  3938.  
  3941. TRT INetwork construction elapsed time: 0:00:00.004128
  3942.  
  3945. 2023-03-27 05:27:43.880
  3946.  
  3947. Build TRT engine elapsed time: 0:00:03.104550
  3948.  
  3951. Lowering submodule _run_on_acc_184 elapsed time 0:00:03.146690
  3952.  
  3955. Now lowering submodule _run_on_acc_186
  3956.  
  3959. split_name=_run_on_acc_186, input_specs=[InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  3960.  
  3963. Timing cache is used!
  3964.  
  3967. TRT INetwork construction elapsed time: 0:00:00.002000
  3968.  
  3971. 2023-03-27 05:27:46.898
  3972.  
  3973. Build TRT engine elapsed time: 0:00:02.965473
  3974.  
  3977. Lowering submodule _run_on_acc_186 elapsed time 0:00:03.002562
  3978.  
  3981. Now lowering submodule _run_on_acc_188
  3982.  
  3985. split_name=_run_on_acc_188, input_specs=[InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  3986.  
  3989. Timing cache is used!
  3990.  
  3993. TRT INetwork construction elapsed time: 0:00:00.001505
  3994.  
  3997. 2023-03-27 05:27:49.880
  3998.  
  3999. Build TRT engine elapsed time: 0:00:02.930511
  4000.  
  4003. Lowering submodule _run_on_acc_188 elapsed time 0:00:02.967487
  4004.  
  4007. Now lowering submodule _run_on_acc_190
  4008.  
  4011. split_name=_run_on_acc_190, input_specs=[InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  4012.  
  4015. Timing cache is used!
  4016.  
  4019. TRT INetwork construction elapsed time: 0:00:00.002011
  4020.  
  4023. 2023-03-27 05:27:54.905
  4024.  
  4025. Build TRT engine elapsed time: 0:00:04.971792
  4026.  
  4029. Lowering submodule _run_on_acc_190 elapsed time 0:00:05.010673
  4030.  
  4033. Now lowering submodule _run_on_acc_192
  4034.  
  4037. split_name=_run_on_acc_192, input_specs=[InputTensorSpec(shape=torch.Size([1, 64, 384, 640]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  4038.  
  4041. Timing cache is used!
  4042.  
  4045. TRT INetwork construction elapsed time: 0:00:00.000998
  4046.  
  4049. 2023-03-27 05:27:57.915
  4050.  
  4051. Build TRT engine elapsed time: 0:00:02.957856
  4052.  
  4055. Lowering submodule _run_on_acc_192 elapsed time 0:00:02.992994
  4056.  
  4059. Now lowering submodule _run_on_acc_194
  4060.  
  4063. split_name=_run_on_acc_194, input_specs=[InputTensorSpec(shape=torch.Size([1, 64, 384, 640]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 64, 384, 640]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  4064.  
  4067. Timing cache is used!
  4068.  
  4071. TRT INetwork construction elapsed time: 0:00:00.000999
  4072.  
  4075. 2023-03-27 05:28:00.995
  4076.  
  4077. Build TRT engine elapsed time: 0:00:03.027250
  4078.  
  4081. Lowering submodule _run_on_acc_194 elapsed time 0:00:03.063577
  4082.  
  4085. Now lowering submodule _run_on_acc_196
  4086.  
  4089. split_name=_run_on_acc_196, input_specs=[InputTensorSpec(shape=torch.Size([1, 64, 384, 640]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
  4090.  
  4093. Timing cache is used!
  4094.  
  4097. TRT INetwork construction elapsed time: 0:00:00.001001
  4098.  
  4101. 2023-03-27 05:28:01.054
  4102.  
  4103. Failed to evaluate the script:
  4104. Python exception:
  4105.  
  4106. Traceback (most recent call last):
  4107. File "src\cython\vapoursynth.pyx", line 2866, in vapoursynth._vpy_evaluate
  4108. File "src\cython\vapoursynth.pyx", line 2867, in vapoursynth._vpy_evaluate
  4109. File "J:\tmp\tempPreviewVapoursynthFile05_19_33_068.vpy", line 38, in <module>
  4110. clip = FeMaSR(clip=clip, device_index=0, trt=True, trt_cache_path=r"J:\tmp") # 640x352
  4111. File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
  4112. return func(*args, **kwargs)
  4113. File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsfemasr\__init__.py", line 171, in femasr
  4114. module = lowerer(
  4115. File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 323, in __call__
  4116. return do_lower(module, inputs)
  4117. File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\passes\pass_utils.py", line 117, in pass_with_validation
  4118. processed_module = pass_(module, input, *args, **kwargs)
  4119. File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 320, in do_lower
  4120. lower_result = pm(module)
  4121. File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\fx\passes\pass_manager.py", line 240, in __call__
  4122. out = _pass(out)
  4123. File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\fx\passes\pass_manager.py", line 240, in __call__
  4124. out = _pass(out)
  4125. File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\passes\lower_pass_manager_builder.py", line 167, in lower_func
  4126. lowered_module = self._lower_func(
  4127. File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 180, in lower_pass
  4128. interp_res: TRTInterpreterResult = interpreter(mod, input, module_name)
  4129. File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 132, in __call__
  4130. interp_result: TRTInterpreterResult = interpreter.run(
  4131. File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\fx2trt.py", line 252, in run
  4132. assert engine
  4133. AssertionError
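
The traceback ends at the assert engine check inside TRTInterpreter.run() in fx2trt.py: TensorRT's builder returns None rather than raising when it cannot produce an engine for a submodule (here _run_on_acc_196), often because GPU memory or builder workspace ran out during the tactic search. A hedged sketch of the underlying pattern with a more descriptive guard; builder, network and config are assumed to exist already, and build_engine is the long-standing TensorRT Python API (whether this torch_tensorrt version calls exactly this or the serialized variant is an assumption):

import tensorrt as trt

def build_or_explain(builder: trt.Builder,
                     network: trt.INetworkDefinition,
                     config: trt.IBuilderConfig) -> trt.ICudaEngine:
    engine = builder.build_engine(network, config)  # returns None on failure instead of raising
    if engine is None:
        raise RuntimeError(
            "TensorRT produced no engine for this submodule; check the builder log "
            "for the real cause (often VRAM or workspace exhaustion)."
        )
    return engine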
  4134.  
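A hedged way to narrow this down is to take TensorRT out of the equation and confirm that the plain PyTorch path handles this resolution first. The sketch below is not the author's script: the FeMaSR call signature is copied from the traceback, while the import line, the BlankClip stand-in source and trt=False (assumed to select the non-TensorRT path) are illustration-only assumptions.

import vapoursynth as vs
from vsfemasr import FeMaSR  # assumed import; the traceback resolves the call to femasr() in vsfemasr/__init__.py

core = vs.core
clip = core.std.BlankClip(width=640, height=352, format=vs.RGBS, length=10)  # stand-in for the real 640x352 source
clip = FeMaSR(clip=clip, device_index=0, trt=False)  # same call as line 38 of the .vpy, without TensorRT
clip.set_output()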