[03/02/2023-09:19:43] [W] [TRT] Layer 'Resize_3': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '390': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 2) [Shuffle]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '392': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 5) [Shuffle]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Div_7: DLA cores do not support DIV ElementWise operation.
[03/02/2023-09:19:43] [W] [TRT] Layer 'Div_7': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_9: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_9': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer 'GlobalAveragePool_30': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_34: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_34': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer 'GlobalAveragePool_41': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_45: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_45': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer 'GlobalAveragePool_53': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_57: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_57': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_62: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_62': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_65: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_65': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_69: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_69': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_72: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_72': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_77: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_77': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_80: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_80': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_85: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_85': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_88: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_88': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_93: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_93': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_96: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_96': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer 'GlobalAveragePool_98': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_102: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_102': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_106: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_106': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_109: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_109': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer 'GlobalAveragePool_111': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_115: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_115': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_120: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_120': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_123: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_123': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer 'GlobalAveragePool_125': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_129: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_129': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_133: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_133': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_136: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_136': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer 'GlobalAveragePool_138': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_142: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_142': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_147: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_147': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_150: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_150': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer 'GlobalAveragePool_152': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_156: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_156': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] HardSigmoid_161: Activation type: HARD_SIGMOID is not supported on DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_161': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer 'GlobalAveragePool_165': Unsupported on DLA. Switching this layer's device type to GPU.
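
The HARD_SIGMOID rejections above are typical of MobileNetV3-style backbones: DLA has no native hard-sigmoid activation, so every squeeze-and-excitation gate falls back to GPU. For reference, ONNX HardSigmoid computes y = max(0, min(1, alpha*x + beta)), and torch.nn.Hardsigmoid exports with alpha = 1/6, beta = 0.5. A purely illustrative NumPy sketch of the op (not part of the build itself):

    import numpy as np

    def hard_sigmoid(x, alpha=1.0 / 6.0, beta=0.5):
        # ONNX HardSigmoid as exported by torch.nn.Hardsigmoid:
        # y = max(0, min(1, alpha * x + beta))
        return np.clip(alpha * x + beta, 0.0, 1.0)

Whether decomposing the activation into elementwise Mul/Add plus a clipped ReLU would keep those layers on DLA depends on the DLA and TensorRT version, and would need to be verified on the target device.
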
[03/02/2023-09:19:43] [W] [TRT] Layer 'Shape_173': DLA only supports FP16 and Int8 precision type. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 174) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 175) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 175) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 176) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 177) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 177) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 178) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 178) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 179) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 179) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '990': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 191) [Shuffle]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer 'Shape_191': DLA only supports FP16 and Int8 precision type. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Slice_195: DLA only supports slicing 4 dimensional tensors.
[03/02/2023-09:19:43] [W] [TRT] Layer 'Slice_195': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 200) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 201) [Concatenation]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 201) [Concatenation]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 202) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 203) [Gather]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 204) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 205) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 205) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 206) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 207) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 207) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 208) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 208) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 209) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 209) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 210) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 211) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 211) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 212) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 212) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 213) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 214) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 214) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer 'Shape_203': DLA only supports FP16 and Int8 precision type. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 222) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 223) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 223) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 224) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 225) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 225) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 226) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 226) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 227) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 227) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 238) [Shuffle]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer 'Shape_221': DLA only supports FP16 and Int8 precision type. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Slice_225: DLA only supports slicing 4 dimensional tensors.
[03/02/2023-09:19:43] [W] [TRT] Layer 'Slice_225': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 247) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 248) [Concatenation]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 248) [Concatenation]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 249) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 250) [Gather]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 251) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 252) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 252) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 253) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 254) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 254) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 255) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 255) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 256) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 256) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 257) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 258) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 258) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 259) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 259) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 260) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 261) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 261) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer 'Shape_233': DLA only supports FP16 and Int8 precision type. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 269) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 270) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 270) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 271) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 272) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 272) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 273) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 273) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 274) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 274) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 285) [Shuffle]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer 'Shape_251': DLA only supports FP16 and Int8 precision type. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Slice_255: DLA only supports slicing 4 dimensional tensors.
[03/02/2023-09:19:43] [W] [TRT] Layer 'Slice_255': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 294) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 295) [Concatenation]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 295) [Concatenation]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 296) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 297) [Gather]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 298) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 299) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 299) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 300) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 301) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 301) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 302) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 302) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 303) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 303) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 304) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 305) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 305) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 306) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 306) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 307) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 308) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 308) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer 'Shape_263': DLA only supports FP16 and Int8 precision type. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 316) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 317) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 317) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 318) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 319) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 319) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 320) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 320) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 321) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 321) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 332) [Shuffle]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer 'Shape_281': DLA only supports FP16 and Int8 precision type. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Slice_285: DLA only supports slicing 4 dimensional tensors.
[03/02/2023-09:19:43] [W] [TRT] Layer 'Slice_285': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 341) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 342) [Concatenation]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 342) [Concatenation]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 343) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 344) [Gather]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 345) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 346) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 346) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 347) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 348) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 348) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 349) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 349) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 350) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 350) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 351) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 352) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 352) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 353) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 353) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 354) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 355) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 355) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer 'ReduceMean_296': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer 'ReduceMean_298': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer 'Shape_321': DLA only supports FP16 and Int8 precision type. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Slice_325: DLA only supports slicing 4 dimensional tensors.
[03/02/2023-09:19:43] [W] [TRT] Layer 'Slice_325': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer '815': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Concat_331: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'Concat_331': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Resize_332: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'Resize_332': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Layer 'Shape_335': DLA only supports FP16 and Int8 precision type. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Slice_339: DLA only supports slicing 4 dimensional tensors.
[03/02/2023-09:19:43] [W] [TRT] Layer 'Slice_339': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Concat_345: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'Concat_345': Unsupported on DLA. Switching this layer's device type to GPU.
[03/02/2023-09:19:43] [W] [TRT] Resize_346: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
[03/02/2023-09:19:43] [W] [TRT] Layer 'Resize_346': Unsupported on DLA. Switching this layer's device type to GPU.
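
The repeated "Please explicitly use CAST operator in ONNX model" warnings above refer to INT32 shape tensors feeding layers that were candidates for DLA. One way to follow that advice is to edit the ONNX graph and insert an explicit Cast node; the sketch below uses the onnx package directly, and the tensor name and file path are hypothetical rather than taken from this model:

    import onnx
    from onnx import TensorProto, helper

    model = onnx.load("model.onnx")   # path is illustrative
    graph = model.graph

    # Hypothetical tensor name; the real INT32 tensor would be found by
    # inspecting the model (e.g. in Netron).
    src, dst = "shape_out", "shape_out_f32"
    cast = helper.make_node("Cast", inputs=[src], outputs=[dst],
                            to=TensorProto.FLOAT, name="Cast_" + src)

    # Rewire every consumer of `src` to read the cast output, then insert
    # the Cast node before the first consumer to keep topological order.
    first = None
    for idx, node in enumerate(graph.node):
        for i, name in enumerate(node.input):
            if name == src:
                node.input[i] = dst
                if first is None:
                    first = idx
    graph.node.insert(first if first is not None else 0, cast)
    onnx.save(model, "model_cast.onnx")
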
[03/02/2023-09:19:43] [V] [TRT] Applying generic optimizations to the graph for inference.
[03/02/2023-09:19:43] [V] [TRT] Original: 301 layers
[03/02/2023-09:19:43] [V] [TRT] After dead-layer removal: 301 layers
[03/02/2023-09:19:43] [V] [TRT] Running: ConstShuffleFusion on 390
[03/02/2023-09:19:43] [V] [TRT] ConstShuffleFusion: Fusing 390 with (Unnamed Layer* 2) [Shuffle]
[03/02/2023-09:19:43] [V] [TRT] Running: ConstShuffleFusion on 392
[03/02/2023-09:19:43] [V] [TRT] ConstShuffleFusion: Fusing 392 with (Unnamed Layer* 5) [Shuffle]
[03/02/2023-09:19:43] [V] [TRT] After Myelin optimization: 299 layers
[03/02/2023-09:19:43] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
[03/02/2023-09:19:43] [W] [TRT] Validation failed for DLA layer: Sub_5. Switching to GPU fallback.
[03/02/2023-09:19:43] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
[03/02/2023-09:19:43] [W] [TRT] Validation failed for DLA layer: Mul_35. Switching to GPU fallback.
[03/02/2023-09:19:43] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
[03/02/2023-09:19:43] [W] [TRT] Validation failed for DLA layer: Mul_46. Switching to GPU fallback.
[03/02/2023-09:19:43] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
[03/02/2023-09:19:43] [W] [TRT] Validation failed for DLA layer: Mul_58. Switching to GPU fallback.
[03/02/2023-09:19:43] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
[03/02/2023-09:19:43] [W] [TRT] Validation failed for DLA layer: Mul_103. Switching to GPU fallback.
[03/02/2023-09:19:43] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
[03/02/2023-09:19:43] [W] [TRT] Validation failed for DLA layer: Mul_116. Switching to GPU fallback.
[03/02/2023-09:19:43] [W] [TRT] Validation failed for DLA layer: Conv_122. Switching to GPU fallback.
[03/02/2023-09:19:43] [W] [TRT] Validation failed for DLA layer: Conv_122. Switching to GPU fallback.
[03/02/2023-09:19:43] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
[03/02/2023-09:19:43] [W] [TRT] Validation failed for DLA layer: Mul_130. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Conv_135. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Conv_135. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Mul_143. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Conv_149. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Conv_149. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Mul_157. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Mul_168. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Mul_168. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] Expand_174: DLA does not support slicing along the batch dimension.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Expand_174. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] Expand_174: DLA does not support slicing along the batch dimension.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Expand_174. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Sub_184. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Sub_184. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] Expand_204: DLA does not support slicing along the batch dimension.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Expand_204. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] Expand_204: DLA does not support slicing along the batch dimension.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Expand_204. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Sub_214. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Sub_214. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] Expand_234: DLA does not support slicing along the batch dimension.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Expand_234. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] Expand_234: DLA does not support slicing along the batch dimension.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Expand_234. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Sub_244. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Sub_244. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] Expand_264: DLA does not support slicing along the batch dimension.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Expand_264. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] Expand_264: DLA does not support slicing along the batch dimension.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Expand_264. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Sub_274. Switching to GPU fallback.
[03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
[03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Sub_274. Switching to GPU fallback.
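
Fallback messages like the ones above only appear when the build permits them; with DLA requested and no fallback allowed, unsupported layers would abort the build instead. This log was plausibly produced by trtexec with flags along the lines of --useDLACore=0 --allowGPUFallback --fp16 --verbose; a minimal equivalent sketch with the TensorRT Python API (TensorRT 8.x assumed, file name illustrative):

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.VERBOSE)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)            # DLA needs FP16 or INT8
    config.default_device_type = trt.DeviceType.DLA  # prefer DLA for every layer
    config.DLA_core = 0
    config.set_flag(trt.BuilderFlag.GPU_FALLBACK)    # allows the fallbacks seen here
    engine_bytes = builder.build_serialized_network(network, config)
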
[03/02/2023-09:19:44] [V] [TRT] {ForeignNode[Concat_270...Sub_318]} successfully offloaded to DLA.
[03/02/2023-09:19:44] [V] [TRT] Memory consumption details:
[03/02/2023-09:19:44] [V] [TRT]         Pool Sizes: Managed SRAM = 0.5 MiB,     Local DRAM = 1024 MiB,  Global DRAM = 512 MiB
[03/02/2023-09:19:44] [V] [TRT]         Required: Managed SRAM = 0.5 MiB,       Local DRAM = 128 MiB,   Global DRAM = 4 MiB
[03/02/2023-09:19:44] [V] [TRT] {ForeignNode[Mul_10...Relu_29]} successfully offloaded to DLA.
[03/02/2023-09:19:44] [V] [TRT] Memory consumption details:
[03/02/2023-09:19:44] [V] [TRT]         Pool Sizes: Managed SRAM = 0.5 MiB,     Local DRAM = 1024 MiB,  Global DRAM = 512 MiB
[03/02/2023-09:19:44] [V] [TRT]         Required: Managed SRAM = 0.25 MiB,      Local DRAM = 32 MiB,    Global DRAM = 4 MiB
[03/02/2023-09:19:44] [V] [TRT] {ForeignNode[Concat_180...Split_202_10]} successfully offloaded to DLA.
[03/02/2023-09:19:44] [V] [TRT] Memory consumption details:
[03/02/2023-09:19:44] [V] [TRT]         Pool Sizes: Managed SRAM = 0.5 MiB,     Local DRAM = 1024 MiB,  Global DRAM = 512 MiB
[03/02/2023-09:19:44] [V] [TRT]         Required: Managed SRAM = 0.125 MiB,     Local DRAM = 8 MiB,     Global DRAM = 4 MiB
[03/02/2023-09:19:44] [V] [TRT] {ForeignNode[Concat_210...Split_232_21]} successfully offloaded to DLA.
[03/02/2023-09:19:44] [V] [TRT] Memory consumption details:
[03/02/2023-09:19:44] [V] [TRT]         Pool Sizes: Managed SRAM = 0.5 MiB,     Local DRAM = 1024 MiB,  Global DRAM = 512 MiB
[03/02/2023-09:19:44] [V] [TRT]         Required: Managed SRAM = 0.0625 MiB,    Local DRAM = 16 MiB,    Global DRAM = 4 MiB
[03/02/2023-09:19:44] [V] [TRT] {ForeignNode[Concat_240...Split_262_32]} successfully offloaded to DLA.
[03/02/2023-09:19:44] [V] [TRT] Memory consumption details:
[03/02/2023-09:19:44] [V] [TRT]         Pool Sizes: Managed SRAM = 0.5 MiB,     Local DRAM = 1024 MiB,  Global DRAM = 512 MiB
[03/02/2023-09:19:44] [V] [TRT]         Required: Managed SRAM = 0.03125 MiB,   Local DRAM = 32 MiB,    Global DRAM = 4 MiB
[03/02/2023-09:19:44] [W] [TRT] {ForeignNode[AveragePool_169...Sigmoid_167]} cannot be compiled by DLA, falling back to GPU.
[03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Concat_297...Clip_352]} successfully offloaded to DLA.
[03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
[03/02/2023-09:19:46] [V] [TRT]         Pool Sizes: Managed SRAM = 0.5 MiB,     Local DRAM = 1024 MiB,  Global DRAM = 512 MiB
[03/02/2023-09:19:46] [V] [TRT]         Required: Managed SRAM = 0.015625 MiB,  Local DRAM = 512 MiB,   Global DRAM = 4 MiB
[03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Conv_47...Relu_52]} successfully offloaded to DLA.
[03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
[03/02/2023-09:19:46] [V] [TRT]         Pool Sizes: Managed SRAM = 0.5 MiB,     Local DRAM = 1024 MiB,  Global DRAM = 512 MiB
[03/02/2023-09:19:46] [V] [TRT]         Required: Managed SRAM = 0.0078125 MiB, Local DRAM = 8 MiB,     Global DRAM = 4 MiB
[03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Concat_175...Mul_179]} successfully offloaded to DLA.
[03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
[03/02/2023-09:19:46] [V] [TRT]         Pool Sizes: Managed SRAM = 0.5 MiB,     Local DRAM = 1024 MiB,  Global DRAM = 512 MiB
[03/02/2023-09:19:46] [V] [TRT]         Required: Managed SRAM = 0.00390625 MiB,        Local DRAM = 4 MiB,     Global DRAM = 4 MiB
[03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Concat_205...Mul_209]} successfully offloaded to DLA.
[03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
[03/02/2023-09:19:46] [V] [TRT]         Pool Sizes: Managed SRAM = 0.5 MiB,     Local DRAM = 1024 MiB,  Global DRAM = 512 MiB
[03/02/2023-09:19:46] [V] [TRT]         Required: Managed SRAM = 0 MiB, Local DRAM = 8 MiB,     Global DRAM = 4 MiB
[03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Concat_235...Mul_239]} successfully offloaded to DLA.
[03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
[03/02/2023-09:19:46] [V] [TRT]         Pool Sizes: Managed SRAM = 0.5 MiB,     Local DRAM = 1024 MiB,  Global DRAM = 512 MiB
[03/02/2023-09:19:46] [V] [TRT]         Required: Managed SRAM = 0 MiB, Local DRAM = 8 MiB,     Global DRAM = 4 MiB
[03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Concat_265...Mul_269]} successfully offloaded to DLA.
[03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
[03/02/2023-09:19:46] [V] [TRT]         Pool Sizes: Managed SRAM = 0.5 MiB,     Local DRAM = 1024 MiB,  Global DRAM = 512 MiB
[03/02/2023-09:19:46] [V] [TRT]         Required: Managed SRAM = 0 MiB, Local DRAM = 32 MiB,    Global DRAM = 4 MiB
[03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Conv_36...Relu_40]} successfully offloaded to DLA.
[03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
[03/02/2023-09:19:46] [V] [TRT]         Pool Sizes: Managed SRAM = 0.5 MiB,     Local DRAM = 1024 MiB,  Global DRAM = 512 MiB
[03/02/2023-09:19:46] [V] [TRT]         Required: Managed SRAM = 0 MiB, Local DRAM = 8 MiB,     Global DRAM = 4 MiB
[03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Mul_73...Conv_76]} successfully offloaded to DLA.
[03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
[03/02/2023-09:19:46] [V] [TRT]         Pool Sizes: Managed SRAM = 0.5 MiB,     Local DRAM = 1024 MiB,  Global DRAM = 512 MiB
[03/02/2023-09:19:46] [V] [TRT]         Required: Managed SRAM = 0 MiB, Local DRAM = 4 MiB,     Global DRAM = 4 MiB
[03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Mul_81...Conv_84]} successfully offloaded to DLA.
[03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
[03/02/2023-09:19:46] [V] [TRT]         Pool Sizes: Managed SRAM = 0.5 MiB,     Local DRAM = 1024 MiB,  Global DRAM = 512 MiB
[03/02/2023-09:19:46] [V] [TRT]         Required: Managed SRAM = 0 MiB, Local DRAM = 4 MiB,     Global DRAM = 4 MiB
[03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Mul_89...Conv_92]} successfully offloaded to DLA.
[03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
[03/02/2023-09:19:46] [V] [TRT]         Pool Sizes: Managed SRAM = 0.5 MiB,     Local DRAM = 1024 MiB,  Global DRAM = 512 MiB
[03/02/2023-09:19:46] [V] [TRT]         Required: Managed SRAM = 0 MiB, Local DRAM = 4 MiB,     Global DRAM = 4 MiB
[03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Conv_31...Conv_33]} successfully offloaded to DLA.
[03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
[03/02/2023-09:19:46] [V] [TRT]         Pool Sizes: Managed SRAM = 0.5 MiB,     Local DRAM = 1024 MiB,  Global DRAM = 512 MiB
[03/02/2023-09:19:46] [V] [TRT]         Required: Managed SRAM = 0 MiB, Local DRAM = 4 MiB,     Global DRAM = 4 MiB
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_42
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_54
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_59
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_66
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_99
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_112
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_117
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_126
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_139
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_144
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_153
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_158
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_63
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_70
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_78
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_86
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_94
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_104
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_107
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_131
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Split_172
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_8
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_97
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_110
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_121
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_124
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_134
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_137
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_148
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_151
[03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_162
[03/02/2023-09:19:46] [V] [TRT] DLA Memory Consumption Summary:
[03/02/2023-09:19:46] [V] [TRT]         Number of DLA node candidates offloaded : 16 out of 48
[03/02/2023-09:19:46] [V] [TRT]         Total memory required by accepted candidates : Managed SRAM = 0.99609375 MiB,   Local DRAM = 812 MiB,   Global DRAM = 64 MiB
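
The "Pool Sizes" figures above are the configurable DLA memory pools (managed SRAM, local DRAM, global DRAM); a candidate subgraph that needs more than a pool provides is rejected and falls back to GPU. Assuming TensorRT 8.4+ (where set_memory_pool_limit exists; trtexec exposes the same knobs via --memPoolSize), the limits reported in this log could be set explicitly on a builder config like the one sketched earlier:

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    config = trt.Builder(logger).create_builder_config()
    # Mirror the pool sizes reported in this log (values in bytes).
    config.set_memory_pool_limit(trt.MemoryPoolType.DLA_MANAGED_SRAM, 1 << 19)  # 0.5 MiB
    config.set_memory_pool_limit(trt.MemoryPoolType.DLA_LOCAL_DRAM, 1 << 30)    # 1024 MiB
    config.set_memory_pool_limit(trt.MemoryPoolType.DLA_GLOBAL_DRAM, 1 << 29)   # 512 MiB
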
  410. [03/02/2023-09:19:46] [V] [TRT] After DLA optimization: 186 layers
  411. [03/02/2023-09:19:46] [V] [TRT] Applying ScaleNodes fusions.
  412. [03/02/2023-09:19:46] [V] [TRT] After scale fusion: 186 layers
  413. [03/02/2023-09:19:46] [V] [TRT] Running: ReduceToPoolingFusion on GlobalAveragePool_30
  414. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of GlobalAveragePool_30 from REDUCE to POOLING
  415. [03/02/2023-09:19:46] [V] [TRT] Running: ReduceToPoolingFusion on GlobalAveragePool_41
  416. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of GlobalAveragePool_41 from REDUCE to POOLING
  417. [03/02/2023-09:19:46] [V] [TRT] Running: ConvReluFusion on Conv_42
  418. [03/02/2023-09:19:46] [V] [TRT] ConvReluFusion: Fusing Conv_42 with Relu_43
  419. [03/02/2023-09:19:46] [V] [TRT] Running: ReduceToPoolingFusion on GlobalAveragePool_53
  420. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of GlobalAveragePool_53 from REDUCE to POOLING
  421. [03/02/2023-09:19:46] [V] [TRT] Running: ConvReluFusion on Conv_54
  422. [03/02/2023-09:19:46] [V] [TRT] ConvReluFusion: Fusing Conv_54 with Relu_55
  423. [03/02/2023-09:19:46] [V] [TRT] Running: ConvEltwiseSumFusion on Conv_59
  424. [03/02/2023-09:19:46] [V] [TRT] ConvEltwiseSumFusion: Fusing Conv_59 with Add_60
  425. [03/02/2023-09:19:46] [V] [TRT] Running: ReduceToPoolingFusion on GlobalAveragePool_98
  426. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of GlobalAveragePool_98 from REDUCE to POOLING
  427. [03/02/2023-09:19:46] [V] [TRT] Running: ConvReluFusion on Conv_99
  428. [03/02/2023-09:19:46] [V] [TRT] ConvReluFusion: Fusing Conv_99 with Relu_100
  429. [03/02/2023-09:19:46] [V] [TRT] Running: ReduceToPoolingFusion on GlobalAveragePool_111
  430. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of GlobalAveragePool_111 from REDUCE to POOLING
  431. [03/02/2023-09:19:46] [V] [TRT] Running: ConvReluFusion on Conv_112
  432. [03/02/2023-09:19:46] [V] [TRT] ConvReluFusion: Fusing Conv_112 with Relu_113
  433. [03/02/2023-09:19:46] [V] [TRT] Running: ConvEltwiseSumFusion on Conv_117
  434. [03/02/2023-09:19:46] [V] [TRT] ConvEltwiseSumFusion: Fusing Conv_117 with Add_118
  435. [03/02/2023-09:19:46] [V] [TRT] Running: ReduceToPoolingFusion on GlobalAveragePool_125
  436. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of GlobalAveragePool_125 from REDUCE to POOLING
  437. [03/02/2023-09:19:46] [V] [TRT] Running: ConvReluFusion on Conv_126
  438. [03/02/2023-09:19:46] [V] [TRT] ConvReluFusion: Fusing Conv_126 with Relu_127
  439. [03/02/2023-09:19:46] [V] [TRT] Running: ReduceToPoolingFusion on GlobalAveragePool_138
  440. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of GlobalAveragePool_138 from REDUCE to POOLING
  441. [03/02/2023-09:19:46] [V] [TRT] Running: ConvReluFusion on Conv_139
  442. [03/02/2023-09:19:46] [V] [TRT] ConvReluFusion: Fusing Conv_139 with Relu_140
  443. [03/02/2023-09:19:46] [V] [TRT] Running: ConvEltwiseSumFusion on Conv_144
  444. [03/02/2023-09:19:46] [V] [TRT] ConvEltwiseSumFusion: Fusing Conv_144 with Add_145
  445. [03/02/2023-09:19:46] [V] [TRT] Running: ReduceToPoolingFusion on GlobalAveragePool_152
  446. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of GlobalAveragePool_152 from REDUCE to POOLING
  447. [03/02/2023-09:19:46] [V] [TRT] Running: ConvReluFusion on Conv_153
  448. [03/02/2023-09:19:46] [V] [TRT] ConvReluFusion: Fusing Conv_153 with Relu_154
  449. [03/02/2023-09:19:46] [V] [TRT] Running: ConvEltwiseSumFusion on Conv_158
  450. [03/02/2023-09:19:46] [V] [TRT] ConvEltwiseSumFusion: Fusing Conv_158 with Add_159
  451. [03/02/2023-09:19:46] [V] [TRT] Running: ConstantSplit on 990
  452. [03/02/2023-09:19:46] [V] [TRT] Running: ConstShuffleFusion on 990
  453. [03/02/2023-09:19:46] [V] [TRT] ConstShuffleFusion: Fusing 990 with (Unnamed Layer* 191) [Shuffle]
  454. [03/02/2023-09:19:46] [V] [TRT] Running: ConstShuffleFusion on 990_clone_1
  455. [03/02/2023-09:19:46] [V] [TRT] ConstShuffleFusion: Fusing 990_clone_1 with (Unnamed Layer* 238) [Shuffle]
  456. [03/02/2023-09:19:46] [V] [TRT] Running: ConstShuffleFusion on 990_clone_2
  457. [03/02/2023-09:19:46] [V] [TRT] ConstShuffleFusion: Fusing 990_clone_2 with (Unnamed Layer* 285) [Shuffle]
  458. [03/02/2023-09:19:46] [V] [TRT] Running: ConstShuffleFusion on 990_clone_3
  459. [03/02/2023-09:19:46] [V] [TRT] ConstShuffleFusion: Fusing 990_clone_3 with (Unnamed Layer* 332) [Shuffle]
  460. [03/02/2023-09:19:46] [V] [TRT] Running: ReduceToPoolingFusion on GlobalAveragePool_165
  461. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of GlobalAveragePool_165 from REDUCE to POOLING
  462. [03/02/2023-09:19:46] [V] [TRT] Running: ConvReluFusion on Conv_163
  463. [03/02/2023-09:19:46] [V] [TRT] ConvReluFusion: Fusing Conv_163 with Relu_164
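The passes above are TensorRT's vertical fusions: GlobalAveragePool reduces become pooling layers, each Conv absorbs its trailing Relu or residual Add, and constant/shuffle pairs collapse into single constants, cutting kernel launches without changing the arithmetic. A minimal NumPy sketch of the idea (hypothetical shapes, a 1x1 conv for brevity), showing that the fused epilogue computes exactly what the separate Conv, Add, and Relu layers did:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal((8, 4, 4))      # input (C_in, H, W)
    w = rng.standard_normal((16, 8))        # 1x1 conv weight (C_out, C_in)
    b = rng.standard_normal(16)             # bias
    skip = rng.standard_normal((16, 4, 4))  # residual branch

    def conv1x1(x, w, b):
        # a 1x1 convolution is a per-pixel matrix multiply
        return np.tensordot(w, x, axes=([1], [0])) + b[:, None, None]

    # unfused: three separate passes over the tensor (Conv, then Add, then Relu)
    y_unfused = np.maximum(conv1x1(x, w, b) + skip, 0.0)

    # fused: the same arithmetic folded into one conv epilogue, which is what
    # ConvReluFusion / ConvEltwiseSumFusion arrange on the device
    y_fused = np.maximum(
        np.tensordot(w, x, axes=([1], [0])) + b[:, None, None] + skip, 0.0)

    assert np.allclose(y_unfused, y_fused)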
  464. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_9
  465. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_9 from ACTIVATION to POINTWISE
  466. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_34
  467. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_34 from ACTIVATION to POINTWISE
  468. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_45
  469. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_45 from ACTIVATION to POINTWISE
  470. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_57
  471. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_57 from ACTIVATION to POINTWISE
  472. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_62
  473. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_62 from ACTIVATION to POINTWISE
  474. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_65
  475. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_65 from ACTIVATION to POINTWISE
  476. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_69
  477. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_69 from ACTIVATION to POINTWISE
  478. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_72
  479. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_72 from ACTIVATION to POINTWISE
  480. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_77
  481. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_77 from ACTIVATION to POINTWISE
  482. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_80
  483. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_80 from ACTIVATION to POINTWISE
  484. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_85
  485. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_85 from ACTIVATION to POINTWISE
  486. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_88
  487. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_88 from ACTIVATION to POINTWISE
  488. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_93
  489. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_93 from ACTIVATION to POINTWISE
  490. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_96
  491. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_96 from ACTIVATION to POINTWISE
  492. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_102
  493. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_102 from ACTIVATION to POINTWISE
  494. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_106
  495. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_106 from ACTIVATION to POINTWISE
  496. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_109
  497. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_109 from ACTIVATION to POINTWISE
  498. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_115
  499. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_115 from ACTIVATION to POINTWISE
  500. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_120
  501. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_120 from ACTIVATION to POINTWISE
  502. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_123
  503. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_123 from ACTIVATION to POINTWISE
  504. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_129
  505. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_129 from ACTIVATION to POINTWISE
  506. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_133
  507. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_133 from ACTIVATION to POINTWISE
  508. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_136
  509. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_136 from ACTIVATION to POINTWISE
  510. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_142
  511. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_142 from ACTIVATION to POINTWISE
  512. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_147
  513. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_147 from ACTIVATION to POINTWISE
  514. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_150
  515. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_150 from ACTIVATION to POINTWISE
  516. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_156
  517. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_156 from ACTIVATION to POINTWISE
  518. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_161
  519. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_161 from ACTIVATION to POINTWISE
  520. [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on Sigmoid_167
  521. [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of Sigmoid_167 from ACTIVATION to POINTWISE
  522. [03/02/2023-09:19:46] [V] [TRT] After dupe layer removal: 173 layers
  523. [03/02/2023-09:19:46] [V] [TRT] After final dead-layer removal: 173 layers
  524. [03/02/2023-09:19:46] [V] [TRT] After tensor merging: 173 layers
  525. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on Sub_5
  526. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing Sub_5 with Div_7
  527. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_34)
  528. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_34) with Mul_35
  529. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_45)
  530. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_45) with Mul_46
  531. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_57)
  532. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_57) with Mul_58
  533. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_62)
  534. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_62) with Mul_63
  535. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_65)
  536. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_65) with Mul_66
  537. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_69)
  538. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_69) with Mul_70
  539. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_77)
  540. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_77) with Mul_78
  541. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_85)
  542. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_85) with Mul_86
  543. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_93)
  544. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_93) with Mul_94
  545. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_96)
  546. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_96) with Mul_97
  547. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_102)
  548. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_102) with Mul_103
  549. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_106)
  550. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_106) with Mul_107
  551. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_109)
  552. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_109) with Mul_110
  553. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_115)
  554. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_115) with Mul_116
  555. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_120)
  556. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_120) with Mul_121
  557. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_123)
  558. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_123) with Mul_124
  559. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_129)
  560. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_129) with Mul_130
  561. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_133)
  562. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_133) with Mul_134
  563. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_136)
  564. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_136) with Mul_137
  565. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_142)
  566. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_142) with Mul_143
  567. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_147)
  568. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_147) with Mul_148
  569. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_150)
  570. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_150) with Mul_151
  571. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_156)
  572. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_156) with Mul_157
  573. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_161)
  574. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_161) with Mul_162
  575. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on 990 + (Unnamed Layer* 191) [Shuffle]
  576. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing 990 + (Unnamed Layer* 191) [Shuffle] with Sub_184
  577. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on 990_clone_1 + (Unnamed Layer* 238) [Shuffle]
  578. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing 990_clone_1 + (Unnamed Layer* 238) [Shuffle] with Sub_214
  579. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on 990_clone_2 + (Unnamed Layer* 285) [Shuffle]
  580. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing 990_clone_2 + (Unnamed Layer* 285) [Shuffle] with Sub_244
  581. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on 990_clone_3 + (Unnamed Layer* 332) [Shuffle]
  582. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing 990_clone_3 + (Unnamed Layer* 332) [Shuffle] with Sub_274
  583. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on Mul_309
  584. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing Mul_309 with Sub_310
  585. [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(Sigmoid_167)
  586. [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(Sigmoid_167) with Mul_168
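Every PWN(HardSigmoid_N)/Mul_N pair fused above is the two-op ONNX decomposition of hard-swish, x * hard_sigmoid(x), as MobileNetV3-style backbones typically export it; converting the activations to POINTWISE form first is what lets the builder code-generate each pair as a single PWN kernel. A quick NumPy check of the identity (alpha and beta are assumed here to be the usual PyTorch-export values, not read from the actual model):

    import numpy as np

    def hard_sigmoid(x, alpha=1.0 / 6.0, beta=0.5):
        # ONNX HardSigmoid: clip(alpha * x + beta, 0, 1)
        return np.clip(alpha * x + beta, 0.0, 1.0)

    x = np.linspace(-4.0, 4.0, 9)
    hard_swish = x * hard_sigmoid(x)  # what PWN(PWN(HardSigmoid), Mul) computes
    print(np.round(hard_swish, 3))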
  587. [03/02/2023-09:19:46] [V] [TRT] Running: GenericConvActFusion on Conv_61
  588. [03/02/2023-09:19:46] [V] [TRT] GenericConvActFusion: Fusing Conv_61 with PWN(PWN(HardSigmoid_62), Mul_63)
  589. [03/02/2023-09:19:46] [V] [TRT] Running: GenericConvActFusion on Conv_68
  590. [03/02/2023-09:19:46] [V] [TRT] GenericConvActFusion: Fusing Conv_68 with PWN(PWN(HardSigmoid_69), Mul_70)
  591. [03/02/2023-09:19:46] [V] [TRT] Running: GenericConvActFusion on Conv_105
  592. [03/02/2023-09:19:46] [V] [TRT] GenericConvActFusion: Fusing Conv_105 with PWN(PWN(HardSigmoid_106), Mul_107)
  593. [03/02/2023-09:19:46] [V] [TRT] Running: GenericConvActFusion on Conv_119
  594. [03/02/2023-09:19:46] [V] [TRT] GenericConvActFusion: Fusing Conv_119 with PWN(PWN(HardSigmoid_120), Mul_121)
  595. [03/02/2023-09:19:46] [V] [TRT] Running: GenericConvActFusion on Conv_132
  596. [03/02/2023-09:19:46] [V] [TRT] GenericConvActFusion: Fusing Conv_132 with PWN(PWN(HardSigmoid_133), Mul_134)
  597. [03/02/2023-09:19:46] [V] [TRT] Running: GenericConvActFusion on Conv_146
  598. [03/02/2023-09:19:46] [V] [TRT] GenericConvActFusion: Fusing Conv_146 with PWN(PWN(HardSigmoid_147), Mul_148)
  599. [03/02/2023-09:19:46] [V] [TRT] Running: GenericConvActFusion on Conv_160
  600. [03/02/2023-09:19:46] [V] [TRT] GenericConvActFusion: Fusing Conv_160 with PWN(PWN(HardSigmoid_161), Mul_162)
  601. [03/02/2023-09:19:46] [V] [TRT] After vertical fusions: 135 layers
  602. [03/02/2023-09:19:46] [V] [TRT] After dupe layer removal: 135 layers
  603. [03/02/2023-09:19:46] [V] [TRT] After final dead-layer removal: 135 layers
  604. [03/02/2023-09:19:46] [V] [TRT] After tensor merging: 135 layers
  605. [03/02/2023-09:19:46] [V] [TRT] Replacing slice Split_172 with copy from 601 to 605
  606. [03/02/2023-09:19:46] [V] [TRT] Replacing slice Split_172_0 with copy from 601 to 606
  607. [03/02/2023-09:19:46] [V] [TRT] After slice removal: 135 layers
  608. [03/02/2023-09:19:46] [V] [TRT] Eliminating concatenation Concat_299
  609. [03/02/2023-09:19:46] [V] [TRT] Generating copy for 389 to 758 because input does not support striding.
  610. [03/02/2023-09:19:46] [V] [TRT] Generating copy for 757 to 758 because input does not support striding.
  611. [03/02/2023-09:19:46] [V] [TRT] After concat removal: 136 layers
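Concat_299 is eliminated by having its producers write directly into slices of the preallocated output buffer; the two "Generating copy" lines mark the cases where a producer cannot write with a stride, so an explicit copy node is inserted instead, which is why the layer count ticks up to 136. A NumPy sketch of the transformation, with made-up shapes:

    import numpy as np

    a = np.ones((2, 4, 4))
    b = np.zeros((3, 4, 4))

    # explicit concat: an extra kernel plus a full new buffer
    out_concat = np.concatenate([a, b], axis=0)

    # eliminated concat: producers write straight into the output's slices;
    # a producer that cannot write strided output gets a copy node instead
    out = np.empty((5, 4, 4))
    out[:2] = a
    out[2:] = b
    assert np.array_equal(out, out_concat)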
  612. [03/02/2023-09:19:46] [V] [TRT] Trying to split Reshape and strided tensor
  613. [03/02/2023-09:19:46] [V] [TRT] Graph construction and optimization completed in 2.73098 seconds.
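The DLA/GPU split printed next follows from building with DLA as the default device type and GPU fallback enabled: the subgraphs DLA supports become ForeignNode blocks, and every layer the earlier warnings flagged (Resize, Div, HardSigmoid, GlobalAveragePool, and so on) lands on the GPU. A sketch of the equivalent build through the TensorRT Python API, using TensorRT 8.x names; the original run used trtexec, so the file name here is an assumption:

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.VERBOSE)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open("model.onnx", "rb") as f:  # hypothetical file name
        parser.parse(f.read())

    config = builder.create_builder_config()
    config.default_device_type = trt.DeviceType.DLA  # prefer DLA ...
    config.DLA_core = 0
    config.set_flag(trt.BuilderFlag.GPU_FALLBACK)    # ... but allow GPU fallback
    config.set_flag(trt.BuilderFlag.FP16)            # DLA needs FP16 or INT8

    engine_bytes = builder.build_serialized_network(network, config)

The matching trtexec flags would be --useDLACore=0 --allowGPUFallback --fp16.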
  614. [03/02/2023-09:19:46] [I] [TRT] ---------- Layers Running on DLA ----------
  615. [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Mul_10...Relu_29]}
  616. [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Conv_31...Conv_33]}
  617. [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Conv_36...Relu_40]}
  618. [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Conv_47...Relu_52]}
  619. [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Mul_73...Conv_76]}
  620. [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Mul_81...Conv_84]}
  621. [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Mul_89...Conv_92]}
  622. [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Concat_175...Mul_179]}
  623. [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Concat_180...Split_202_10]}
  624. [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Concat_205...Mul_209]}
  625. [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Concat_210...Split_232_21]}
  626. [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Concat_235...Mul_239]}
  627. [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Concat_240...Split_262_32]}
  628. [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Concat_265...Mul_269]}
  629. [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Concat_270...Sub_318]}
  630. [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Concat_297...Clip_352]}
  631. [03/02/2023-09:19:46] [I] [TRT] ---------- Layers Running on GPU ----------
  632. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] COPY: Reformatting CopyNode for Network Input src
  633. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONSTANT: 390 + (Unnamed Layer* 2) [Shuffle]
  634. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] RESIZE: Resize_3
  635. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONSTANT: 392 + (Unnamed Layer* 5) [Shuffle]
  636. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(Sub_5, Div_7)
  637. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_8
  638. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(HardSigmoid_9)
  639. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: GlobalAveragePool_30
  640. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_34), Mul_35)
  641. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: GlobalAveragePool_41
  642. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_42 + Relu_43
  643. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_44
  644. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_45), Mul_46)
  645. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: GlobalAveragePool_53
  646. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_54 + Relu_55
  647. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_56
  648. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_57), Mul_58)
  649. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_59 + Add_60
  650. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_61 + PWN(PWN(HardSigmoid_62), Mul_63)
  651. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_64
  652. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_65), Mul_66)
  653. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_67
  654. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_68 + PWN(PWN(HardSigmoid_69), Mul_70)
  655. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_71
  656. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(HardSigmoid_72)
  657. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_77), Mul_78)
  658. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_79
  659. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(HardSigmoid_80)
  660. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_85), Mul_86)
  661. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_87
  662. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(HardSigmoid_88)
  663. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_93), Mul_94)
  664. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_95
  665. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_96), Mul_97)
  666. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: GlobalAveragePool_98
  667. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_99 + Relu_100
  668. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_101
  669. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_102), Mul_103)
  670. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_104
  671. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_105 + PWN(PWN(HardSigmoid_106), Mul_107)
  672. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_108
  673. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_109), Mul_110)
  674. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: GlobalAveragePool_111
  675. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_112 + Relu_113
  676. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_114
  677. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_115), Mul_116)
  678. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_117 + Add_118
  679. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_119 + PWN(PWN(HardSigmoid_120), Mul_121)
  680. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_122
  681. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_123), Mul_124)
  682. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: GlobalAveragePool_125
  683. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_126 + Relu_127
  684. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_128
  685. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_129), Mul_130)
  686. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_131
  687. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_132 + PWN(PWN(HardSigmoid_133), Mul_134)
  688. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_135
  689. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_136), Mul_137)
  690. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: GlobalAveragePool_138
  691. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_139 + Relu_140
  692. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_141
  693. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_142), Mul_143)
  694. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_144 + Add_145
  695. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_146 + PWN(PWN(HardSigmoid_147), Mul_148)
  696. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_149
  697. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_150), Mul_151)
  698. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: GlobalAveragePool_152
  699. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_153 + Relu_154
  700. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_155
  701. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_156), Mul_157)
  702. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_158 + Add_159
  703. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_160 + PWN(PWN(HardSigmoid_161), Mul_162)
  704. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] REDUCE: ReduceMean_298
  705. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: GlobalAveragePool_165
  706. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: AveragePool_169
  707. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: AveragePool_170
  708. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] COPY: 389 copy
  709. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] COPY: 757 copy
  710. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: AveragePool_171
  711. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_301
  712. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] ELEMENTWISE: Mul_307
  713. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_308
  714. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(Mul_309, Sub_310)
  715. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_163 + Relu_164
  716. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_166
  717. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_167), Mul_168)
  718. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] COPY: Split_172
  719. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] COPY: Split_172_0
  720. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] SLICE: Expand_174
  721. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(990 + (Unnamed Layer* 191) [Shuffle], Sub_184)
  722. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] SLICE: Expand_204
  723. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(990_clone_1 + (Unnamed Layer* 238) [Shuffle], Sub_214)
  724. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] SLICE: Expand_234
  725. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(990_clone_2 + (Unnamed Layer* 285) [Shuffle], Sub_244)
  726. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] SLICE: Expand_264
  727. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(990_clone_3 + (Unnamed Layer* 332) [Shuffle], Sub_274)
  728. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] REDUCE: ReduceMean_296
  729. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] RESIZE: Resize_332
  730. [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] RESIZE: Resize_346
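Sixteen ForeignNode blocks run on DLA and everything between them falls back to the GPU; each DLA-to-GPU transition costs a reformat or copy, so it is worth tallying the split before deciding whether DLA placement pays off. A throwaway tally over a saved copy of this log (the file name is made up):

    dla_blocks = 0
    gpu_layers = 0
    with open("trtexec_build.log") as f:  # hypothetical file name
        for line in f:
            if "[DlaLayer]" in line:
                dla_blocks += 1
            elif "[GpuLayer]" in line:
                gpu_layers += 1
    print(f"DLA ForeignNode blocks: {dla_blocks}, GPU layers: {gpu_layers}")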
  731. [03/02/2023-09:19:46] [E] Error[2]: [eglUtils.cpp::operator()::72] Error Code 2: Internal Error (Assertion (eglCreateStreamKHR) != nullptr failed. )
  732. [03/02/2023-09:19:46] [E] Error[2]: [builder.cpp::buildSerializedNetwork::636] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
  733. [03/02/2023-09:19:46] [E] Engine could not be created from network
  734. [03/02/2023-09:19:46] [E] Building engine failed
  735. [03/02/2023-09:19:46] [E] Failed to create engine from model or file.
  736. [03/02/2023-09:19:46] [E] Engine set up failed
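Note that graph optimization and layer placement all succeeded; the build dies only when engine assembly asks EGL for eglCreateStreamKHR (used by the DLA path) and gets a null pointer back. On Jetson this is commonly reported when the loaded libEGL is not NVIDIA's (for example a Mesa library shadowing it) or the JetPack driver install is incomplete, rather than anything wrong with the network itself. A small probe to check whether the EGL library the loader picks can resolve the extension, offered as a diagnostic sketch rather than a fix:

    import ctypes

    egl = ctypes.CDLL("libEGL.so.1")  # whichever libEGL the dynamic loader finds
    egl.eglGetProcAddress.restype = ctypes.c_void_p
    egl.eglGetProcAddress.argtypes = [ctypes.c_char_p]

    addr = egl.eglGetProcAddress(b"eglCreateStreamKHR")
    print("eglCreateStreamKHR ->",
          hex(addr) if addr else "NULL (matches the assertion failure)")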
  737.  