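What follows appears to be the verbose build log from NVIDIA's `trtexec` tool compiling an ONNX model for a DLA core with GPU fallback enabled. As a hedged sketch only (the model path, core index, and exact flags are assumptions, not recoverable from the paste), a command along the lines of `trtexec --onnx=model.onnx --useDLACore=0 --allowGPUFallback --fp16 --verbose` produces output in this format: each `[W]` line warns about a layer DLA cannot run, and each `[V]` line reports graph optimization and DLA offload detail.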
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Resize_3': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '390': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 2) [Shuffle]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '392': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 5) [Shuffle]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Div_7: DLA cores do not support DIV ElementWise operation.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Div_7': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_9: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_9': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'GlobalAveragePool_30': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_34: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_34': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'GlobalAveragePool_41': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_45: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_45': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'GlobalAveragePool_53': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_57: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_57': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_62: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_62': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_65: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_65': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_69: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_69': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_72: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_72': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_77: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_77': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_80: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_80': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_85: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_85': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_88: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_88': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_93: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_93': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_96: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_96': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'GlobalAveragePool_98': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_102: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_102': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_106: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_106': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_109: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_109': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'GlobalAveragePool_111': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_115: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_115': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_120: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_120': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_123: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_123': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'GlobalAveragePool_125': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_129: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_129': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_133: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_133': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_136: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_136': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'GlobalAveragePool_138': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_142: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_142': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_147: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_147': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_150: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_150': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'GlobalAveragePool_152': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_156: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_156': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] HardSigmoid_161: Activation type: HARD_SIGMOID is not supported on DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'HardSigmoid_161': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'GlobalAveragePool_165': Unsupported on DLA. Switching this layer's device type to GPU.
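The warnings above show every `HardSigmoid` activation falling back to the GPU, since DLA has no HARD_SIGMOID activation type. One hedged workaround (a sketch, not the workflow behind this log; file names are hypothetical) is to rewrite each `HardSigmoid` node as `Clip(alpha * x + beta, 0, 1)` in the ONNX graph, using ops the log later shows DLA accepting (`Mul`, `Clip`). This assumes opset 11+, where `Clip` takes its bounds as inputs:

```python
import numpy as np
import onnx
from onnx import helper, numpy_helper

model = onnx.load("model.onnx")  # hypothetical path
graph = model.graph

new_nodes = []
for node in graph.node:
    if node.op_type != "HardSigmoid":
        new_nodes.append(node)
        continue
    # ONNX defaults alpha=0.2, beta=0.5; PyTorch exports alpha=1/6.
    alpha = next((a.f for a in node.attribute if a.name == "alpha"), 0.2)
    beta = next((a.f for a in node.attribute if a.name == "beta"), 0.5)
    x, y = node.input[0], node.output[0]
    # Register the decomposition constants as initializers.
    for suffix, val in (("_alpha", alpha), ("_beta", beta),
                        ("_lo", 0.0), ("_hi", 1.0)):
        graph.initializer.append(numpy_helper.from_array(
            np.array(val, dtype=np.float32), name=y + suffix))
    # HardSigmoid(x) = Clip(alpha * x + beta, 0, 1)
    new_nodes += [
        helper.make_node("Mul", [x, y + "_alpha"], [y + "_mul"]),
        helper.make_node("Add", [y + "_mul", y + "_beta"], [y + "_add"]),
        helper.make_node("Clip", [y + "_add", y + "_lo", y + "_hi"], [y]),
    ]
del graph.node[:]
graph.node.extend(new_nodes)
onnx.save(model, "model_no_hardsigmoid.onnx")  # hypothetical path
```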
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Shape_173': DLA only supports FP16 and Int8 precision type. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 174) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 175) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 175) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 176) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 177) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 177) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 178) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 178) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 179) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 179) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '990': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 191) [Shuffle]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Shape_191': DLA only supports FP16 and Int8 precision type. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Slice_195: DLA only supports slicing 4 dimensional tensors.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Slice_195': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 200) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 201) [Concatenation]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 201) [Concatenation]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 202) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 203) [Gather]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 204) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 205) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 205) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 206) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 207) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 207) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 208) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 208) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 209) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 209) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 210) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 211) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 211) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 212) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 212) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 213) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 214) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 214) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Shape_203': DLA only supports FP16 and Int8 precision type. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 222) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 223) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 223) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 224) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 225) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 225) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 226) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 226) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 227) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 227) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 238) [Shuffle]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Shape_221': DLA only supports FP16 and Int8 precision type. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Slice_225: DLA only supports slicing 4 dimensional tensors.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Slice_225': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 247) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 248) [Concatenation]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 248) [Concatenation]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 249) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 250) [Gather]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 251) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 252) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 252) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 253) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 254) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 254) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 255) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 255) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 256) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 256) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 257) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 258) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 258) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 259) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 259) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 260) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 261) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 261) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Shape_233': DLA only supports FP16 and Int8 precision type. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 269) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 270) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 270) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 271) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 272) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 272) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 273) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 273) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 274) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 274) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 285) [Shuffle]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Shape_251': DLA only supports FP16 and Int8 precision type. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Slice_255: DLA only supports slicing 4 dimensional tensors.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Slice_255': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 294) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 295) [Concatenation]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 295) [Concatenation]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 296) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 297) [Gather]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 298) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 299) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 299) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 300) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 301) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 301) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 302) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 302) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 303) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 303) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 304) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 305) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 305) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 306) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 306) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 307) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 308) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 308) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Shape_263': DLA only supports FP16 and Int8 precision type. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 316) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 317) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 317) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 318) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 319) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 319) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 320) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 320) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 321) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 321) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 332) [Shuffle]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Shape_281': DLA only supports FP16 and Int8 precision type. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Slice_285: DLA only supports slicing 4 dimensional tensors.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Slice_285': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 341) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 342) [Concatenation]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 342) [Concatenation]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 343) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 344) [Gather]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 345) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 346) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 346) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 347) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 348) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 348) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 349) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 349) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 350) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 350) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 351) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 352) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 352) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 353) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 353) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 354) [Constant]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] (Unnamed Layer* 355) [ElementWise]: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer '(Unnamed Layer* 355) [ElementWise]': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'ReduceMean_296': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'ReduceMean_298': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Shape_321': DLA only supports FP16 and Int8 precision type. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Slice_325: DLA only supports slicing 4 dimensional tensors.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Slice_325': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer '815': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Concat_331: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Concat_331': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Resize_332: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Resize_332': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Shape_335': DLA only supports FP16 and Int8 precision type. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Slice_339: DLA only supports slicing 4 dimensional tensors.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Slice_339': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Concat_345: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Concat_345': Unsupported on DLA. Switching this layer's device type to GPU.
- [03/02/2023-09:19:43] [W] [TRT] Resize_346: Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA.
- [03/02/2023-09:19:43] [W] [TRT] Layer 'Resize_346': Unsupported on DLA. Switching this layer's device type to GPU.
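Many of the warnings above repeat the same fix: "Please explicitly use CAST operator in ONNX model or add an identity layer to convert INT32 to other types for DLA." The INT32 tensors come from the `Shape`/`Slice`/`Gather` subgraphs feeding these elementwise and concatenation ops. A minimal sketch of inserting such a Cast in front of one consumer (the node name below is hypothetical, and whether FP32 is the right target type depends on the subgraph):

```python
import onnx
from onnx import TensorProto, helper

model = onnx.load("model.onnx")  # hypothetical path
graph = model.graph

target = "Add_123"  # hypothetical: an elementwise node flagged with the INT32 warning
for node in graph.node:
    if node.name == target:
        src = node.input[0]
        cast_out = src + "_as_fp32"
        cast = helper.make_node("Cast", [src], [cast_out],
                                name=src + "_cast", to=TensorProto.FLOAT)
        node.input[0] = cast_out
        # Place the Cast directly before its consumer to keep topological order.
        graph.node.insert(list(graph.node).index(node), cast)
        break

onnx.save(model, "model_cast.onnx")  # hypothetical path
```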
- [03/02/2023-09:19:43] [V] [TRT] Applying generic optimizations to the graph for inference.
- [03/02/2023-09:19:43] [V] [TRT] Original: 301 layers
- [03/02/2023-09:19:43] [V] [TRT] After dead-layer removal: 301 layers
- [03/02/2023-09:19:43] [V] [TRT] Running: ConstShuffleFusion on 390
- [03/02/2023-09:19:43] [V] [TRT] ConstShuffleFusion: Fusing 390 with (Unnamed Layer* 2) [Shuffle]
- [03/02/2023-09:19:43] [V] [TRT] Running: ConstShuffleFusion on 392
- [03/02/2023-09:19:43] [V] [TRT] ConstShuffleFusion: Fusing 392 with (Unnamed Layer* 5) [Shuffle]
- [03/02/2023-09:19:43] [V] [TRT] After Myelin optimization: 299 layers
- [03/02/2023-09:19:43] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
- [03/02/2023-09:19:43] [W] [TRT] Validation failed for DLA layer: Sub_5. Switching to GPU fallback.
- [03/02/2023-09:19:43] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
- [03/02/2023-09:19:43] [W] [TRT] Validation failed for DLA layer: Mul_35. Switching to GPU fallback.
- [03/02/2023-09:19:43] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
- [03/02/2023-09:19:43] [W] [TRT] Validation failed for DLA layer: Mul_46. Switching to GPU fallback.
- [03/02/2023-09:19:43] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
- [03/02/2023-09:19:43] [W] [TRT] Validation failed for DLA layer: Mul_58. Switching to GPU fallback.
- [03/02/2023-09:19:43] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
- [03/02/2023-09:19:43] [W] [TRT] Validation failed for DLA layer: Mul_103. Switching to GPU fallback.
- [03/02/2023-09:19:43] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
- [03/02/2023-09:19:43] [W] [TRT] Validation failed for DLA layer: Mul_116. Switching to GPU fallback.
- [03/02/2023-09:19:43] [W] [TRT] Validation failed for DLA layer: Conv_122. Switching to GPU fallback.
- [03/02/2023-09:19:43] [W] [TRT] Validation failed for DLA layer: Conv_122. Switching to GPU fallback.
- [03/02/2023-09:19:43] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
- [03/02/2023-09:19:43] [W] [TRT] Validation failed for DLA layer: Mul_130. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Conv_135. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Conv_135. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Mul_143. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Conv_149. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Conv_149. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Mul_157. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Mul_168. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Mul_168. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] Expand_174: DLA does not support slicing along the batch dimension.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Expand_174. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] Expand_174: DLA does not support slicing along the batch dimension.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Expand_174. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Sub_184. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Sub_184. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] Expand_204: DLA does not support slicing along the batch dimension.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Expand_204. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] Expand_204: DLA does not support slicing along the batch dimension.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Expand_204. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Sub_214. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Sub_214. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] Expand_234: DLA does not support slicing along the batch dimension.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Expand_234. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] Expand_234: DLA does not support slicing along the batch dimension.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Expand_234. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Sub_244. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Sub_244. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] Expand_264: DLA does not support slicing along the batch dimension.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Expand_264. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] Expand_264: DLA does not support slicing along the batch dimension.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Expand_264. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Sub_274. Switching to GPU fallback.
- [03/02/2023-09:19:44] [W] [TRT] DLA only allows inputs of the same dimensions to Elementwise.
- [03/02/2023-09:19:44] [W] [TRT] Validation failed for DLA layer: Sub_274. Switching to GPU fallback.
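Every "Validation failed ... Switching to GPU fallback" line above stays a warning only because GPU fallback was enabled at build time; without it, these layers would fail the build instead. A hedged sketch of the equivalent build through the TensorRT Python API (paths are hypothetical; the settings mirror what the log implies):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:  # hypothetical path
    assert parser.parse(f.read())

config = builder.create_builder_config()
config.default_device_type = trt.DeviceType.DLA  # prefer DLA for every layer
config.DLA_core = 0
config.set_flag(trt.BuilderFlag.FP16)            # DLA requires FP16 or INT8
config.set_flag(trt.BuilderFlag.GPU_FALLBACK)    # allow the fallbacks seen above

engine_bytes = builder.build_serialized_network(network, config)
with open("model_dla.engine", "wb") as f:        # hypothetical path
    f.write(engine_bytes)
```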
- [03/02/2023-09:19:44] [V] [TRT] {ForeignNode[Concat_270...Sub_318]} successfully offloaded to DLA.
- [03/02/2023-09:19:44] [V] [TRT] Memory consumption details:
- [03/02/2023-09:19:44] [V] [TRT] Pool Sizes: Managed SRAM = 0.5 MiB, Local DRAM = 1024 MiB, Global DRAM = 512 MiB
- [03/02/2023-09:19:44] [V] [TRT] Required: Managed SRAM = 0.5 MiB, Local DRAM = 128 MiB, Global DRAM = 4 MiB
- [03/02/2023-09:19:44] [V] [TRT] {ForeignNode[Mul_10...Relu_29]} successfully offloaded to DLA.
- [03/02/2023-09:19:44] [V] [TRT] Memory consumption details:
- [03/02/2023-09:19:44] [V] [TRT] Pool Sizes: Managed SRAM = 0.5 MiB, Local DRAM = 1024 MiB, Global DRAM = 512 MiB
- [03/02/2023-09:19:44] [V] [TRT] Required: Managed SRAM = 0.25 MiB, Local DRAM = 32 MiB, Global DRAM = 4 MiB
- [03/02/2023-09:19:44] [V] [TRT] {ForeignNode[Concat_180...Split_202_10]} successfully offloaded to DLA.
- [03/02/2023-09:19:44] [V] [TRT] Memory consumption details:
- [03/02/2023-09:19:44] [V] [TRT] Pool Sizes: Managed SRAM = 0.5 MiB, Local DRAM = 1024 MiB, Global DRAM = 512 MiB
- [03/02/2023-09:19:44] [V] [TRT] Required: Managed SRAM = 0.125 MiB, Local DRAM = 8 MiB, Global DRAM = 4 MiB
- [03/02/2023-09:19:44] [V] [TRT] {ForeignNode[Concat_210...Split_232_21]} successfully offloaded to DLA.
- [03/02/2023-09:19:44] [V] [TRT] Memory consumption details:
- [03/02/2023-09:19:44] [V] [TRT] Pool Sizes: Managed SRAM = 0.5 MiB, Local DRAM = 1024 MiB, Global DRAM = 512 MiB
- [03/02/2023-09:19:44] [V] [TRT] Required: Managed SRAM = 0.0625 MiB, Local DRAM = 16 MiB, Global DRAM = 4 MiB
- [03/02/2023-09:19:44] [V] [TRT] {ForeignNode[Concat_240...Split_262_32]} successfully offloaded to DLA.
- [03/02/2023-09:19:44] [V] [TRT] Memory consumption details:
- [03/02/2023-09:19:44] [V] [TRT] Pool Sizes: Managed SRAM = 0.5 MiB, Local DRAM = 1024 MiB, Global DRAM = 512 MiB
- [03/02/2023-09:19:44] [V] [TRT] Required: Managed SRAM = 0.03125 MiB, Local DRAM = 32 MiB, Global DRAM = 4 MiB
- [03/02/2023-09:19:44] [W] [TRT] {ForeignNode[AveragePool_169...Sigmoid_167]} cannot be compiled by DLA, falling back to GPU.
- [03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Concat_297...Clip_352]} successfully offloaded to DLA.
- [03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
- [03/02/2023-09:19:46] [V] [TRT] Pool Sizes: Managed SRAM = 0.5 MiB, Local DRAM = 1024 MiB, Global DRAM = 512 MiB
- [03/02/2023-09:19:46] [V] [TRT] Required: Managed SRAM = 0.015625 MiB, Local DRAM = 512 MiB, Global DRAM = 4 MiB
- [03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Conv_47...Relu_52]} successfully offloaded to DLA.
- [03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
- [03/02/2023-09:19:46] [V] [TRT] Pool Sizes: Managed SRAM = 0.5 MiB, Local DRAM = 1024 MiB, Global DRAM = 512 MiB
- [03/02/2023-09:19:46] [V] [TRT] Required: Managed SRAM = 0.0078125 MiB, Local DRAM = 8 MiB, Global DRAM = 4 MiB
- [03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Concat_175...Mul_179]} successfully offloaded to DLA.
- [03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
- [03/02/2023-09:19:46] [V] [TRT] Pool Sizes: Managed SRAM = 0.5 MiB, Local DRAM = 1024 MiB, Global DRAM = 512 MiB
- [03/02/2023-09:19:46] [V] [TRT] Required: Managed SRAM = 0.00390625 MiB, Local DRAM = 4 MiB, Global DRAM = 4 MiB
- [03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Concat_205...Mul_209]} successfully offloaded to DLA.
- [03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
- [03/02/2023-09:19:46] [V] [TRT] Pool Sizes: Managed SRAM = 0.5 MiB, Local DRAM = 1024 MiB, Global DRAM = 512 MiB
- [03/02/2023-09:19:46] [V] [TRT] Required: Managed SRAM = 0 MiB, Local DRAM = 8 MiB, Global DRAM = 4 MiB
- [03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Concat_235...Mul_239]} successfully offloaded to DLA.
- [03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
- [03/02/2023-09:19:46] [V] [TRT] Pool Sizes: Managed SRAM = 0.5 MiB, Local DRAM = 1024 MiB, Global DRAM = 512 MiB
- [03/02/2023-09:19:46] [V] [TRT] Required: Managed SRAM = 0 MiB, Local DRAM = 8 MiB, Global DRAM = 4 MiB
- [03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Concat_265...Mul_269]} successfully offloaded to DLA.
- [03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
- [03/02/2023-09:19:46] [V] [TRT] Pool Sizes: Managed SRAM = 0.5 MiB, Local DRAM = 1024 MiB, Global DRAM = 512 MiB
- [03/02/2023-09:19:46] [V] [TRT] Required: Managed SRAM = 0 MiB, Local DRAM = 32 MiB, Global DRAM = 4 MiB
- [03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Conv_36...Relu_40]} successfully offloaded to DLA.
- [03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
- [03/02/2023-09:19:46] [V] [TRT] Pool Sizes: Managed SRAM = 0.5 MiB, Local DRAM = 1024 MiB, Global DRAM = 512 MiB
- [03/02/2023-09:19:46] [V] [TRT] Required: Managed SRAM = 0 MiB, Local DRAM = 8 MiB, Global DRAM = 4 MiB
- [03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Mul_73...Conv_76]} successfully offloaded to DLA.
- [03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
- [03/02/2023-09:19:46] [V] [TRT] Pool Sizes: Managed SRAM = 0.5 MiB, Local DRAM = 1024 MiB, Global DRAM = 512 MiB
- [03/02/2023-09:19:46] [V] [TRT] Required: Managed SRAM = 0 MiB, Local DRAM = 4 MiB, Global DRAM = 4 MiB
- [03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Mul_81...Conv_84]} successfully offloaded to DLA.
- [03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
- [03/02/2023-09:19:46] [V] [TRT] Pool Sizes: Managed SRAM = 0.5 MiB, Local DRAM = 1024 MiB, Global DRAM = 512 MiB
- [03/02/2023-09:19:46] [V] [TRT] Required: Managed SRAM = 0 MiB, Local DRAM = 4 MiB, Global DRAM = 4 MiB
- [03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Mul_89...Conv_92]} successfully offloaded to DLA.
- [03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
- [03/02/2023-09:19:46] [V] [TRT] Pool Sizes: Managed SRAM = 0.5 MiB, Local DRAM = 1024 MiB, Global DRAM = 512 MiB
- [03/02/2023-09:19:46] [V] [TRT] Required: Managed SRAM = 0 MiB, Local DRAM = 4 MiB, Global DRAM = 4 MiB
- [03/02/2023-09:19:46] [V] [TRT] {ForeignNode[Conv_31...Conv_33]} successfully offloaded to DLA.
- [03/02/2023-09:19:46] [V] [TRT] Memory consumption details:
- [03/02/2023-09:19:46] [V] [TRT] Pool Sizes: Managed SRAM = 0.5 MiB, Local DRAM = 1024 MiB, Global DRAM = 512 MiB
- [03/02/2023-09:19:46] [V] [TRT] Required: Managed SRAM = 0 MiB, Local DRAM = 4 MiB, Global DRAM = 4 MiB
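The "Pool Sizes" lines above report the DLA memory pools the builder worked within: 0.5 MiB managed SRAM, 1024 MiB local DRAM, 512 MiB global DRAM. These limits are configurable; a sketch under the assumption of TensorRT 8.4 or newer, where `set_memory_pool_limit` exists (`config` being the builder config from the previous sketch):

```python
import tensorrt as trt

def cap_dla_pools(config: trt.IBuilderConfig) -> None:
    # Mirror the pool sizes this log reports.
    config.set_memory_pool_limit(trt.MemoryPoolType.DLA_MANAGED_SRAM, 512 * 1024)  # 0.5 MiB
    config.set_memory_pool_limit(trt.MemoryPoolType.DLA_LOCAL_DRAM, 1024 << 20)    # 1024 MiB
    config.set_memory_pool_limit(trt.MemoryPoolType.DLA_GLOBAL_DRAM, 512 << 20)    # 512 MiB
```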
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_42
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_54
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_59
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_66
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_99
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_112
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_117
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_126
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_139
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_144
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_153
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_158
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_63
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_70
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_78
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_86
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_94
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_104
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_107
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_131
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Split_172
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Conv_8
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_97
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_110
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_121
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_124
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_134
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_137
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_148
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_151
- [03/02/2023-09:19:46] [W] [TRT] DLA supports only 16 subgraphs per DLA core. Switching to GPU for layer Mul_162
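The block above shows the builder hitting the per-core limit of 16 DLA-loadable subgraphs, after which remaining candidates are pushed to the GPU regardless of support. One hedged mitigation (a sketch; `network` and `config` are the objects from the earlier build sketch) is to query DLA support per layer and pin unsupported layers to the GPU explicitly, which can keep the DLA partitions from fragmenting around them:

```python
import tensorrt as trt

def assign_devices(network: trt.INetworkDefinition,
                   config: trt.IBuilderConfig) -> None:
    # Pin each layer's device up front instead of letting the partitioner
    # discover unsupported layers mid-graph.
    for i in range(network.num_layers):
        layer = network.get_layer(i)
        if config.can_run_on_DLA(layer):
            config.set_device_type(layer, trt.DeviceType.DLA)
        else:
            config.set_device_type(layer, trt.DeviceType.GPU)
```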
- [03/02/2023-09:19:46] [V] [TRT] DLA Memory Consumption Summary:
- [03/02/2023-09:19:46] [V] [TRT] Number of DLA node candidates offloaded : 16 out of 48
- [03/02/2023-09:19:46] [V] [TRT] Total memory required by accepted candidates : Managed SRAM = 0.99609375 MiB, Local DRAM = 812 MiB, Global DRAM = 64 MiB
- [03/02/2023-09:19:46] [V] [TRT] After DLA optimization: 186 layers
- [03/02/2023-09:19:46] [V] [TRT] Applying ScaleNodes fusions.
- [03/02/2023-09:19:46] [V] [TRT] After scale fusion: 186 layers
- [03/02/2023-09:19:46] [V] [TRT] Running: ReduceToPoolingFusion on GlobalAveragePool_30
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of GlobalAveragePool_30 from REDUCE to POOLING
- [03/02/2023-09:19:46] [V] [TRT] Running: ReduceToPoolingFusion on GlobalAveragePool_41
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of GlobalAveragePool_41 from REDUCE to POOLING
- [03/02/2023-09:19:46] [V] [TRT] Running: ConvReluFusion on Conv_42
- [03/02/2023-09:19:46] [V] [TRT] ConvReluFusion: Fusing Conv_42 with Relu_43
- [03/02/2023-09:19:46] [V] [TRT] Running: ReduceToPoolingFusion on GlobalAveragePool_53
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of GlobalAveragePool_53 from REDUCE to POOLING
- [03/02/2023-09:19:46] [V] [TRT] Running: ConvReluFusion on Conv_54
- [03/02/2023-09:19:46] [V] [TRT] ConvReluFusion: Fusing Conv_54 with Relu_55
- [03/02/2023-09:19:46] [V] [TRT] Running: ConvEltwiseSumFusion on Conv_59
- [03/02/2023-09:19:46] [V] [TRT] ConvEltwiseSumFusion: Fusing Conv_59 with Add_60
- [03/02/2023-09:19:46] [V] [TRT] Running: ReduceToPoolingFusion on GlobalAveragePool_98
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of GlobalAveragePool_98 from REDUCE to POOLING
- [03/02/2023-09:19:46] [V] [TRT] Running: ConvReluFusion on Conv_99
- [03/02/2023-09:19:46] [V] [TRT] ConvReluFusion: Fusing Conv_99 with Relu_100
- [03/02/2023-09:19:46] [V] [TRT] Running: ReduceToPoolingFusion on GlobalAveragePool_111
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of GlobalAveragePool_111 from REDUCE to POOLING
- [03/02/2023-09:19:46] [V] [TRT] Running: ConvReluFusion on Conv_112
- [03/02/2023-09:19:46] [V] [TRT] ConvReluFusion: Fusing Conv_112 with Relu_113
- [03/02/2023-09:19:46] [V] [TRT] Running: ConvEltwiseSumFusion on Conv_117
- [03/02/2023-09:19:46] [V] [TRT] ConvEltwiseSumFusion: Fusing Conv_117 with Add_118
- [03/02/2023-09:19:46] [V] [TRT] Running: ReduceToPoolingFusion on GlobalAveragePool_125
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of GlobalAveragePool_125 from REDUCE to POOLING
- [03/02/2023-09:19:46] [V] [TRT] Running: ConvReluFusion on Conv_126
- [03/02/2023-09:19:46] [V] [TRT] ConvReluFusion: Fusing Conv_126 with Relu_127
- [03/02/2023-09:19:46] [V] [TRT] Running: ReduceToPoolingFusion on GlobalAveragePool_138
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of GlobalAveragePool_138 from REDUCE to POOLING
- [03/02/2023-09:19:46] [V] [TRT] Running: ConvReluFusion on Conv_139
- [03/02/2023-09:19:46] [V] [TRT] ConvReluFusion: Fusing Conv_139 with Relu_140
- [03/02/2023-09:19:46] [V] [TRT] Running: ConvEltwiseSumFusion on Conv_144
- [03/02/2023-09:19:46] [V] [TRT] ConvEltwiseSumFusion: Fusing Conv_144 with Add_145
- [03/02/2023-09:19:46] [V] [TRT] Running: ReduceToPoolingFusion on GlobalAveragePool_152
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of GlobalAveragePool_152 from REDUCE to POOLING
- [03/02/2023-09:19:46] [V] [TRT] Running: ConvReluFusion on Conv_153
- [03/02/2023-09:19:46] [V] [TRT] ConvReluFusion: Fusing Conv_153 with Relu_154
- [03/02/2023-09:19:46] [V] [TRT] Running: ConvEltwiseSumFusion on Conv_158
- [03/02/2023-09:19:46] [V] [TRT] ConvEltwiseSumFusion: Fusing Conv_158 with Add_159
- [03/02/2023-09:19:46] [V] [TRT] Running: ConstantSplit on 990
- [03/02/2023-09:19:46] [V] [TRT] Running: ConstShuffleFusion on 990
- [03/02/2023-09:19:46] [V] [TRT] ConstShuffleFusion: Fusing 990 with (Unnamed Layer* 191) [Shuffle]
- [03/02/2023-09:19:46] [V] [TRT] Running: ConstShuffleFusion on 990_clone_1
- [03/02/2023-09:19:46] [V] [TRT] ConstShuffleFusion: Fusing 990_clone_1 with (Unnamed Layer* 238) [Shuffle]
- [03/02/2023-09:19:46] [V] [TRT] Running: ConstShuffleFusion on 990_clone_2
- [03/02/2023-09:19:46] [V] [TRT] ConstShuffleFusion: Fusing 990_clone_2 with (Unnamed Layer* 285) [Shuffle]
- [03/02/2023-09:19:46] [V] [TRT] Running: ConstShuffleFusion on 990_clone_3
- [03/02/2023-09:19:46] [V] [TRT] ConstShuffleFusion: Fusing 990_clone_3 with (Unnamed Layer* 332) [Shuffle]
- [03/02/2023-09:19:46] [V] [TRT] Running: ReduceToPoolingFusion on GlobalAveragePool_165
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of GlobalAveragePool_165 from REDUCE to POOLING
- [03/02/2023-09:19:46] [V] [TRT] Running: ConvReluFusion on Conv_163
- [03/02/2023-09:19:46] [V] [TRT] ConvReluFusion: Fusing Conv_163 with Relu_164
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_9
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_9 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_34
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_34 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_45
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_45 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_57
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_57 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_62
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_62 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_65
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_65 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_69
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_69 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_72
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_72 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_77
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_77 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_80
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_80 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_85
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_85 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_88
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_88 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_93
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_93 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_96
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_96 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_102
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_102 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_106
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_106 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_109
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_109 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_115
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_115 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_120
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_120 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_123
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_123 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_129
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_129 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_133
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_133 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_136
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_136 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_142
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_142 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_147
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_147 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_150
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_150 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_156
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_156 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on HardSigmoid_161
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of HardSigmoid_161 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] Running: ActivationToPointwiseConversion on Sigmoid_167
- [03/02/2023-09:19:46] [V] [TRT] Swap the layer type of Sigmoid_167 from ACTIVATION to POINTWISE
- [03/02/2023-09:19:46] [V] [TRT] After dupe layer removal: 173 layers
- [03/02/2023-09:19:46] [V] [TRT] After final dead-layer removal: 173 layers
- [03/02/2023-09:19:46] [V] [TRT] After tensor merging: 173 layers
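The run of PointWiseFusion passes that follows folds each HardSigmoid (just converted to POINTWISE above) into the Mul that consumes it. That HardSigmoid-then-Mul pair is the hard-swish activation used by MobileNetV3-style networks: x * hard_sigmoid(x). A minimal sketch of what the fused PWN kernel computes (illustrative NumPy, assuming the PyTorch ONNX export convention alpha = 1/6, beta = 0.5; the ONNX default alpha is 0.2):

    import numpy as np

    def hard_sigmoid(x, alpha=1.0 / 6.0, beta=0.5):
        # ONNX HardSigmoid: clip(alpha * x + beta, 0, 1)
        return np.clip(alpha * x + beta, 0.0, 1.0)

    def fused_pwn(x):
        # PWN(PWN(HardSigmoid_N), Mul_{N+1}) collapses to one pointwise kernel
        return x * hard_sigmoid(x)  # hard-swish when alpha = 1/6, beta = 0.5

    x = np.linspace(-4, 4, 9, dtype=np.float32)
    print(fused_pwn(x))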
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on Sub_5
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing Sub_5 with Div_7
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_34)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_34) with Mul_35
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_45)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_45) with Mul_46
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_57)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_57) with Mul_58
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_62)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_62) with Mul_63
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_65)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_65) with Mul_66
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_69)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_69) with Mul_70
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_77)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_77) with Mul_78
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_85)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_85) with Mul_86
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_93)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_93) with Mul_94
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_96)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_96) with Mul_97
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_102)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_102) with Mul_103
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_106)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_106) with Mul_107
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_109)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_109) with Mul_110
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_115)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_115) with Mul_116
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_120)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_120) with Mul_121
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_123)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_123) with Mul_124
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_129)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_129) with Mul_130
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_133)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_133) with Mul_134
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_136)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_136) with Mul_137
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_142)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_142) with Mul_143
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_147)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_147) with Mul_148
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_150)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_150) with Mul_151
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_156)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_156) with Mul_157
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(HardSigmoid_161)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(HardSigmoid_161) with Mul_162
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on 990 + (Unnamed Layer* 191) [Shuffle]
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing 990 + (Unnamed Layer* 191) [Shuffle] with Sub_184
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on 990_clone_1 + (Unnamed Layer* 238) [Shuffle]
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing 990_clone_1 + (Unnamed Layer* 238) [Shuffle] with Sub_214
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on 990_clone_2 + (Unnamed Layer* 285) [Shuffle]
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing 990_clone_2 + (Unnamed Layer* 285) [Shuffle] with Sub_244
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on 990_clone_3 + (Unnamed Layer* 332) [Shuffle]
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing 990_clone_3 + (Unnamed Layer* 332) [Shuffle] with Sub_274
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on Mul_309
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing Mul_309 with Sub_310
- [03/02/2023-09:19:46] [V] [TRT] Running: PointWiseFusion on PWN(Sigmoid_167)
- [03/02/2023-09:19:46] [V] [TRT] PointWiseFusion: Fusing PWN(Sigmoid_167) with Mul_168
- [03/02/2023-09:19:46] [V] [TRT] Running: GenericConvActFusion on Conv_61
- [03/02/2023-09:19:46] [V] [TRT] GenericConvActFusion: Fusing Conv_61 with PWN(PWN(HardSigmoid_62), Mul_63)
- [03/02/2023-09:19:46] [V] [TRT] Running: GenericConvActFusion on Conv_68
- [03/02/2023-09:19:46] [V] [TRT] GenericConvActFusion: Fusing Conv_68 with PWN(PWN(HardSigmoid_69), Mul_70)
- [03/02/2023-09:19:46] [V] [TRT] Running: GenericConvActFusion on Conv_105
- [03/02/2023-09:19:46] [V] [TRT] GenericConvActFusion: Fusing Conv_105 with PWN(PWN(HardSigmoid_106), Mul_107)
- [03/02/2023-09:19:46] [V] [TRT] Running: GenericConvActFusion on Conv_119
- [03/02/2023-09:19:46] [V] [TRT] GenericConvActFusion: Fusing Conv_119 with PWN(PWN(HardSigmoid_120), Mul_121)
- [03/02/2023-09:19:46] [V] [TRT] Running: GenericConvActFusion on Conv_132
- [03/02/2023-09:19:46] [V] [TRT] GenericConvActFusion: Fusing Conv_132 with PWN(PWN(HardSigmoid_133), Mul_134)
- [03/02/2023-09:19:46] [V] [TRT] Running: GenericConvActFusion on Conv_146
- [03/02/2023-09:19:46] [V] [TRT] GenericConvActFusion: Fusing Conv_146 with PWN(PWN(HardSigmoid_147), Mul_148)
- [03/02/2023-09:19:46] [V] [TRT] Running: GenericConvActFusion on Conv_160
- [03/02/2023-09:19:46] [V] [TRT] GenericConvActFusion: Fusing Conv_160 with PWN(PWN(HardSigmoid_161), Mul_162)
- [03/02/2023-09:19:46] [V] [TRT] After vertical fusions: 135 layers
- [03/02/2023-09:19:46] [V] [TRT] After dupe layer removal: 135 layers
- [03/02/2023-09:19:46] [V] [TRT] After final dead-layer removal: 135 layers
- [03/02/2023-09:19:46] [V] [TRT] After tensor merging: 135 layers
- [03/02/2023-09:19:46] [V] [TRT] Replacing slice Split_172 with copy from 601 to 605
- [03/02/2023-09:19:46] [V] [TRT] Replacing slice Split_172_0 with copy from 601 to 606
- [03/02/2023-09:19:46] [V] [TRT] After slice removal: 135 layers
- [03/02/2023-09:19:46] [V] [TRT] Eliminating concatenation Concat_299
- [03/02/2023-09:19:46] [V] [TRT] Generating copy for 389 to 758 because input does not support striding.
- [03/02/2023-09:19:46] [V] [TRT] Generating copy for 757 to 758 because input does not support striding.
- [03/02/2023-09:19:46] [V] [TRT] After concat removal: 136 layers
- [03/02/2023-09:19:46] [V] [TRT] Trying to split Reshape and strided tensor
- [03/02/2023-09:19:46] [V] [TRT] Graph construction and optimization completed in 2.73098 seconds.
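The placement summary below is the direct consequence of the earlier "Unsupported on DLA. Switching this layer's device type to GPU" warnings: contiguous DLA-capable spans are handed to the DLA compiler as single ForeignNode subgraphs, and everything else falls back to the GPU. That fallback only happens when the build enables it. A hedged sketch of the equivalent TensorRT Python builder configuration (the ONNX path and DLA core index are assumptions; the log does not state them):

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.VERBOSE)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open("model.onnx", "rb") as f:   # hypothetical model path
        parser.parse(f.read())

    config = builder.create_builder_config()
    config.default_device_type = trt.DeviceType.DLA
    config.DLA_core = 0                    # assumed core index
    config.set_flag(trt.BuilderFlag.FP16)  # DLA runs FP16/INT8, not FP32
    config.set_flag(trt.BuilderFlag.GPU_FALLBACK)  # allow unsupported layers on GPU

    engine_bytes = builder.build_serialized_network(network, config)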
- [03/02/2023-09:19:46] [I] [TRT] ---------- Layers Running on DLA ----------
- [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Mul_10...Relu_29]}
- [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Conv_31...Conv_33]}
- [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Conv_36...Relu_40]}
- [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Conv_47...Relu_52]}
- [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Mul_73...Conv_76]}
- [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Mul_81...Conv_84]}
- [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Mul_89...Conv_92]}
- [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Concat_175...Mul_179]}
- [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Concat_180...Split_202_10]}
- [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Concat_205...Mul_209]}
- [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Concat_210...Split_232_21]}
- [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Concat_235...Mul_239]}
- [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Concat_240...Split_262_32]}
- [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Concat_265...Mul_269]}
- [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Concat_270...Sub_318]}
- [03/02/2023-09:19:46] [I] [TRT] [DlaLayer] {ForeignNode[Concat_297...Clip_352]}
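Each [DlaLayer] entry above is one fused subgraph, named by its first and last original nodes. When a build of this kind succeeds, the final placement can be confirmed from the serialized engine itself; a hedged sketch using the engine inspector (requires TensorRT >= 8.4; the engine filename is an assumption):

    import tensorrt as trt

    runtime = trt.Runtime(trt.Logger(trt.Logger.WARNING))
    with open("model.engine", "rb") as f:  # hypothetical engine path
        engine = runtime.deserialize_cuda_engine(f.read())

    # Dump per-layer information, including device placement, as JSON
    inspector = engine.create_engine_inspector()
    print(inspector.get_engine_information(trt.LayerInformationFormat.JSON))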
- [03/02/2023-09:19:46] [I] [TRT] ---------- Layers Running on GPU ----------
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] COPY: Reformatting CopyNode for Network Input src
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONSTANT: 390 + (Unnamed Layer* 2) [Shuffle]
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] RESIZE: Resize_3
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONSTANT: 392 + (Unnamed Layer* 5) [Shuffle]
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(Sub_5, Div_7)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_8
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(HardSigmoid_9)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: GlobalAveragePool_30
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_34), Mul_35)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: GlobalAveragePool_41
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_42 + Relu_43
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_44
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_45), Mul_46)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: GlobalAveragePool_53
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_54 + Relu_55
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_56
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_57), Mul_58)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_59 + Add_60
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_61 + PWN(PWN(HardSigmoid_62), Mul_63)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_64
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_65), Mul_66)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_67
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_68 + PWN(PWN(HardSigmoid_69), Mul_70)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_71
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(HardSigmoid_72)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_77), Mul_78)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_79
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(HardSigmoid_80)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_85), Mul_86)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_87
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(HardSigmoid_88)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_93), Mul_94)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_95
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_96), Mul_97)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: GlobalAveragePool_98
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_99 + Relu_100
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_101
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_102), Mul_103)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_104
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_105 + PWN(PWN(HardSigmoid_106), Mul_107)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_108
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_109), Mul_110)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: GlobalAveragePool_111
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_112 + Relu_113
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_114
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_115), Mul_116)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_117 + Add_118
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_119 + PWN(PWN(HardSigmoid_120), Mul_121)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_122
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_123), Mul_124)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: GlobalAveragePool_125
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_126 + Relu_127
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_128
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_129), Mul_130)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_131
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_132 + PWN(PWN(HardSigmoid_133), Mul_134)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_135
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_136), Mul_137)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: GlobalAveragePool_138
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_139 + Relu_140
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_141
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_142), Mul_143)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_144 + Add_145
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_146 + PWN(PWN(HardSigmoid_147), Mul_148)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_149
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_150), Mul_151)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: GlobalAveragePool_152
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_153 + Relu_154
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_155
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(HardSigmoid_156), Mul_157)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_158 + Add_159
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_160 + PWN(PWN(HardSigmoid_161), Mul_162)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] REDUCE: ReduceMean_298
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: GlobalAveragePool_165
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: AveragePool_169
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: AveragePool_170
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] COPY: 389 copy
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] COPY: 757 copy
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POOLING: AveragePool_171
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_301
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] ELEMENTWISE: Mul_307
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_308
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(Mul_309, Sub_310)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_163 + Relu_164
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] CONVOLUTION: Conv_166
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_167), Mul_168)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] COPY: Split_172
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] COPY: Split_172_0
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] SLICE: Expand_174
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(990 + (Unnamed Layer* 191) [Shuffle], Sub_184)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] SLICE: Expand_204
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(990_clone_1 + (Unnamed Layer* 238) [Shuffle], Sub_214)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] SLICE: Expand_234
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(990_clone_2 + (Unnamed Layer* 285) [Shuffle], Sub_244)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] SLICE: Expand_264
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] POINTWISE: PWN(990_clone_3 + (Unnamed Layer* 332) [Shuffle], Sub_274)
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] REDUCE: ReduceMean_296
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] RESIZE: Resize_332
- [03/02/2023-09:19:46] [I] [TRT] [GpuLayer] RESIZE: Resize_346
- [03/02/2023-09:19:46] [E] Error[2]: [eglUtils.cpp::operator()::72] Error Code 2: Internal Error (Assertion (eglCreateStreamKHR) != nullptr failed. )
- [03/02/2023-09:19:46] [E] Error[2]: [builder.cpp::buildSerializedNetwork::636] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
- [03/02/2023-09:19:46] [E] Engine could not be created from network
- [03/02/2023-09:19:46] [E] Building engine failed
- [03/02/2023-09:19:46] [E] Failed to create engine from model or file.
- [03/02/2023-09:19:46] [E] Engine set up failed
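Note that graph construction and optimization completed cleanly; the build dies only when TensorRT tries to resolve the EGL entry point eglCreateStreamKHR, which it appears to need for GPU-DLA buffer transfers on Jetson. The null engine pointer and the cascade of errors above all follow from that one assertion. This usually means libEGL (or its KHR stream extension) is not visible to the process, e.g. a container missing the L4T EGL libraries or not started with the NVIDIA runtime. A quick, hedged diagnostic sketch (assuming the usual libEGL.so.1 soname):

    import ctypes

    # Can this process see libEGL and the KHR stream entry point?
    egl = ctypes.CDLL("libEGL.so.1")  # OSError here => EGL not installed/mounted
    egl.eglGetProcAddress.restype = ctypes.c_void_p
    addr = egl.eglGetProcAddress(b"eglCreateStreamKHR")
    print("eglCreateStreamKHR:", hex(addr) if addr else "NOT FOUND")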