- &&&& RUNNING TensorRT.trtexec [TensorRT v8501] # C:/Program Files (x86)/SVP 4/rife\vsmlrt-cuda\trtexec --onnx=C:/Program Files (x86)/SVP 4/rife\models\rife\rife_v4.6.onnx --timingCacheFile=C:\Users\Loserfailure\AppData\Roaming\SVP4\cache\Program Files (x86)/SVP 4/rife\models\rife\rife_v4.6.onnx.3840x2144_fp16_trt-8502_cudnn_I-fp16_O-fp16_NVIDIA-GeForce-RTX-4090_3dcbe72f.engine.cache --device=0 --saveEngine=C:\Users\Loserfailure\AppData\Roaming\SVP4\cache\Program Files (x86)/SVP 4/rife\models\rife\rife_v4.6.onnx.3840x2144_fp16_trt-8502_cudnn_I-fp16_O-fp16_NVIDIA-GeForce-RTX-4090_3dcbe72f.engine --shapes=input:1x11x2144x3840 --fp16 --tacticSources=-CUBLAS,-CUBLAS_LT --useCudaGraph --noDataTransfers --inputIOFormats=fp16:chw --outputIOFormats=fp16:chw
- [10/12/2023-16:01:07] [I] === Model Options ===
- [10/12/2023-16:01:07] [I] Format: ONNX
- [10/12/2023-16:01:07] [I] Model: C:/Program Files (x86)/SVP 4/rife\models\rife\rife_v4.6.onnx
- [10/12/2023-16:01:07] [I] Output:
- [10/12/2023-16:01:07] [I] === Build Options ===
- [10/12/2023-16:01:07] [I] Max batch: explicit batch
- [10/12/2023-16:01:07] [I] Memory Pools: workspace: default, dlaSRAM: default, dlaLocalDRAM: default, dlaGlobalDRAM: default
- [10/12/2023-16:01:07] [I] minTiming: 1
- [10/12/2023-16:01:07] [I] avgTiming: 8
- [10/12/2023-16:01:07] [I] Precision: FP32+FP16
- [10/12/2023-16:01:07] [I] LayerPrecisions:
- [10/12/2023-16:01:07] [I] Calibration:
- [10/12/2023-16:01:07] [I] Refit: Disabled
- [10/12/2023-16:01:07] [I] Sparsity: Disabled
- [10/12/2023-16:01:07] [I] Safe mode: Disabled
- [10/12/2023-16:01:07] [I] DirectIO mode: Disabled
- [10/12/2023-16:01:07] [I] Restricted mode: Disabled
- [10/12/2023-16:01:07] [I] Build only: Disabled
- [10/12/2023-16:01:07] [I] Save engine: C:\Users\Loserfailure\AppData\Roaming\SVP4\cache\Program Files (x86)/SVP 4/rife\models\rife\rife_v4.6.onnx.3840x2144_fp16_trt-8502_cudnn_I-fp16_O-fp16_NVIDIA-GeForce-RTX-4090_3dcbe72f.engine
- [10/12/2023-16:01:07] [I] Load engine:
- [10/12/2023-16:01:07] [I] Profiling verbosity: 0
- [10/12/2023-16:01:07] [I] Tactic sources: cublas [OFF], cublasLt [OFF],
- [10/12/2023-16:01:07] [I] timingCacheMode: global
- [10/12/2023-16:01:07] [I] timingCacheFile: C:\Users\Loserfailure\AppData\Roaming\SVP4\cache\Program Files (x86)/SVP 4/rife\models\rife\rife_v4.6.onnx.3840x2144_fp16_trt-8502_cudnn_I-fp16_O-fp16_NVIDIA-GeForce-RTX-4090_3dcbe72f.engine.cache
- [10/12/2023-16:01:07] [I] Heuristic: Disabled
- [10/12/2023-16:01:07] [I] Preview Features: Use default preview flags.
- [10/12/2023-16:01:07] [I] Input(s): fp16:chw
- [10/12/2023-16:01:07] [I] Output(s): fp16:chw
- [10/12/2023-16:01:07] [I] Input build shape: input=1x11x2144x3840+1x11x2144x3840+1x11x2144x3840
- [10/12/2023-16:01:07] [I] Input calibration shapes: model
- [10/12/2023-16:01:07] [I] === System Options ===
- [10/12/2023-16:01:07] [I] Device: 0
- [10/12/2023-16:01:07] [I] DLACore:
- [10/12/2023-16:01:07] [I] Plugins:
- [10/12/2023-16:01:07] [I] === Inference Options ===
- [10/12/2023-16:01:07] [I] Batch: Explicit
- [10/12/2023-16:01:07] [I] Input inference shape: input=1x11x2144x3840
- [10/12/2023-16:01:07] [I] Iterations: 10
- [10/12/2023-16:01:07] [I] Duration: 3s (+ 200ms warm up)
- [10/12/2023-16:01:07] [I] Sleep time: 0ms
- [10/12/2023-16:01:07] [I] Idle time: 0ms
- [10/12/2023-16:01:07] [I] Streams: 1
- [10/12/2023-16:01:07] [I] ExposeDMA: Disabled
- [10/12/2023-16:01:07] [I] Data transfers: Disabled
- [10/12/2023-16:01:07] [I] Spin-wait: Disabled
- [10/12/2023-16:01:07] [I] Multithreading: Disabled
- [10/12/2023-16:01:07] [I] CUDA Graph: Enabled
- [10/12/2023-16:01:07] [I] Separate profiling: Disabled
- [10/12/2023-16:01:07] [I] Time Deserialize: Disabled
- [10/12/2023-16:01:07] [I] Time Refit: Disabled
- [10/12/2023-16:01:07] [I] NVTX verbosity: 0
- [10/12/2023-16:01:07] [I] Persistent Cache Ratio: 0
- [10/12/2023-16:01:07] [I] Inputs:
- [10/12/2023-16:01:07] [I] === Reporting Options ===
- [10/12/2023-16:01:07] [I] Verbose: Disabled
- [10/12/2023-16:01:07] [I] Averages: 10 inferences
- [10/12/2023-16:01:07] [I] Percentiles: 90,95,99
- [10/12/2023-16:01:07] [I] Dump refittable layers: Disabled
- [10/12/2023-16:01:07] [I] Dump output: Disabled
- [10/12/2023-16:01:07] [I] Profile: Disabled
- [10/12/2023-16:01:07] [I] Export timing to JSON file:
- [10/12/2023-16:01:07] [I] Export output to JSON file:
- [10/12/2023-16:01:07] [I] Export profile to JSON file:
- [10/12/2023-16:01:07] [I]
- [10/12/2023-16:01:07] [I] === Device Information ===
- [10/12/2023-16:01:07] [I] Selected Device: NVIDIA GeForce RTX 4090
- [10/12/2023-16:01:07] [I] Compute Capability: 8.9
- [10/12/2023-16:01:07] [I] SMs: 128
- [10/12/2023-16:01:07] [I] Compute Clock Rate: 2.55 GHz
- [10/12/2023-16:01:07] [I] Device Global Memory: 24563 MiB
- [10/12/2023-16:01:07] [I] Shared Memory per SM: 100 KiB
- [10/12/2023-16:01:07] [I] Memory Bus Width: 384 bits (ECC disabled)
- [10/12/2023-16:01:07] [I] Memory Clock Rate: 10.501 GHz
- [10/12/2023-16:01:07] [I]
- [10/12/2023-16:01:07] [I] TensorRT version: 8.5.1
- [10/12/2023-16:01:08] [I] [TRT] [MemUsageChange] Init CUDA: CPU +475, GPU +0, now: CPU 15323, GPU 1783 (MiB)
- [10/12/2023-16:01:09] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +504, GPU +116, now: CPU 16313, GPU 1899 (MiB)
- [10/12/2023-16:01:09] [W] [TRT] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
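
Note on the warning above: CUDA lazy module loading (CUDA 11.7+) is controlled by the CUDA_MODULE_LOADING environment variable, not by any trtexec flag, so for the trtexec process that SVP launches the variable has to be present in that process's environment. If you build or load the engine from Python yourself instead, it must be set before anything initializes CUDA; a minimal sketch, assuming only the standard tensorrt Python bindings:

    import os

    # Must be set before the first CUDA initialization in this process,
    # i.e. before importing/using tensorrt, torch, cupy, etc.
    os.environ["CUDA_MODULE_LOADING"] = "LAZY"

    import tensorrt as trt  # imported only after the variable is set
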
- [10/12/2023-16:01:09] [I] Start parsing network model
- [10/12/2023-16:01:09] [I] [TRT] ----------------------------------------------------------------
- [10/12/2023-16:01:09] [I] [TRT] Input filename: C:/Program Files (x86)/SVP 4/rife\models\rife\rife_v4.6.onnx
- [10/12/2023-16:01:09] [I] [TRT] ONNX IR version: 0.0.8
- [10/12/2023-16:01:09] [I] [TRT] Opset version: 16
- [10/12/2023-16:01:09] [I] [TRT] Producer name: pytorch
- [10/12/2023-16:01:09] [I] [TRT] Producer version: 1.12.0
- [10/12/2023-16:01:09] [I] [TRT] Domain:
- [10/12/2023-16:01:09] [I] [TRT] Model version: 0
- [10/12/2023-16:01:09] [I] [TRT] Doc string:
- [10/12/2023-16:01:09] [I] [TRT] ----------------------------------------------------------------
- [10/12/2023-16:01:09] [W] [TRT] onnx2trt_utils.cpp:377: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
- [10/12/2023-16:01:09] [I] Finish parsing network model
- [10/12/2023-16:01:09] [W] Could not read timing cache from: C:\Users\Loserfailure\AppData\Roaming\SVP4\cache\Program Files (x86)/SVP 4/rife\models\rife\rife_v4.6.onnx.3840x2144_fp16_trt-8502_cudnn_I-fp16_O-fp16_NVIDIA-GeForce-RTX-4090_3dcbe72f.engine.cache. A new timing cache will be generated and written.
- [10/12/2023-16:01:10] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +1133, GPU +406, now: CPU 17095, GPU 2305 (MiB)
- [10/12/2023-16:01:10] [I] [TRT] Global timing cache in use. Profiling results in this builder pass will be stored.
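
For reference, the engine build that trtexec performs in this log can be reproduced with the TensorRT Python API. The sketch below mirrors the main flags from the command at the top (--fp16, --shapes=input:1x11x2144x3840, --tacticSources=-CUBLAS,-CUBLAS_LT, --saveEngine). It is an illustrative approximation for TensorRT 8.5, not what SVP itself runs, and the file paths are placeholders:

    import tensorrt as trt

    ONNX_PATH = "rife_v4.6.onnx"           # placeholder path
    ENGINE_PATH = "rife_v4.6_fp16.engine"  # placeholder path

    logger = trt.Logger(trt.Logger.INFO)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    # Parse the ONNX model (the log shows opset 16, exported from PyTorch 1.12.0)
    with open(ONNX_PATH, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parse failed")

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # --fp16

    # --tacticSources=-CUBLAS,-CUBLAS_LT: drop both cuBLAS tactic sources
    sources = config.get_tactic_sources()
    sources &= ~(1 << int(trt.TacticSource.CUBLAS))
    sources &= ~(1 << int(trt.TacticSource.CUBLAS_LT))
    config.set_tactic_sources(sources)

    # --shapes=input:1x11x2144x3840: pin the dynamic input to a single shape
    profile = builder.create_optimization_profile()
    shape = (1, 11, 2144, 3840)
    profile.set_shape("input", shape, shape, shape)
    config.add_optimization_profile(profile)

    # --saveEngine: serialize the plan to disk
    plan = builder.build_serialized_network(network, config)
    if plan is None:
        raise RuntimeError("Engine build failed")
    with open(ENGINE_PATH, "wb") as f:
        f.write(plan)

The timing-cache handling (--timingCacheFile) is omitted here; as the log notes, a missing or unreadable cache file is not fatal and a new cache is simply generated during the build.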