Loading WD14 moat tagger v2 model file from SmilingWolf/wd-v1-4-moat-tagger-v2, model.onnx
2024-04-18 12:02:37.6210736 [E:onnxruntime:, inference_session.cc:1981 onnxruntime::InferenceSession::Initialize::<lambda_b88fb06a047eb3fba81e04c489d52b7d>::operator ()] Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:121 onnxruntime::CudaCall D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:114 onnxruntime::CudaCall CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=DKSPWNDJ ; file=D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_execution_provider.cc ; line=178 ; expr=cudnnSetStream(cudnn_handle_, stream);
*** Error completing request
*** Arguments: (<PIL.Image.Image image mode=RGB size=1040x1562 at 0x234319FF700>, 'WD14 moat tagger v2', '', '', '', '', '', '') {}
Traceback (most recent call last):
  File "E:\AI\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "E:\AI\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "E:\AI\stable-diffusion-webui\extensions\stable-diffusion-webui-wd14-tagger\tagger\ui.py", line 113, in on_interrogate_image_submit
    interrogator.interrogate_image(image)
  File "E:\AI\stable-diffusion-webui\extensions\stable-diffusion-webui-wd14-tagger\tagger\interrogator.py", line 150, in interrogate_image
    data = ('', '', fi_key) + self.interrogate(image)
  File "E:\AI\stable-diffusion-webui\extensions\stable-diffusion-webui-wd14-tagger\tagger\interrogator.py", line 448, in interrogate
    self.load()
  File "E:\AI\stable-diffusion-webui\extensions\stable-diffusion-webui-wd14-tagger\tagger\interrogator.py", line 433, in load
    self.model = ort.InferenceSession(model_path,
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:121 onnxruntime::CudaCall D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:114 onnxruntime::CudaCall CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=DKSPWNDJ ; file=D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_execution_provider.cc ; line=178 ; expr=cudnnSetStream(cudnn_handle_, stream);
---
Traceback (most recent call last):
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1434, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1297, in postprocess_data
    self.validate_outputs(fn_index, predictions)  # type: ignore
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1272, in validate_outputs
    raise ValueError(
ValueError: An event handler (on_interrogate_image_submit) didn't receive enough output values (needed: 7, received: 3).
Wanted outputs:
    [state, html, html, label, label, label, html]
Received outputs:
    [None, "", "<div class='error'>RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:121 onnxruntime::CudaCall D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:114 onnxruntime::CudaCall CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=DKSPWNDJ ; file=D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_execution_provider.cc ; line=178 ; expr=cudnnSetStream(cudnn_handle_, stream);
    </div><div class='performance'><p class='time'>Time taken: <wbr><span class='measurement'>0.9 sec.</span></p><p class='vram'><abbr title='Active: peak amount of video memory used during generation (excluding cached data)'>A</abbr>: <span class='measurement'>2.21 GB</span>, <wbr><abbr title='Reserved: total amount of video memory allocated by the Torch library '>R</abbr>: <span class='measurement'>2.60 GB</span>, <wbr><abbr title='System: peak amount of video memory allocated by all running programs, out of total capacity'>Sys</abbr>: <span class='measurement'>2.9/23.9844 GB</span> (12.0%)</p></div>"]
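The root cause here is the first traceback: ONNX Runtime's CUDA execution provider fails during cuDNN initialization (CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR) when the tagger builds its InferenceSession; the Gradio ValueError is just downstream fallout from that failed handler. A common workaround for this class of failure is to retry session creation on the CPU execution provider when CUDA init throws. A minimal sketch of that pattern, assuming onnxruntime is installed; `load_tagger_session` and the `session_factory` indirection are hypothetical helpers for illustration, not part of the wd14-tagger extension:

```python
# Hedged sketch: retry InferenceSession creation on CPU when CUDA/cuDNN
# initialization fails. `session_factory(model_path, providers=...)` is
# expected to behave like onnxruntime.InferenceSession; passing it in as a
# callable keeps the fallback logic testable without a model file or GPU.

def load_tagger_session(model_path, session_factory):
    """Try the CUDA provider first; fall back to CPU-only on init failure."""
    gpu_providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    try:
        return session_factory(model_path, providers=gpu_providers)
    except Exception as exc:
        # ORT raises onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException
        # (a RuntimeError subclass) for errors like CUDNN_STATUS_INTERNAL_ERROR.
        print(f"CUDA init failed ({exc}); retrying on CPU")
        return session_factory(model_path, providers=["CPUExecutionProvider"])
```

With onnxruntime available, `session_factory` would simply be `onnxruntime.InferenceSession`. CPU inference is slower, but it sidesteps cuDNN entirely; if GPU tagging is needed, the usual suspects are a CUDA/cuDNN version mismatch with the installed `onnxruntime-gpu` wheel, or VRAM already held by the loaded Stable Diffusion checkpoint.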