Loading WD14 moat tagger v2 model file from SmilingWolf/wd-v1-4-moat-tagger-v2, model.onnx
2024-04-18 12:02:37.6210736 [E:onnxruntime:, inference_session.cc:1981 onnxruntime::InferenceSession::Initialize::<lambda_b88fb06a047eb3fba81e04c489d52b7d>::operator ()] Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:121 onnxruntime::CudaCall D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:114 onnxruntime::CudaCall CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=DKSPWNDJ ; file=D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_execution_provider.cc ; line=178 ; expr=cudnnSetStream(cudnn_handle_, stream);

*** Error completing request
*** Arguments: (<PIL.Image.Image image mode=RGB size=1040x1562 at 0x234319FF700>, 'WD14 moat tagger v2', '', '', '', '', '', '') {}
Traceback (most recent call last):
  File "E:\AI\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "E:\AI\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "E:\AI\stable-diffusion-webui\extensions\stable-diffusion-webui-wd14-tagger\tagger\ui.py", line 113, in on_interrogate_image_submit
    interrogator.interrogate_image(image)
  File "E:\AI\stable-diffusion-webui\extensions\stable-diffusion-webui-wd14-tagger\tagger\interrogator.py", line 150, in interrogate_image
    data = ('', '', fi_key) + self.interrogate(image)
  File "E:\AI\stable-diffusion-webui\extensions\stable-diffusion-webui-wd14-tagger\tagger\interrogator.py", line 448, in interrogate
    self.load()
  File "E:\AI\stable-diffusion-webui\extensions\stable-diffusion-webui-wd14-tagger\tagger\interrogator.py", line 433, in load
    self.model = ort.InferenceSession(model_path,
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:121 onnxruntime::CudaCall D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:114 onnxruntime::CudaCall CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=DKSPWNDJ ; file=D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_execution_provider.cc ; line=178 ; expr=cudnnSetStream(cudnn_handle_, stream);
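
The root failure above is ONNX Runtime's CUDA execution provider hitting CUDNN_STATUS_INTERNAL_ERROR while the WD14 tagger builds its InferenceSession, so interrogation never starts. A common way to isolate this is to retry the session on the CPU provider when CUDA/cuDNN initialization fails. Below is a minimal sketch of that pattern; the load_session helper and the "model.onnx" path are illustrative assumptions, not the tagger extension's actual code.

import onnxruntime as ort

def load_session(model_path):
    # Ask for the CUDA provider first, with CPU listed as a fallback.
    providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    try:
        return ort.InferenceSession(model_path, providers=providers)
    except Exception as exc:
        # e.g. the CUDNN_STATUS_INTERNAL_ERROR raised during initialization above
        print(f"CUDA provider failed ({exc}); retrying on CPU only")
        return ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])

session = load_session("model.onnx")  # placeholder path for the downloaded tagger model

If the CPU-only session loads, the problem is confined to the CUDA/cuDNN side (commonly a mismatch between the installed onnxruntime-gpu, CUDA, and cuDNN versions, or the GPU being unavailable at the time) rather than the model file itself.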
---
Traceback (most recent call last):
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1434, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1297, in postprocess_data
    self.validate_outputs(fn_index, predictions)  # type: ignore
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1272, in validate_outputs
    raise ValueError(
ValueError: An event handler (on_interrogate_image_submit) didn't receive enough output values (needed: 7, received: 3).
Wanted outputs:
    [state, html, html, label, label, label, html]
Received outputs:
    [None, "", "<div class='error'>RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:121 onnxruntime::CudaCall D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:114 onnxruntime::CudaCall CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=DKSPWNDJ ; file=D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_execution_provider.cc ; line=178 ; expr=cudnnSetStream(cudnn_handle_, stream);
    </div><div class='performance'><p class='time'>Time taken: <wbr><span class='measurement'>0.9 sec.</span></p><p class='vram'><abbr title='Active: peak amount of video memory used during generation (excluding cached data)'>A</abbr>: <span class='measurement'>2.21 GB</span>, <wbr><abbr title='Reserved: total amount of video memory allocated by the Torch library'>R</abbr>: <span class='measurement'>2.60 GB</span>, <wbr><abbr title='System: peak amount of video memory allocated by all running programs, out of total capacity'>Sys</abbr>: <span class='measurement'>2.9/23.9844 GB</span> (12.0%)</p></div>"]
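
The second traceback is a downstream symptom rather than a separate bug: once interrogation raises, on_interrogate_image_submit only produces 3 of the 7 values its Gradio outputs expect, so Gradio's validate_outputs raises the ValueError. The general rule is that a handler wired to N output components must return N values on every path, including the error path. The sketch below illustrates that rule with placeholder components and a placeholder handler; it is not the extension's real UI wiring.

import gradio as gr

def on_submit(image):
    # Seven output components below, so every return path yields seven values.
    try:
        if image is None:
            raise ValueError("no image given")  # stand-in for the ONNX Runtime failure above
        return (None, "", "<p>tags would go here</p>", None, None, None, "")
    except Exception as exc:
        # Pad the error path to the same arity; returning only three values here
        # is what triggers "didn't receive enough output values (needed: 7, received: 3)".
        return (None, "", f"<div class='error'>{exc}</div>", None, None, None, "")

with gr.Blocks() as demo:
    image = gr.Image(type="pil")
    state = gr.State()
    out_a, out_b = gr.HTML(), gr.HTML()
    rating, character, general = gr.Label(), gr.Label(), gr.Label()
    footer = gr.HTML()
    gr.Button("Interrogate").click(
        on_submit,
        inputs=[image],
        outputs=[state, out_a, out_b, rating, character, general, footer],
    )

In other words, fixing the cuDNN initialization failure makes this second error disappear; it only reports that the error path of the handler returned a short tuple.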