Tom_Neverwinter

log tts-generation-webui 112723

Nov 27th, 2023
Loading extensions:
Loaded extension: callback_save_generation_ffmpeg
Loaded extension: callback_save_generation_musicgen_ffmpeg
Loaded extension: empty_extension
Loaded 2 callback_save_generation extensions.
Loaded 1 callback_save_generation_musicgen extensions.
2023-11-27 21:36:53 | INFO | fairseq.tasks.text_to_speech | Please install tensorboardX: pip install tensorboardX
2023-11-27 21:36:53 | WARNING | xformers | WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.0.0+cu118 with CUDA 1108 (you have 2.0.0+cpu)
    Python 3.10.11 (you have 3.10.13)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
2023-11-27 21:36:53 | WARNING | xformers | Triton is not available, some optimizations will not be enabled.
This is just a warning: No module named 'triton'
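A note on the xFormers warning above: it reports that xFormers was compiled against `2.0.0+cu118` while the installed wheel is `2.0.0+cpu`. The `+` suffix of `torch.__version__` encodes the build variant, so a CPU-only install can be spotted from the version string alone. A minimal sketch (the helper name is hypothetical, not part of any library):

```python
def cuda_variant(torch_version: str) -> str:
    """Return the build variant encoded in a PyTorch version string.

    PyTorch wheels append a local version segment after '+', e.g.
    '2.0.0+cu118' (CUDA 11.8 build) or '2.0.0+cpu' (CPU-only build).
    """
    _, _, local = torch_version.partition("+")
    return local or "unknown"

# The two versions from the warning above:
print(cuda_variant("2.0.0+cpu"))    # cpu
print(cuda_variant("2.0.0+cu118"))  # cu118
```

In practice one would pass `torch.__version__` to this check; a `cpu` variant here explains both the xFormers load failure and the CUDA error later in this log.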
Failed to load rvc demo
No module named 'rvc_beta'
Starting Gradio server...
Gradio interface options:
  inline: False
  inbrowser: True
  share: False
  debug: False
  enable_queue: True
  max_threads: 40
  auth: None
  auth_message: None
  prevent_thread_lock: False
  show_error: False
  server_name: 0.0.0.0
  server_port: None
  show_tips: False
  height: 500
  width: 100%
  favicon_path: None
  ssl_keyfile: None
  ssl_certfile: None
  ssl_keyfile_password: None
  ssl_verify: True
  quiet: True
  show_api: True
  file_directories: None
  _frontend: True
Running on local URL: http://0.0.0.0:7860
Loading Bark models
  - Text Generation: GPU: Yes, Small Model: No
  - Coarse-to-Fine Inference: GPU: Yes, Small Model: No
  - Fine-tuning: GPU: Yes, Small Model: No
  - Codec: GPU: Yes
C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\huggingface_hub\file_download.py:147: UserWarning: `huggingface_hub` cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\Users\Tom_N. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.
To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
  warnings.warn(message)
Traceback (most recent call last):
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\routes.py", line 437, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\blocks.py", line 1352, in process_api
    result = await self.call_function(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\blocks.py", line 1077, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 225, in generate_voice
    full_generation = get_prompts(wav_file, use_gpu)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 87, in get_prompts
    semantic_prompt = get_semantic_prompt(path_to_wav, device)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 81, in get_semantic_prompt
    semantic_vectors = get_semantic_vectors(path_to_wav, device)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 46, in get_semantic_vectors
    hubert_model = _load_hubert_model(device)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 26, in _load_hubert_model
    hubert_model = CustomHubert(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\bark_hubert_quantizer\pre_kmeans_hubert.py", line 60, in __init__
    checkpoint = torch.load(checkpoint_path, map_location=device)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 797, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 283, in __init__
    super().__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
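The "failed finding central directory" error above means `torch.load` could not read the checkpoint as a zip archive: modern `torch.save()` writes zip format, so this almost always indicates a truncated or otherwise corrupted file (typically an interrupted download). A hedged sketch of a pre-flight check using only the standard library (the helper name and `.pth` path are illustrative, not part of the webui):

```python
import zipfile
from pathlib import Path


def checkpoint_is_readable(path: str) -> bool:
    """Return True if `path` looks like a loadable torch zip checkpoint.

    zipfile.is_zipfile performs the same end-of-archive / central-directory
    lookup that PytorchStreamReader failed at above, without loading the
    model. A False result suggests deleting the file and re-downloading.
    """
    p = Path(path)
    return p.is_file() and zipfile.is_zipfile(p)
```

Note this only validates the container, not the contents; but for the error in this log (a bad central directory), it is exactly the check that would have caught the corrupt HuBERT checkpoint before `torch.load` raised.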
Downloading HuBERT custom tokenizer
quantifier_hubert_base_ls960.pth: 100%|██████████████████████████████████████████████| 104M/104M [05:26<00:00, 319kB/s]
Downloaded tokenizer
Traceback (most recent call last):
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\routes.py", line 437, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\blocks.py", line 1352, in process_api
    result = await self.call_function(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\blocks.py", line 1077, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 198, in load_tokenizer
    _load_tokenizer(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 66, in _load_tokenizer
    tokenizer = CustomTokenizer.load_from_checkpoint(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\bark_hubert_quantizer\customtokenizer.py", line 119, in load_from_checkpoint
    model.load_state_dict(torch.load(path, map_location=map_location))
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 809, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 1172, in _load
    result = unpickler.load()
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 1142, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 1116, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 1083, in restore_location
    return default_restore_location(storage, map_location)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 217, in default_restore_location
    result = fn(storage, location)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 182, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 166, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
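This final error is consistent with the `2.0.0+cpu` install noted at the top of the log: the tokenizer checkpoint was saved from a GPU, so its storages record CUDA locations, and the CPU-only torch cannot place them. PyTorch's own message names the fix: pass `map_location` to `torch.load`. A minimal sketch of that pattern (the helper name is hypothetical; the real call site here is `_load_tokenizer` in `tab_voice_clone.py`):

```python
def safe_map_location(cuda_available: bool) -> str:
    """Choose where torch.load should materialize tensor storages.

    On a CPU-only install, deserializing GPU-saved checkpoints raises
    unless every CUDA storage is remapped to the CPU.
    """
    return "cuda" if cuda_available else "cpu"


# Hypothetical call site, mirroring the fix the traceback suggests:
#   device = safe_map_location(torch.cuda.is_available())
#   state = torch.load(checkpoint_path, map_location=device)
```

Alternatively, installing a CUDA build of torch matching the machine's driver (e.g. the `+cu118` wheel xFormers expects) would resolve both this error and the xFormers warning at once.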