- Loading extensions:
- Loaded extension: callback_save_generation_ffmpeg
- Loaded extension: callback_save_generation_musicgen_ffmpeg
- Loaded extension: empty_extension
- Loaded 2 callback_save_generation extensions.
- Loaded 1 callback_save_generation_musicgen extensions.
- 2023-11-27 21:36:53 | INFO | fairseq.tasks.text_to_speech | Please install tensorboardX: pip install tensorboardX
- 2023-11-27 21:36:53 | WARNING | xformers | WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
- PyTorch 2.0.0+cu118 with CUDA 1108 (you have 2.0.0+cpu)
- Python 3.10.11 (you have 3.10.13)
- Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
- Memory-efficient attention, SwiGLU, sparse and more won't be available.
- Set XFORMERS_MORE_DETAILS=1 for more details
- 2023-11-27 21:36:53 | WARNING | xformers | Triton is not available, some optimizations will not be enabled.
- This is just a warning: No module named 'triton'
- Failed to load rvc demo
- No module named 'rvc_beta'
- Starting Gradio server...
- Gradio interface options:
- inline: False
- inbrowser: True
- share: False
- debug: False
- enable_queue: True
- max_threads: 40
- auth: None
- auth_message: None
- prevent_thread_lock: False
- show_error: False
- server_name: 0.0.0.0
- server_port: None
- show_tips: False
- height: 500
- width: 100%
- favicon_path: None
- ssl_keyfile: None
- ssl_certfile: None
- ssl_keyfile_password: None
- ssl_verify: True
- quiet: True
- show_api: True
- file_directories: None
- _frontend: True
- Running on local URL: http://0.0.0.0:7860
- Loading Bark models
- - Text Generation: GPU: Yes, Small Model: No
- - Coarse-to-Fine Inference: GPU: Yes, Small Model: No
- - Fine-tuning: GPU: Yes, Small Model: No
- - Codec: GPU: Yes
- C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\huggingface_hub\file_download.py:147: UserWarning: `huggingface_hub` cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\Users\Tom_N. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.
- To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
- warnings.warn(message)
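As the warning above notes, it can be silenced with an environment variable documented by huggingface_hub itself. A minimal sketch (set it before huggingface_hub is imported, e.g. at the top of the launch script):

```python
# Silence the huggingface_hub symlink warning shown above by setting the
# environment variable the warning itself documents. Must happen before
# huggingface_hub is imported for the first time.
import os

os.environ["HF_HUB_DISABLE_SYMLINKS_WARNING"] = "1"
```

Alternatively, enabling Windows Developer Mode (per the linked Microsoft article) allows symlinks and removes the degraded-cache behaviour rather than just hiding the message.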
Traceback (most recent call last):
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\routes.py", line 437, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\blocks.py", line 1352, in process_api
    result = await self.call_function(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\blocks.py", line 1077, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 225, in generate_voice
    full_generation = get_prompts(wav_file, use_gpu)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 87, in get_prompts
    semantic_prompt = get_semantic_prompt(path_to_wav, device)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 81, in get_semantic_prompt
    semantic_vectors = get_semantic_vectors(path_to_wav, device)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 46, in get_semantic_vectors
    hubert_model = _load_hubert_model(device)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 26, in _load_hubert_model
    hubert_model = CustomHubert(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\bark_hubert_quantizer\pre_kmeans_hubert.py", line 60, in __init__
    checkpoint = torch.load(checkpoint_path, map_location=device)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 797, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 283, in __init__
    super().__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
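PyTorch `.pth` checkpoints are zip archives, so "failed finding central directory" typically means the downloaded HuBERT checkpoint was truncated or corrupted; re-downloading it (as the log does next) is the usual fix. A quick stdlib pre-check can catch a bad file before `torch.load` fails — a sketch only, with an illustrative function name:

```python
import zipfile


def checkpoint_looks_valid(path: str) -> bool:
    """Return True if the file has a readable zip central directory,
    which a complete torch-saved .pth checkpoint should have.
    A truncated download fails this check and should be re-fetched."""
    return zipfile.is_zipfile(path)
```

If this returns `False`, delete the file and let the webui download it again rather than retrying `torch.load`.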
- Downloading HuBERT custom tokenizer
- quantifier_hubert_base_ls960.pth: 100%|██████████████████████████████████████████████| 104M/104M [05:26<00:00, 319kB/s]
- Downloaded tokenizer
Traceback (most recent call last):
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\routes.py", line 437, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\blocks.py", line 1352, in process_api
    result = await self.call_function(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\blocks.py", line 1077, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 198, in load_tokenizer
    _load_tokenizer(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 66, in _load_tokenizer
    tokenizer = CustomTokenizer.load_from_checkpoint(
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\bark_hubert_quantizer\customtokenizer.py", line 119, in load_from_checkpoint
    model.load_state_dict(torch.load(path, map_location=map_location))
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 809, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 1172, in _load
    result = unpickler.load()
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 1142, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 1116, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 1083, in restore_location
    return default_restore_location(storage, map_location)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 217, in default_restore_location
    result = fn(storage, location)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 182, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "C:\Users\Tom_N\Desktop\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\serialization.py", line 166, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
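The error message itself states the fix: this install has CPU-only PyTorch (`2.0.0+cpu`, per the xFormers warning earlier), so a checkpoint saved with CUDA-tagged tensors must be loaded with an explicit `map_location`. A minimal sketch of the pattern — the helper name is illustrative, not part of tts-generation-webui:

```python
import torch


def load_checkpoint_any_device(path: str):
    """Load a checkpoint saved on a GPU machine onto whatever device exists.
    map_location remaps CUDA-tagged storages, avoiding the
    'torch.cuda.is_available() is False' RuntimeError above."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    return torch.load(path, map_location=torch.device(device))
```

In this log the failing call is inside `bark_hubert_quantizer`'s `load_from_checkpoint`, which already accepts a `map_location` argument, so the caller in `tab_voice_clone.py` presumably passed a CUDA device; disabling the "use GPU" option for voice cloning on this CPU-only machine should avoid the crash.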