Traceback (most recent call last):
  File "/home/pixel/oobabooga_linux/text-generation-webui/server.py", line 70, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "/home/pixel/oobabooga_linux/text-generation-webui/modules/models.py", line 95, in load_model
    output = load_func(model_name)
  File "/home/pixel/oobabooga_linux/text-generation-webui/modules/models.py", line 275, in GPTQ_loader
    model = modules.GPTQ_loader.load_quantized(model_name)
  File "/home/pixel/oobabooga_linux/text-generation-webui/modules/GPTQ_loader.py", line 177, in load_quantized
    model = load_quant(str(path_to_model), str(pt_path), shared.args.wbits, shared.args.groupsize, kernel_switch_threshold=threshold)
  File "/home/pixel/oobabooga_linux/text-generation-webui/modules/GPTQ_loader.py", line 84, in _load_quant
    model.load_state_dict(safe_load(checkpoint), strict=False)
  File "/home/pixel/oobabooga_linux/installer_files/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LlamaForCausalLM:
        size mismatch for model.layers.0.self_attn.k_proj.qzeros: copying a param with shape torch.Size([1, 832]) from checkpoint, the shape in current model is torch.Size([52, 832]).
        size mismatch for model.layers.0.self_attn.k_proj.scales: copying a param with shape torch.Size([1, 6656]) from checkpoint, the shape in current model is torch.Size([52, 6656]).
        size mismatch for model.layers.0.self_attn.o_proj.qzeros: copying a param with shape torch.Size([1, 832]) from checkpoint, the shape in current model is torch.Size([52, 832]).
        size mismatch for model.layers.0.self_attn.o_proj.scales: copying a param with shape torch.Size([1, 6656]) from checkpoint, the shape in current model is torch.Size([52, 6656]).
        size mismatch for model.layers.0.self_attn.q_proj.qzeros: copying a param with shape torch.Size([1, 832]) from checkpoint, the shape in current model is torch.Size([52, 832]).
        size mismatch for model.layers.0.self_attn.q_proj.scales: copying a param with shape torch.Size([1, 6656]) from checkpoint, the shape in current model is torch.Size([52, 6656]).
        size mismatch for model.layers.0.self_attn.v_proj.qzeros: copying a param with shape torch.Size([1, 832]) from checkpoint, the shape in current model is torch.Size([52, 832]).
        size mismatch for model.layers.0.self_attn.v_proj.scales: copying a param with shape torch.Size([1, 6656]) from checkpoint, the shape in current model is torch.Size([52, 6656]).
        size mismatch for model.layers.0.mlp.down_proj.qzeros: copying a param with shape torch.Size([1, 832]) from checkpoint, the shape in current model is torch.Size([140, 832]).
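Note: the shape pattern above is consistent with a quantization group-size mismatch, not a corrupt file. The checkpoint's qzeros/scales tensors have a single row (one quantization group per weight column, i.e. the model was quantized with groupsize -1), while the model the loader constructs expects 52 rows for the attention projections (6656 hidden units / 128 = 52 groups) and 140 rows for down_proj (17920 intermediate units / 128 = 140 groups), i.e. groupsize 128 was passed to load_quant. The column counts also line up with 4-bit GPTQ packing (832 = 6656 / 8 four-bit values per int32).

Assuming the checkpoint really was quantized without grouping, a likely fix is to launch with a group size matching the checkpoint, for example (the model name here is a placeholder, and --wbits 4 is an assumption; use the bit width the checkpoint was quantized with):

    python server.py --model <model_name> --wbits 4 --groupsize -1

Alternatively, omit --groupsize entirely if your webui version defaults it to -1, and clear any saved groupsize value for this model in the Models tab before reloading.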