$ ./build/bin/llama-server -m ~/models/Qwen2.5-7B-Instruct-Q8_0.gguf -n -2 -e -ngl 33 -t 4 -c 4096 --host 0.0.0.0
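For reference: -m points at the GGUF model, -n -2 lets generation run until the context fills, -e processes escape sequences in the prompt, -ngl 33 requests up to 33 layers offloaded to the GPU, -t 4 uses four CPU threads, -c 4096 sets a 4096-token context, and --host 0.0.0.0 listens on all interfaces (port 8080 by default).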
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon Graphics (RADV NAVI10) (radv) | uma: 1 | fp16: 1 | warp size: 64 | matrix cores: none
build: 4510 (b9daaffe) with cc (GCC) 14.0.1 20240411 (Red Hat 14.0.1-0) for x86_64-redhat-linux
system info: n_threads = 4, n_threads_batch = 4, total_threads = 12
system_info: n_threads = 4 (n_threads_batch = 4) / 12 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 |
main: HTTP server is listening, hostname: 0.0.0.0, port: 8080, http threads: 11
main: loading model
srv load_model: loading model '/home/user/models/Qwen2.5-7B-Instruct-Q8_0.gguf'
llama_model_load_from_file_impl: using device Vulkan0 (AMD Radeon Graphics (RADV NAVI10)) - 10240 MiB free
llama_model_loader: loaded meta data with 38 key-value pairs and 339 tensors from /home/user/models/Qwen2.5-7B-Instruct-Q8_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen2.5 7B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Qwen2.5
llama_model_loader: - kv 5: general.size_label str = 7B
llama_model_loader: - kv 6: general.license str = apache-2.0
llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-7...
llama_model_loader: - kv 8: general.base_model.count u32 = 1
llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 7B
llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-7B
llama_model_loader: - kv 12: general.tags arr[str,2] = ["chat", "text-generation"]
llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 14: qwen2.block_count u32 = 28
llama_model_loader: - kv 15: qwen2.context_length u32 = 32768
llama_model_loader: - kv 16: qwen2.embedding_length u32 = 3584
llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 18944
llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 28
llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 4
llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 22: general.file_type u32 = 7
llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 33: general.quantization_version u32 = 2
llama_model_loader: - kv 34: quantize.imatrix.file str = /models_out/Qwen2.5-7B-Instruct-GGUF/...
llama_model_loader: - kv 35: quantize.imatrix.dataset str = /training_dir/calibration_datav3.txt
llama_model_loader: - kv 36: quantize.imatrix.entries_count i32 = 196
llama_model_loader: - kv 37: quantize.imatrix.chunks_count i32 = 128
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q8_0: 198 tensors
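The same metadata can be inspected without starting the server, using the gguf Python package that ships with llama.cpp (gguf-py). A minimal sketch, assuming pip install gguf and the model path from the log:

    # Sketch: list GGUF key-value metadata and tensor count, as in the dump above.
    # Assumes the `gguf` package (from llama.cpp's gguf-py) is installed.
    from gguf import GGUFReader

    reader = GGUFReader("/home/user/models/Qwen2.5-7B-Instruct-Q8_0.gguf")
    for name, field in reader.fields.items():
        # field.types describes the value type (str, u32, arr, ...)
        print(name, field.types)
    print(len(reader.tensors), "tensors")  # should report 339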
print_info: file format = GGUF V3 (latest)
print_info: file type = Q8_0
print_info: file size = 7.54 GiB (8.50 BPW)
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch = qwen2
print_info: vocab_only = 0
print_info: n_ctx_train = 32768
print_info: n_embd = 3584
print_info: n_layer = 28
print_info: n_head = 28
print_info: n_head_kv = 4
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 7
print_info: n_embd_k_gqa = 512
print_info: n_embd_v_gqa = 512
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-06
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: n_ff = 18944
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 32768
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 7B
print_info: model params = 7.62 B
print_info: general.name = Qwen2.5 7B Instruct
print_info: vocab type = BPE
print_info: n_vocab = 152064
print_info: n_merges = 151387
print_info: BOS token = 151643 '<|endoftext|>'
print_info: EOS token = 151645 '<|im_end|>'
print_info: EOT token = 151645 '<|im_end|>'
print_info: PAD token = 151643 '<|endoftext|>'
print_info: LF token = 148848 'ÄĬ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|endoftext|>'
print_info: EOG token = 151645 '<|im_end|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
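The derived attention numbers follow from the metadata; a quick arithmetic check (plain Python, values copied from the log):

    # Sanity-check the derived hyperparameters printed above.
    n_head, n_head_kv, n_embd = 28, 4, 3584
    n_embd_head = n_embd // n_head           # 128 -> n_embd_head_k / n_embd_head_v
    n_gqa = n_head // n_head_kv              # 7   -> grouped-query attention ratio
    n_embd_k_gqa = n_head_kv * n_embd_head   # 512 -> n_embd_k_gqa / n_embd_v_gqa

    # Bits per weight: 7.54 GiB over 7.62 B parameters gives the 8.50 BPW line.
    bpw = 7.54 * 2**30 * 8 / 7.62e9          # ~8.50
    print(n_embd_head, n_gqa, n_embd_k_gqa, round(bpw, 2))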
ggml_vulkan: Compiling shaders.............................................Done!
load_tensors: offloading 28 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 29/29 layers to GPU
load_tensors: Vulkan0 model buffer size = 7165.44 MiB
load_tensors: CPU_Mapped model buffer size = 552.23 MiB
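Note that -ngl 33 asks for more layers than the model has, so the offload is clamped to all 29 (28 transformer blocks plus the output layer). The 552.23 MiB CPU_Mapped buffer is consistent with the token-embedding matrix staying host-mapped: 152064 x 3584 weights at Q8_0's 1.0625 bytes per weight (34-byte blocks of 32 weights) is about 552.2 MiB.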
llama_init_from_model: n_seq_max = 1
llama_init_from_model: n_ctx = 4096
llama_init_from_model: n_ctx_per_seq = 4096
llama_init_from_model: n_batch = 2048
llama_init_from_model: n_ubatch = 512
llama_init_from_model: flash_attn = 0
llama_init_from_model: freq_base = 1000000.0
llama_init_from_model: freq_scale = 1
llama_init_from_model: n_ctx_per_seq (4096) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 4096, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 28, can_shift = 1
llama_kv_cache_init: Vulkan0 KV buffer size = 224.00 MiB
llama_init_from_model: KV self size = 224.00 MiB, K (f16): 112.00 MiB, V (f16): 112.00 MiB
llama_init_from_model: Vulkan_Host output buffer size = 0.58 MiB
llama_init_from_model: Vulkan0 compute buffer size = 304.00 MiB
llama_init_from_model: Vulkan_Host compute buffer size = 15.01 MiB
llama_init_from_model: graph nodes = 986
llama_init_from_model: graph splits = 2
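The 224 MiB KV cache follows directly from the figures above; a minimal check:

    # KV cache size: f16 K and V vectors per layer, per context position.
    n_layer, kv_size = 28, 4096
    n_embd_k_gqa = n_embd_v_gqa = 512
    bytes_f16 = 2

    kv_bytes = n_layer * kv_size * (n_embd_k_gqa + n_embd_v_gqa) * bytes_f16
    print(kv_bytes / 2**20, "MiB")  # 224.0, split evenly between K and V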
common_init_from_params: setting dry_penalty_last_n to ctx_size = 4096
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
srv init: initializing slots, n_slots = 1
slot init: id 0 | task -1 | new slot n_ctx_slot = 4096
main: model loaded
main: chat template, chat_template: (built-in), example_format: '<|im_start|>system
You are a helpful assistant<|im_end|>
<|im_start|>user
Hello<|im_end|>
<|im_start|>assistant
Hi there<|im_end|>
<|im_start|>user
How are you?<|im_end|>
<|im_start|>assistant
'
main: server is listening on http://0.0.0.0:8080 - starting the main loop
srv update_slots: all slots are idle
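The slot activity below was triggered by a client hitting the server's OpenAI-compatible chat endpoint. A minimal stdlib-only sketch of such a request (the messages here are hypothetical, not the ones from this session):

    # Minimal chat request against llama-server's /v1/chat/completions endpoint.
    import json
    import urllib.request

    payload = {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant"},
            {"role": "user", "content": "Hello"},
        ],
        "max_tokens": 512,
    }
    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])

The server applies the built-in ChatML template shown above to these messages before tokenizing.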
slot launch_slot_: id 0 | task 0 | processing task
slot update_slots: id 0 | task 0 | new prompt, n_ctx_slot = 4096, n_keep = 0, n_prompt_tokens = 33
slot update_slots: id 0 | task 0 | kv cache rm [0, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_past = 33, n_tokens = 33, progress = 1.000000
slot update_slots: id 0 | task 0 | prompt done, n_past = 33, n_tokens = 33
slot release: id 0 | task 0 | stop processing: n_past = 515, truncated = 0
slot print_timing: id 0 | task 0 |
prompt eval time = 329.50 ms / 33 tokens ( 9.98 ms per token, 100.15 tokens per second)
eval time = 14331.91 ms / 483 tokens ( 29.67 ms per token, 33.70 tokens per second)
total time = 14661.41 ms / 516 tokens
srv update_slots: all slots are idle
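The reported rates are just the per-token times inverted; reproducing them from the raw timings:

    # Reproduce the throughput figures from the timing lines above.
    prompt_ms, prompt_tokens = 329.50, 33
    eval_ms, eval_tokens = 14331.91, 483

    print(prompt_tokens / prompt_ms * 1000)  # ~100.15 prompt tokens/second
    print(eval_tokens / eval_ms * 1000)      # ~33.70 generated tokens/second
    print(prompt_ms + eval_ms)               # 14661.41 ms total for 516 tokens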