ollama
2025/05/12 14:23:22 routes.go:1233: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\ricky\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
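The opening `server config` line dumps every OLLAMA_* environment variable the server was started with. These can be overridden before launch; a minimal sketch (values here are illustrative, not the ones in this log, apart from the bind address):

```python
import os
import subprocess

# Sketch: override a few of the OLLAMA_* variables shown in the config dump
# before starting the server. Values are illustrative, not from this log.
env = os.environ.copy()
env["OLLAMA_CONTEXT_LENGTH"] = "8192"   # default context length (log shows 4096)
env["OLLAMA_KEEP_ALIVE"] = "10m"        # keep models loaded longer (log shows 5m0s)
env["OLLAMA_HOST"] = "127.0.0.1:11434"  # bind address, as in the log

# "ollama serve" is the standard CLI entry point for the server seen here.
subprocess.run(["ollama", "serve"], env=env)
```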
time=2025-05-12T14:23:22.136-04:00 level=INFO source=images.go:463 msg="total blobs: 8"
time=2025-05-12T14:23:22.137-04:00 level=INFO source=images.go:470 msg="total unused blobs removed: 0"
time=2025-05-12T14:23:22.138-04:00 level=INFO source=routes.go:1300 msg="Listening on 127.0.0.1:11434 (version 0.6.8)"
time=2025-05-12T14:23:22.138-04:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-05-12T14:23:22.138-04:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-05-12T14:23:22.138-04:00 level=INFO source=gpu_windows.go:183 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-05-12T14:23:22.138-04:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=14 efficiency=8 threads=20
time=2025-05-12T14:23:22.268-04:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-a93aa086-2ff7-76c1-b3ce-c677eb3dbaff library=cuda compute=8.6 driver=12.8 name="NVIDIA GeForce RTX 3080 Ti Laptop GPU" overhead="804.0 MiB"
time=2025-05-12T14:23:22.270-04:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-a93aa086-2ff7-76c1-b3ce-c677eb3dbaff library=cuda variant=v12 compute=8.6 driver=12.8 name="NVIDIA GeForce RTX 3080 Ti Laptop GPU" total="16.0 GiB" available="14.9 GiB"
[GIN] 2025/05/12 - 14:23:22 | 200 | 505.4µs | 127.0.0.1 | HEAD "/"
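The HEAD "/" entry is a client liveness probe against the address logged at startup. A stdlib sketch of the kind of request that produces it:

```python
import urllib.request

# Sketch: the liveness probe behind the HEAD "/" entry above.
# Ollama answers plain HTTP on the address logged at startup.
req = urllib.request.Request("http://127.0.0.1:11434/", method="HEAD")
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.status)  # 200 when the server is up
```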
time=2025-05-12T14:23:22.535-04:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-12T14:23:22.538-04:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/12 - 14:23:22 | 200 | 11.9106ms | 127.0.0.1 | POST "/api/show"
time=2025-05-12T14:23:22.545-04:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-12T14:23:22.569-04:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-12T14:23:22.573-04:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-12T14:23:22.573-04:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0
time=2025-05-12T14:23:22.573-04:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.attention.key_length default=128
time=2025-05-12T14:23:22.573-04:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.attention.value_length default=128
time=2025-05-12T14:23:22.573-04:00 level=INFO source=sched.go:754 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\ricky\.ollama\models\blobs\sha256-a49e13d9e328b88c16e8508f0f8ba1cd19ded64e2fa2211ce298ada9d50e1285 gpu=GPU-a93aa086-2ff7-76c1-b3ce-c677eb3dbaff parallel=2 available=15962472448 required="5.9 GiB"
time=2025-05-12T14:23:22.598-04:00 level=INFO source=server.go:106 msg="system memory" total="31.7 GiB" free="15.0 GiB" free_swap="17.4 GiB"
time=2025-05-12T14:23:22.598-04:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0
time=2025-05-12T14:23:22.598-04:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.attention.key_length default=128
time=2025-05-12T14:23:22.598-04:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.attention.value_length default=128
time=2025-05-12T14:23:22.598-04:00 level=INFO source=server.go:139 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[14.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.9 GiB" memory.required.partial="5.9 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[5.9 GiB]" memory.weights.total="3.8 GiB" memory.weights.repeating="3.7 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="585.0 MiB"
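The offload line is the scheduler's VRAM budget: roughly weights + KV cache + compute graph. Summing the logged components gives about 5.3 GiB, so the reported 5.9 GiB presumably also includes allocation padding and overhead (an assumption; the log does not itemize the gap):

```python
# Sketch: rough reconstruction of the scheduler's VRAM estimate from the
# offload line above. GiB/MiB figures are copied from the log.
GIB = 1024**3
MIB = 1024**2

weights = 3.8 * GIB      # memory.weights.total
kv_cache = 1.0 * GIB     # memory.required.kv
graph = 560 * MIB        # memory.graph.full

subtotal = weights + kv_cache + graph
print(f"{subtotal / GIB:.1f} GiB")  # ~5.3 GiB; the logged 5.9 GiB "required"
# presumably also covers allocation padding (assumption, not logged).
```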
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from C:\Users\ricky\.ollama\models\blobs\sha256-a49e13d9e328b88c16e8508f0f8ba1cd19ded64e2fa2211ce298ada9d50e1285 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = cognitivecomputations
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 11: general.file_type u32 = 2
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32001] = ["<unk>", "<s>", "<|im_end|>", "<0x00...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32001] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32001] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,58980] = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 2
llama_model_loader: - kv 21: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 22: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 23: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_0
print_info: file size = 3.83 GiB (4.54 BPW)
load: special tokens cache size = 4
load: token to piece cache size = 0.1637 MB
print_info: arch = llama
print_info: vocab_only = 1
print_info: model type = ?B
print_info: model params = 7.24 B
print_info: general.name = cognitivecomputations
print_info: vocab type = SPM
print_info: n_vocab = 32001
print_info: n_merges = 0
print_info: BOS token = 1 '<s>'
print_info: EOS token = 2 '<|im_end|>'
print_info: EOT token = 2 '<|im_end|>'
print_info: UNK token = 0 '<unk>'
print_info: PAD token = 2 '<|im_end|>'
print_info: LF token = 13 '<0x0A>'
print_info: EOG token = 2 '<|im_end|>'
print_info: max token length = 48
llama_model_load: vocab only - skipping tensors
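This first pass is vocab-only (vocab_only = 1, tensors skipped); the full load happens in the runner below. The loader banner ("24 key-value pairs and 291 tensors ... GGUF V3") comes straight from the GGUF file header: 4-byte magic, u32 version, u64 tensor count, u64 KV count. A stdlib sketch that re-reads just that header from the blob path in the log:

```python
import struct

# Sketch: read the GGUF header fields behind the "loaded meta data with
# 24 key-value pairs and 291 tensors ... GGUF V3" line. Path is the blob
# from this log; substitute your own model file.
path = r"C:\Users\ricky\.ollama\models\blobs\sha256-a49e13d9e328b88c16e8508f0f8ba1cd19ded64e2fa2211ce298ada9d50e1285"

with open(path, "rb") as f:
    magic = f.read(4)                            # b"GGUF"
    version, = struct.unpack("<I", f.read(4))    # 3 for GGUF V3
    n_tensors, = struct.unpack("<Q", f.read(8))  # 291 in this log
    n_kv, = struct.unpack("<Q", f.read(8))       # 24 in this log

print(magic, version, n_tensors, n_kv)
```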
time=2025-05-12T14:23:22.642-04:00 level=INFO source=server.go:410 msg="starting llama server" cmd="C:\\Users\\ricky\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model C:\\Users\\ricky\\.ollama\\models\\blobs\\sha256-a49e13d9e328b88c16e8508f0f8ba1cd19ded64e2fa2211ce298ada9d50e1285 --ctx-size 8192 --batch-size 512 --n-gpu-layers 33 --threads 6 --no-mmap --parallel 2 --port 61044"
time=2025-05-12T14:23:22.644-04:00 level=INFO source=sched.go:452 msg="loaded runners" count=1
time=2025-05-12T14:23:22.644-04:00 level=INFO source=server.go:589 msg="waiting for llama runner to start responding"
time=2025-05-12T14:23:22.645-04:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server error"
time=2025-05-12T14:23:22.666-04:00 level=INFO source=runner.go:853 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\ricky\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3080 Ti Laptop GPU, compute capability 8.6, VMM: yes
load_backend: loaded CUDA backend from C:\Users\ricky\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-05-12T14:23:22.756-04:00 level=INFO source=ggml.go:103 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-05-12T14:23:22.757-04:00 level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:61044"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 3080 Ti Laptop GPU) - 15223 MiB free
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from C:\Users\ricky\.ollama\models\blobs\sha256-a49e13d9e328b88c16e8508f0f8ba1cd19ded64e2fa2211ce298ada9d50e1285 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = cognitivecomputations
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 11: general.file_type u32 = 2
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32001] = ["<unk>", "<s>", "<|im_end|>", "<0x00...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32001] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32001] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,58980] = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 2
llama_model_loader: - kv 21: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 22: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 23: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_0
print_info: file size = 3.83 GiB (4.54 BPW)
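The 4.54 BPW figure is just the file size re-expressed in bits per parameter, using the param count printed a few lines below:

```python
# Sketch: checking the "4.54 BPW" figure from the logged numbers.
file_bytes = 3.83 * 1024**3   # file size = 3.83 GiB
params = 7.24e9               # model params = 7.24 B

bpw = file_bytes * 8 / params
print(f"{bpw:.2f} BPW")       # ~4.54, matching the log
```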
load: special tokens cache size = 4
load: token to piece cache size = 0.1637 MB
print_info: arch = llama
print_info: vocab_only = 0
print_info: n_ctx_train = 32768
print_info: n_embd = 4096
print_info: n_layer = 32
print_info: n_head = 32
print_info: n_head_kv = 8
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_swa_pattern = 1
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 4
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
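The grouped-query-attention figures above follow directly from the head counts:

```python
# Sketch: the GQA numbers derive from the head counts in the print_info lines.
n_head, n_head_kv, n_embd_head_k = 32, 8, 128

n_gqa = n_head // n_head_kv               # 4 query heads share each KV head
n_embd_k_gqa = n_embd_head_k * n_head_kv  # 1024, the per-token K width

print(n_gqa, n_embd_k_gqa)  # 4 1024, matching the log
```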
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 14336
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 0
print_info: rope scaling = linear
print_info: freq_base_train = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 32768
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 7B
print_info: model params = 7.24 B
print_info: general.name = cognitivecomputations
print_info: vocab type = SPM
print_info: n_vocab = 32001
print_info: n_merges = 0
print_info: BOS token = 1 '<s>'
print_info: EOS token = 2 '<|im_end|>'
print_info: EOT token = 2 '<|im_end|>'
print_info: UNK token = 0 '<unk>'
print_info: PAD token = 2 '<|im_end|>'
print_info: LF token = 13 '<0x0A>'
print_info: EOG token = 2 '<|im_end|>'
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 32 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 33/33 layers to GPU
load_tensors: CUDA_Host model buffer size = 70.31 MiB
load_tensors: CUDA0 model buffer size = 3847.56 MiB
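With all 33 layers offloaded, the CUDA_Host and CUDA0 model buffers together re-add to essentially the whole 3.83 GiB file:

```python
# Sketch: the two model buffers roughly sum back to the 3.83 GiB file size.
host_mib, gpu_mib = 70.31, 3847.56   # buffer sizes from the two lines above

total_gib = (host_mib + gpu_mib) / 1024
print(f"{total_gib:.2f} GiB")        # ~3.83, i.e. the whole quantized model
```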
time=2025-05-12T14:23:22.896-04:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server loading model"
llama_context: constructing llama_context
llama_context: n_seq_max = 2
llama_context: n_ctx = 8192
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch = 1024
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 0
llama_context: freq_base = 10000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
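This warning traces back to the runner flags: Ollama appears to size the shared context as OLLAMA_CONTEXT_LENGTH (4096) times the parallel slot count (the runner command shows --ctx-size 8192 --parallel 2), and the context is then split back per sequence:

```python
# Sketch: where the 4096-per-sequence figure comes from (assumption: Ollama
# multiplies the configured context length by the parallel slot count, which
# matches --ctx-size 8192 --parallel 2 in the runner command above).
context_length = 4096   # OLLAMA_CONTEXT_LENGTH from the config dump
parallel = 2            # --parallel from the runner command

n_ctx = context_length * parallel   # 8192, the logged n_ctx
n_ctx_per_seq = n_ctx // parallel   # 4096 < n_ctx_train (32768), hence the warning
print(n_ctx, n_ctx_per_seq)
```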
llama_context: CUDA_Host output buffer size = 0.28 MiB
init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 32, can_shift = 1
init: CUDA0 KV buffer size = 1024.00 MiB
llama_context: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
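The 1024.00 MiB KV figure is reproducible from the parameters logged above (f16 keys and values, 2 bytes per element):

```python
# Sketch: reproducing "KV self size = 1024.00 MiB" from the logged parameters.
n_layer = 32
kv_size = 8192          # init: kv_size
n_embd_gqa = 1024       # n_embd_k_gqa == n_embd_v_gqa
bytes_f16 = 2

kv_bytes = 2 * n_layer * kv_size * n_embd_gqa * bytes_f16  # K and V
print(kv_bytes / 1024**2, "MiB")  # 1024.0, split 512 K + 512 V
```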
llama_context: CUDA0 compute buffer size = 560.00 MiB
llama_context: CUDA_Host compute buffer size = 24.01 MiB
llama_context: graph nodes = 1094
llama_context: graph splits = 2
time=2025-05-12T14:23:23.898-04:00 level=INFO source=server.go:628 msg="llama runner started in 1.25 seconds"
[GIN] 2025/05/12 - 14:23:23 | 200 | 1.3573447s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/05/12 - 14:23:26 | 200 | 3.0154ms | 127.0.0.1 | GET "/api/tags"
time=2025-05-12T14:23:29.798-04:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/12 - 14:23:30 | 200 | 1.1812799s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/05/12 - 14:23:41 | 200 | 0s | 127.0.0.1 | HEAD "/"
time=2025-05-12T14:23:41.598-04:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-12T14:23:41.616-04:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/12 - 14:23:41 | 200 | 31.4453ms | 127.0.0.1 | POST "/api/show"
time=2025-05-12T14:23:41.633-04:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/12 - 14:23:41 | 200 | 16.5783ms | 127.0.0.1 | POST "/api/generate"
time=2025-05-12T14:23:42.796-04:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/12 - 14:23:43 | 200 | 799.6376ms | 127.0.0.1 | POST "/api/chat"
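The GIN entries are ordinary REST calls against the API. A stdlib sketch of the /api/generate and /api/chat requests that would produce the last two entries; the model tag is an assumption, since the log only shows the blob hash and general.name = cognitivecomputations:

```python
import json
import urllib.request

BASE = "http://127.0.0.1:11434"
MODEL = "dolphin-mistral"  # assumption: the log never shows the model tag

def post(path, payload):
    # Minimal JSON POST helper against the Ollama API logged above.
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# POST /api/generate, as in the GIN entries above (stream off for brevity)
print(post("/api/generate",
           {"model": MODEL, "prompt": "Hello", "stream": False})["response"])

# POST /api/chat, matching the final entry
print(post("/api/chat",
           {"model": MODEL, "stream": False,
            "messages": [{"role": "user", "content": "Hello"}]})["message"])
```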