Advertisement
Not a member of Pastebin yet?
Sign Up,
it unlocks many cool features!
- C:\dev\ml\r1_1p58>t
- ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
- ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
- ggml_cuda_init: found 1 CUDA devices:
- Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
- register_backend: registered backend CUDA (1 devices)
- register_device: registered device CUDA0 (NVIDIA GeForce RTX 4090)
- register_backend: registered backend CPU (1 devices)
- register_device: registered device CPU (13th Gen Intel(R) Core(TM) i9-13900K)
- load_backend: failed to find ggml_backend_init in .\ggml-cuda.dll
- load_backend: failed to find ggml_backend_init in .\ggml-rpc.dll
- load_backend: failed to find ggml_backend_init in .\ggml-cpu.dll
- build: 4657 (8a59053f) with MSVC 19.39.33521.0 for x64 (debug)
- main: llama backend init
- main: load the model and apply lora adapter, if any
- llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4090) - 22890 MiB free
- llama_model_loader: additional 2 GGUFs metadata loaded.
- llama_model_loader: loaded meta data with 52 key-value pairs and 1025 tensors from DeepSeek-R1-GGUF\DeepSeek-R1-UD-IQ1_S\DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf (version GGUF V3 (latest))
- llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
- llama_model_loader: - kv 0: general.architecture str = deepseek2
- llama_model_loader: - kv 1: general.type str = model
- llama_model_loader: - kv 2: general.name str = DeepSeek R1 BF16
- llama_model_loader: - kv 3: general.quantized_by str = Unsloth
- llama_model_loader: - kv 4: general.size_label str = 256x20B
- llama_model_loader: - kv 5: general.repo_url str = https://huggingface.co/unsloth
- llama_model_loader: - kv 6: deepseek2.block_count u32 = 61
- llama_model_loader: - kv 7: deepseek2.context_length u32 = 163840
- llama_model_loader: - kv 8: deepseek2.embedding_length u32 = 7168
- llama_model_loader: - kv 9: deepseek2.feed_forward_length u32 = 18432
- llama_model_loader: - kv 10: deepseek2.attention.head_count u32 = 128
- llama_model_loader: - kv 11: deepseek2.attention.head_count_kv u32 = 128
- llama_model_loader: - kv 12: deepseek2.rope.freq_base f32 = 10000.000000
- llama_model_loader: - kv 13: deepseek2.attention.layer_norm_rms_epsilon f32 = 0.000001
- llama_model_loader: - kv 14: deepseek2.expert_used_count u32 = 8
- llama_model_loader: - kv 15: deepseek2.leading_dense_block_count u32 = 3
- llama_model_loader: - kv 16: deepseek2.vocab_size u32 = 129280
- llama_model_loader: - kv 17: deepseek2.attention.q_lora_rank u32 = 1536
- llama_model_loader: - kv 18: deepseek2.attention.kv_lora_rank u32 = 512
- llama_model_loader: - kv 19: deepseek2.attention.key_length u32 = 192
- llama_model_loader: - kv 20: deepseek2.attention.value_length u32 = 128
- llama_model_loader: - kv 21: deepseek2.expert_feed_forward_length u32 = 2048
- llama_model_loader: - kv 22: deepseek2.expert_count u32 = 256
- llama_model_loader: - kv 23: deepseek2.expert_shared_count u32 = 1
- llama_model_loader: - kv 24: deepseek2.expert_weights_scale f32 = 2.500000
- llama_model_loader: - kv 25: deepseek2.expert_weights_norm bool = true
- llama_model_loader: - kv 26: deepseek2.expert_gating_func u32 = 2
- llama_model_loader: - kv 27: deepseek2.rope.dimension_count u32 = 64
- llama_model_loader: - kv 28: deepseek2.rope.scaling.type str = yarn
- llama_model_loader: - kv 29: deepseek2.rope.scaling.factor f32 = 40.000000
- llama_model_loader: - kv 30: deepseek2.rope.scaling.original_context_length u32 = 4096
- llama_model_loader: - kv 31: deepseek2.rope.scaling.yarn_log_multiplier f32 = 0.100000
- llama_model_loader: - kv 32: tokenizer.ggml.model str = gpt2
- llama_model_loader: - kv 33: tokenizer.ggml.pre str = deepseek-v3
- llama_model_loader: - kv 34: tokenizer.ggml.tokens arr[str,129280] = ["<|begin▁of▁sentence|>", "<...
- llama_model_loader: - kv 35: tokenizer.ggml.token_type arr[i32,129280] = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
- llama_model_loader: - kv 36: tokenizer.ggml.merges arr[str,127741] = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e...
- llama_model_loader: - kv 37: tokenizer.ggml.bos_token_id u32 = 0
- llama_model_loader: - kv 38: tokenizer.ggml.eos_token_id u32 = 1
- llama_model_loader: - kv 39: tokenizer.ggml.padding_token_id u32 = 128815
- llama_model_loader: - kv 40: tokenizer.ggml.add_bos_token bool = true
- llama_model_loader: - kv 41: tokenizer.ggml.add_eos_token bool = false
- llama_model_loader: - kv 42: tokenizer.chat_template str = {% if not add_generation_prompt is de...
- llama_model_loader: - kv 43: general.quantization_version u32 = 2
- llama_model_loader: - kv 44: general.file_type u32 = 24
- llama_model_loader: - kv 45: quantize.imatrix.file str = DeepSeek-R1.imatrix
- llama_model_loader: - kv 46: quantize.imatrix.dataset str = /training_data/calibration_datav3.txt
- llama_model_loader: - kv 47: quantize.imatrix.entries_count i32 = 720
- llama_model_loader: - kv 48: quantize.imatrix.chunks_count i32 = 124
- llama_model_loader: - kv 49: split.no u16 = 0
- llama_model_loader: - kv 50: split.tensors.count i32 = 1025
- llama_model_loader: - kv 51: split.count u16 = 3
- llama_model_loader: - type f32: 361 tensors
- llama_model_loader: - type q4_K: 190 tensors
- llama_model_loader: - type q5_K: 116 tensors
- llama_model_loader: - type q6_K: 184 tensors
- llama_model_loader: - type iq2_xxs: 6 tensors
- llama_model_loader: - type iq1_s: 168 tensors
- print_info: file format = GGUF V3 (latest)
- print_info: file type = IQ1_S - 1.5625 bpw
- print_info: file size = 130.60 GiB (1.67 BPW)
- load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
- load: special tokens cache size = 819
- load: token to piece cache size = 0.8223 MB
- print_info: arch = deepseek2
- print_info: vocab_only = 0
- print_info: n_ctx_train = 163840
- print_info: n_embd = 7168
- print_info: n_layer = 61
- print_info: n_head = 128
- print_info: n_head_kv = 128
- print_info: n_rot = 64
- print_info: n_swa = 0
- print_info: n_embd_head_k = 192
- print_info: n_embd_head_v = 128
- print_info: n_gqa = 1
- print_info: n_embd_k_gqa = 24576
- print_info: n_embd_v_gqa = 16384
- print_info: f_norm_eps = 0.0e+00
- print_info: f_norm_rms_eps = 1.0e-06
- print_info: f_clamp_kqv = 0.0e+00
- print_info: f_max_alibi_bias = 0.0e+00
- print_info: f_logit_scale = 0.0e+00
- print_info: n_ff = 18432
- print_info: n_expert = 256
- print_info: n_expert_used = 8
- print_info: causal attn = 1
- print_info: pooling type = 0
- print_info: rope type = 0
- print_info: rope scaling = yarn
- print_info: freq_base_train = 10000.0
- print_info: freq_scale_train = 0.025
- print_info: n_ctx_orig_yarn = 4096
- print_info: rope_finetuned = unknown
- print_info: ssm_d_conv = 0
- print_info: ssm_d_inner = 0
- print_info: ssm_d_state = 0
- print_info: ssm_dt_rank = 0
- print_info: ssm_dt_b_c_rms = 0
- print_info: model type = 671B
- print_info: model params = 671.03 B
- print_info: general.name = DeepSeek R1 BF16
- print_info: n_layer_dense_lead = 3
- print_info: n_lora_q = 1536
- print_info: n_lora_kv = 512
- print_info: n_ff_exp = 2048
- print_info: n_expert_shared = 1
- print_info: expert_weights_scale = 2.5
- print_info: expert_weights_norm = 1
- print_info: expert_gating_func = sigmoid
- print_info: rope_yarn_log_mul = 0.1000
- print_info: vocab type = BPE
- print_info: n_vocab = 129280
- print_info: n_merges = 127741
- print_info: BOS token = 0 '<|begin▁of▁sentence|>'
- print_info: EOS token = 1 '<|end▁of▁sentence|>'
- print_info: EOT token = 1 '<|end▁of▁sentence|>'
- print_info: PAD token = 128815 '<|PAD▁TOKEN|>'
- print_info: LF token = 201 'Ċ'
- print_info: FIM PRE token = 128801 '<|fim▁begin|>'
- print_info: FIM SUF token = 128800 '<|fim▁hole|>'
- print_info: FIM MID token = 128802 '<|fim▁end|>'
- print_info: EOG token = 1 '<|end▁of▁sentence|>'
- print_info: max token length = 256
- load_tensors: loading model tensors, this can take a while... (mmap = true)
- load_tensors: offloading 7 repeating layers to GPU
- load_tensors: offloaded 7/62 layers to GPU
- load_tensors: CPU_Mapped model buffer size = 47058.04 MiB
- load_tensors: CPU_Mapped model buffer size = 47109.49 MiB
- load_tensors: CPU_Mapped model buffer size = 23859.37 MiB
- load_tensors: CUDA0 model buffer size = 15703.16 MiB
- llama_init_from_model: n_seq_max = 1
- llama_init_from_model: n_ctx = 8192
- llama_init_from_model: n_ctx_per_seq = 8192
- llama_init_from_model: n_batch = 2048
- llama_init_from_model: n_ubatch = 512
- llama_init_from_model: flash_attn = 0
- llama_init_from_model: freq_base = 10000.0
- llama_init_from_model: freq_scale = 0.025
- llama_init_from_model: n_ctx_per_seq (8192) < n_ctx_train (163840) -- the full capacity of the model will not be utilized
- llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'q4_0', type_v = 'f16', n_layer = 61, can_shift = 0
- llama_kv_cache_init: CPU KV buffer size = 19656.00 MiB
- llama_kv_cache_init: CUDA0 KV buffer size = 2548.00 MiB
- llama_init_from_model: KV self size = 22204.00 MiB, K (q4_0): 6588.00 MiB, V (f16): 15616.00 MiB
- llama_init_from_model: CPU output buffer size = 0.49 MiB
- llama_init_from_model: CUDA0 compute buffer size = 2589.50 MiB
- llama_init_from_model: CUDA_Host compute buffer size = 96.01 MiB
- llama_init_from_model: graph nodes = 5025
- llama_init_from_model: graph splits = 1015 (with bs=512), 3 (with bs=1)
- common_init_from_params: KV cache shifting is not supported for this model, disabling KV cache shifting
- common_init_from_params: setting dry_penalty_last_n to ctx_size = 8192
- common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
- main: llama threadpool init, n_threads = 32
- system_info: n_threads = 32 (n_threads_batch = 32) / 32 | CUDA : ARCHS = 890 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 |
- sampler seed: 3407
- sampler params:
- repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
- dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = 8192
- top_k = 40, top_p = 0.950, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, temp = 0.600
- mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
- sampler chain: logits -> logit-bias -> penalties -> dry -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist
- generate: n_ctx = 8192, n_batch = 2048, n_predict = -1, n_keep = 1
- "<|User|>Without rearranging the digits 1 2 3 4 5, insert mathematical operation signs + - * and/or /, and, if necessary, parentheses so that the resulting expression equals 40. The key is that you can combine digits (like 12+3/45) but you cannot change their order from the original sequence 1,2,3,4,5.<|Assistant|>"
- Okay, so I need to make an expression using the digits 1, 2, 3, 4, 5 in order without rearranging them. I can insert mathematical operations like addition, subtraction, multiplication, division, and parentheses if needed. The goal is to get the result 40. Let me think through this step by step.
- First, let's list the digits in order: 1, 2, 3, 4, 5. I can't change their order, so I have to work with them as they are. The challenge is to insert operations between them to get 40. Hmm.
- I know that 40 is a multiple of 8 and 5, or 4 and 10. Maybe I can find a way to get those numbers somewhere. Let me try different combinations.
- Starting with 1 and 2. If I combine 1 and 2 first, that's 12. Then 3, 4, 5. Maybe 12 plus 3 times (4 + 5). Let's calculate: 12 + 3*(9) = 12 + 27 = 39. Close but not 40. If I can adjust that a bit.
- Alternatively, maybe 1 times 2. So 1*2=2. Then 2 plus 3: 2+3=5. Then 5 times 4=20, and 20 plus 5=25. Not enough. Hmm.
- What if I combine 1 and 2 as 12, then add 3, then multiply by 4, and subtract 5? Let's see: 12 + 3 = 15, 15 *4 =60, 60 -5=55. That's 55, not 40.
- Alternatively, 1 + 2 + 3 + 4 *5. Let's compute: 1+2=3, +3=6, +4=10, then times 5? Wait, no, order matters. So it's 1 + 2 + 3 + (4*5). So that's 1+2=3, +3=6, +20=26. Not 40.
- What if I need to use division somewhere? Maybe 12 divided by something. Let's try 12 / 3 *4 +5. So 12/3=4, 4*4=16, 16+5=21. Nope.
- How about 1 + (2 * 3 * 4) -5. Let's compute: 2*3=6, 6*4=24. Then 1+24=25, 25-5=20. Not enough.
- Wait, maybe parentheses can change the order of operations. For example, 1 * (2 + 3) * (4 + 5). That's 5*9=45. Then subtract 5? Wait, but the digits are in order. So after 1,2,3,4,5. So maybe 1 * (2 + 3) * 4 +5. Let's see: 2+3=5, 1*5=5, *4=20, +5=25. Still not.
- Hmm. Maybe combining digits into larger numbers. Like 12 + 34 - 5. 12+34=46, minus 5=41. Close, but 40. If I can adjust that. But 12 + 34 - 5 is 41, so maybe 12 + 34 - 5 needs to be adjusted. Wait, but can't change the order. Maybe 1 + 2 * 3 * 4 -5. Let's compute: 2*3=6, *4=24. Then 1 +24=25, 25-5=20. No.
- Alternatively, 1 * 2 * (3 + 4 +5). Wait, 3+4+5=12. 2*12=24, 1*24=24. Not 40.
- What if I use division. Let's try 1 * (2 + 3) * (4 +5). Wait, that's 5*9=45. Then subtract 5. But the digits after 4 are 5. So maybe 1*(2+3)*(4+5) but that's 45, but how to get 40? Wait, maybe 1*(2+3)*4 -5. So 1*(5)*4=20, 20-5=15. Not 40.
- Alternatively, 1 + 2 * 3 * 4 -5. Let's compute: 2*3=6, *4=24, then 1+24=25, 25-5=20.
- Maybe combining more digits. Like 12 * 3 + 4 *5. Let's see: 12*3=36, 4*5=20, total 56. Not 40.
- Wait, 1 + 2 + 3 + 4 *5. That's 1+2=3, +3=6, +20=26. No.
- What if I do (1 + 2 + 3 + 4) *5. That's 10*5=50. Then subtract 10? But I can't insert extra digits. Wait, but maybe (1+2+3+4) *5 is 10*5=50. But 50 isn't 40. How to reduce by 10? Maybe (1+2+3+4) * (5 - something). But 5 is the last digit.
- Alternatively, (1 + 2) * (3 + 4) *5. Wait, 3+4=7, 1+2=3, 3*7=21, *5=105. No.
- Hmm. Let's think differently. Maybe 1 * 2 * (3 + 4) *5. So 3+4=7, 2*7=14, 1*14=14, *5=70. No.
- Alternatively, 1 * 2 * 3 *4 +5. So 24 +5=29. Not enough.
- Wait, maybe (1 + 2) * 3 *4 -5. Let's compute: 1+2=3, 3*3=9, 9*4=36, minus5=31.
- What if I use division to get some fractions. Like 12 + 3/(4/5). Let's see, 4/5 is 0.8, then 3 divided by that is 3.75. So 12 +3.75=15.75. Not helpful.
- Alternatively, 1 + (2 * 3 * 4) -5. So 2*3=6, *4=24. 1+24=25, 25-5=20.
- Wait, maybe 1 + 2 * (3 +4) *5. Let's see: 3+4=7, 2*7=14, 14*5=70, +1=71. No.
- Hmm. Maybe there's a different approach. Let's think of combining the digits into bigger numbers. For example, 12, 34, 5. Then 12 + 34 +5=51. Not 40. Or 12 + 34 -5=41. Close. If I can get that 41 down by 1. But how?
- Alternatively, 12 + 3 *4 +5. 12 +12=24, +5=29.
- Wait, maybe 1 + 23 +4 +5. Wait, but we can't rearrange the digits. So digits have to stay in order. So 1,2,3,4,5. So combining digits is allowed as long as order is maintained. So 1 can be combined with 2 as 12, but can't rearrange to make 23. So 12 is okay, but 23 would require changing the order. Wait, no, because 2 and 3 are in order. So 1 can be 1, then 2 and 3 can be combined as 23. Wait, but the original sequence is 1,2,3,4,5. So if we combine 2 and 3 into 23, that's allowed? Or does combining digits require consecutive digits? Wait, the problem says "you can combine digits (like 12+3/45)" so combining adjacent digits. So 12 is combining 1 and 2, then 3,4,5 are separate. But combining 2 and 3 into 23 would require that they are adjacent, which they are. So 1, 23, 4,5. So 1 + 23 +4 +5=33. Not 40.
- Alternatively, 1 + 2 + 34 +5. 34 is combining 3 and4. So 1+2=3, +34=37, +5=42. Close.
- Wait, 1 + 2 + 34 +5=42. If I can subtract 2 somewhere. Hmm. Maybe 1 *2 +34 +5. 1*2=2, +34=36, +5=41. Still close.
- Alternatively, 1 *2 +34 +5=41. If I can adjust it by 1. But how? Maybe 1 *2 +34 -5=2 +34 -5=31. No.
- Alternatively, 1 + 2 *3 *4 -5. 2*3=6, *4=24, 1+24=25, -5=20.
- Wait, maybe 12 *3 +4 *5. 12*3=36, 4*5=20. 36+20=56.
- Hmm. What if I do 1 *2 * (3 +4 +5). 3+4+5=12. 2*12=24, 1*24=24. No.
- Alternatively, (1 +2 +3 +4) *5. That's 10*5=50. Then subtract 10. But how? Since we have to use the digits in order. Maybe 1 +2 +3 +4 *5. Wait, that's different. But order is 1,2,3,4,5. So if I insert operations: 1 +2 +3 +4 *5. That would be 1+2+3+(4*5)=6+20=26. Not 40.
- Wait, perhaps using division somewhere to get a higher number. Like 12 *3 +4 /5. 12*3=36, 4/5=0.8, total 36.8. Not 40.
- Alternatively, 1 *2 / (3/4) *5. Let's compute: 3/4=0.75, 2 divided by that is 2.666..., multiplied by 5=13.333..., 1* that is still 13.333.
- Hmm. Maybe using parentheses to change the order. For example, 1 + (2 *3 *4) -5. 2*3=6, *4=24, +1=25, -5=20.
- Alternatively, 1 * (2 *3 *4) -5. 2*3=6, *4=24, *1=24, -5=19.
- Wait, maybe 1 + 2 * (3 +4 +5). 3+4+5=12, 2*12=24, 1+24=25.
- Still not 40. I'm stuck. Let's think differently. Maybe using subtraction.
- What if I do 123 - something. But 123 is a big number. 123 -45=78. Not 40.
- Alternatively, 12 *3 -4 -5. 12*3=36, 36-4=32, 32-5=27.
- Alternatively, 12 *3 - (4 +5)=36 -9=27.
- Hmm. What if I use division to create fractions. Like 1 * (2 + 3) * (4 +5). That's 5*9=45. Then subtract 5. But how? The last digit is 5, so maybe 45 -5=40. Wait, that would require something like (1*(2+3)*(4+5)) -5. Let me check the digits order.
- Original digits: 1,2,3,4,5.
- If I do 1*(2+3)*(4+5)-5. Let's parse this:
- Between 1 and 2: insert a multiplication and parentheses. Wait, the expression would be 1 * (2 +3) * (4 +5) -5. But the digits must remain in order. So the expression would be constructed as follows:
- 1 * (2 +3) * (4 +5) -5.
- Breaking it down:
- 1 * (2+3) = 1 *5=5
- Then * (4+5)=5 *9=45
- Then subtract 5: 45-5=40.
- Wait, does this work with the original digits? Let's see:
- The digits are in order 1,2,3,4,5. So inserting operations and parentheses:
- 1 * (2 +3) * (4 +5) -5
- Breaking down the digits:
- 1 is followed by 2, then 3, then 4, then 5. So inserting * after 1, then (2+3) would be after 2 and 3, then * (4+5) after 4 and 5. Wait, but the subtraction is at the end. Let me check the structure.
- The expression would be:
- 1 * (2+3) * (4+5) -5
- Which is:
- 1 * (2+3) = 1*5=5
- Then * (4+5)=5*9=45
- Then subtract 5: 45-5=40.
- So yes, this seems to work. Let me verify step by step:
- 1 * (2 +3) = 1*5=5
- 5 * (4 +5) = 5*9=45
- 45 -5=40.
- Yes, that gives 40. So the expression is 1*(2+3)*(4+5)-5.
- But wait, are all the digits in their original order? Let's see:
- The digits are 1,2,3,4,5. The operations inserted are:
- After 1: *
- After 2: +
- After 3: ) * (
- After 4: +
- After 5: ) -5.
- Wait, but the exact placement of parentheses matters. The expression is constructed as 1 * (2+3) * (4+5) -5.
- Breaking down the digits and operations:
- 1 * (2 +3) * (4 +5) -5
- So the original digits are 1,2,3,4,5. We insert *, +, *, +, and subtract. The parentheses are around 2+3 and 4+5.
- So the order of digits remains 1,2,3,4,5, with operations inserted between them. The key is that we're combining 2 and 3 as (2+3), and 4 and5 as (4+5), but without rearranging the digits. So this should be acceptable.
- Therefore, the correct expression is 1*(2+3)*(4+5)-5, which equals 40.
- **Final Answer**
- The expression is \boxed{1 \times (2 + 3) \times (4 + 5) - 5}.
- </think>
- To create an expression using the digits 1, 2, 3, 4, 5 in their original order with mathematical operations to equal 40, we can use the following steps:
- 1. Combine the digits 2 and 3 with an addition operation: \(2 + 3\).
- 2. Combine the digits 4 and 5 with an addition operation: \(4 + 5\).
- 3. Multiply the results of these combinations with the digit 1.
- 4. Subtract the digit 5 from the result.
- The resulting expression is:
- \[
- 1 \times (2 + 3) \times (4 + 5) - 5
- \]
- Breaking down the calculation:
- - \(2 + 3 = 5\)
- 8- \(4 + 5 = 9\)
- &1 \times 5 = 5\)
- - \(5 \times 9 = 45\)
- # 1.2.2 应用函数
- 函数应用指的是将函数作用于参数。在 Lisp 中,函数应用的形式为:
- ```lisp
- (函数名 参数1 参数2 ...)
- ```
- 例如,调用函数 `+` 并将参数 1 和 2 传入,可以写成:
- ```lisp
- (+ 1 2)
- ```
- 这种形式的表达式称为组合式(combination)。组合式由操作符(operator,即函数名)和操作数(operands,即参数)组成。组合式求值时,Lisp 会先对操作符和操作数求值,然后将操作符应用于操作数。例如,在上面的例子中,`+` 作为操作符,求值结果为加法函数;`1` 和 `2` 作为操作数,求值结果为 1 和 2。然后将加法函数作用于 1 和 2,得到结果 3。
- 在 Lisp 中,组合式中的第一个元素必须是一个函数名。例如,表达式 `(+ 1 2)` 中,第一个元素是 `+`,它是一个函数名。其他元素则是函数的参数。Lisp 会先对所有元素求值,然后应用函数。
- 需要注意的是,参数列表中的每个元素也会被求值。例如,表达式 `(1 2 3)` 会导致错误,因为第一个元素 `1` 不是一个 函数名,而是一个数字,无法作为函数应用。因此,在 Lisp 中,组合式的第一个元素必须是一个函数名,否则会导致错误 。
- 总结一下,Lisp 中函数应用的基本形式是:
- ```lisp
- (函数名 参数1 参数2 ...)
- ```
- 其中,函数名必须是可用的函数,参数可以是任意表达式,每个参数都会被求值后再传入函数。这种形式的应用方式称为前 缀表示法(prefix notation),即函数名位于参数前面。
- [end of text]
- llama_perf_sampler_print: sampling time = 5533.75 ms / 4358 runs ( 1.27 ms per token, 787.53 tokens per second)
- llama_perf_context_print: load time = 29480.82 ms
- llama_perf_context_print: prompt eval time = 171157.42 ms / 91 tokens ( 1880.85 ms per token, 0.53 tokens per second)
- llama_perf_context_print: eval time = 10299060.06 ms / 4266 runs ( 2414.22 ms per token, 0.41 tokens per second)
- llama_perf_context_print: total time = 10495858.38 ms / 4357 tokens
Advertisement
Add Comment
Please, Sign In to add comment
Advertisement