- streamlit run .\gui.py
- Welcome to Streamlit!
- If you’d like to receive helpful onboarding emails, news, offers, promotions,
- and the occasional swag, please enter your email address below. Otherwise,
- leave this field blank.
- Email:
- You can find our privacy policy at https://streamlit.io/privacy-policy
- Summary:
- - This open source library collects usage statistics.
- - We cannot see and do not store information contained inside Streamlit apps,
- such as text, charts, images, etc.
- - Telemetry data is stored in servers in the United States.
- - If you'd like to opt out, add the following to %userprofile%/.streamlit/config.toml,
- creating that file if necessary:
- [browser]
- gatherUsageStats = false
- You can now view your Streamlit app in your browser.
- Local URL: http://localhost:8501
- Network URL: http://192.168.1.10:8501
- llama.cpp: loading model from models/ggml-model-q4_0_new.bin
- llama_model_load_internal: format = ggjt v1 (latest)
- llama_model_load_internal: n_vocab = 32000
- llama_model_load_internal: n_ctx = 2048
- llama_model_load_internal: n_embd = 4096
- llama_model_load_internal: n_mult = 256
- llama_model_load_internal: n_head = 32
- llama_model_load_internal: n_layer = 32
- llama_model_load_internal: n_rot = 128
- llama_model_load_internal: ftype = 2 (mostly Q4_0)
- llama_model_load_internal: n_ff = 11008
- llama_model_load_internal: n_parts = 1
- llama_model_load_internal: model size = 7B
- llama_model_load_internal: ggml ctx size = 68.20 KB
- llama_model_load_internal: mem required = 5809.33 MB (+ 2052.00 MB per state)
- llama_init_from_file: kv self size = 2048.00 MB
- AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
- llama.cpp: loading model from models/ggml-vic7b-uncensored-q4_0.bin
- llama_model_load_internal: format = ggjt v1 (latest)
- llama_model_load_internal: n_vocab = 32001
- llama_model_load_internal: n_ctx = 2048
- llama_model_load_internal: n_embd = 4096
- llama_model_load_internal: n_mult = 256
- llama_model_load_internal: n_head = 32
- llama_model_load_internal: n_layer = 32
- llama_model_load_internal: n_rot = 128
- llama_model_load_internal: ftype = 4 (mostly Q4_1, some F16)
- llama_model_load_internal: n_ff = 11008
- llama_model_load_internal: n_parts = 1
- llama_model_load_internal: model size = 7B
- llama_model_load_internal: ggml ctx size = 68.20 KB
- llama_model_load_internal: mem required = 5809.34 MB (+ 1026.00 MB per state)
- ....................................................................................................
- llama_init_from_file: kv self size = 1024.00 MB
- AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
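A quick arithmetic check on the two "kv self size" values above: the KV cache holds a key and a value tensor for every layer across the full context window, so its size is roughly 2 × n_layer × n_ctx × n_embd × bytes-per-element. Assuming the first context keeps its cache in f32 and the second in f16 (an assumption, not something the log states), the reported numbers come out exactly:

    # Hedged sanity check of the reported kv self size values (2048 MB and 1024 MB),
    # assuming a KV cache of 2 tensors (K and V) per layer over the whole context.
    n_layer, n_ctx, n_embd = 32, 2048, 4096
    for name, bytes_per_elem in (("f32", 4), ("f16", 2)):
        size_mb = 2 * n_layer * n_ctx * n_embd * bytes_per_elem / 1024**2
        print(f"{name}: {size_mb:.2f} MB")   # f32: 2048.00 MB, f16: 1024.00 MB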
- llama_print_timings: load time = 755.20 ms
- llama_print_timings: sample time = 0.00 ms / 1 runs ( 0.00 ms per run)
- llama_print_timings: prompt eval time = 754.77 ms / 8 tokens ( 94.35 ms per token)
- llama_print_timings: eval time = 193.21 ms / 1 runs ( 193.21 ms per run)
- llama_print_timings: total time = 955.70 ms
- It is not clear what you mean by "the vector store". Can you please provide more context or clarification?
- llama_print_timings: load time = 42467.43 ms
- llama_print_timings: sample time = 7.95 ms / 32 runs ( 0.25 ms per run)
- llama_print_timings: prompt eval time = 49900.37 ms / 1167 tokens ( 42.76 ms per token)
- llama_print_timings: eval time = 7382.30 ms / 31 runs ( 238.14 ms per run)
- llama_print_timings: total time = 59438.74 ms
- > Question:
- What documents are in the vector store ?
- > Answer:
- It is not clear what you mean by "the vector store". Can you please provide more context or clarification?
- > source_documents\shor.pdf:
- mathematics (2004): 781-793.
- Bernstein, Daniel. "Detecting perfect powers in essentially linear time." Mathematics of Computation of the American Mathematical Society 67.223 (1998): 1253-1283.
- Hardy, Godfrey Harold, et al. An Introduction to the Theory of Numbers. Vol. 4. Oxford: Clarendon Press, 1979.
- Miller, Gary L. "Riemann's hypothesis and tests for primality." Journal of Computer and System Sciences 13.3 (1976): 300-317.
- > source_documents\shor.pdf:
- ... exp{−2πi (a0 + rt) c / q} |c⟩ ... exp{−2πi t r c / q} |c⟩ , with the underbraced sum denoted α_c
- The probability for measuring a specific c′ = kq/r:
-   P[c′] = |⟨c′|Φ̃⟩|² = (r/q²) |α_{c′}|² = (r/q²)(q²/r²) = 1/r
- Period Finding Algorithm
- Overall probability to measure a c of the form kq/r is then Σ ...
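Putting numbers on the 1/r result with the example values used in the later excerpts (q = 512, r = 6): in the idealized case where r divides q evenly, each of the r peaks c′ = kq/r is measured with probability

    P[c′] = 1/r = 1/6 ≈ 0.167,  c′ ≈ k·512/6 for k = 0, …, 5, i.e. near c ≈ 0, 85, 171, 256, 341, 427

Since 6 does not divide 512, the peaks are in practice smeared around those values, but the 1/r intuition carries over approximately.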
- > source_documents\shor.pdf:
- Σ_{a=0} |a⟩ |11^a (mod 21)⟩ = (|0⟩|1⟩ + |1⟩|11⟩ + |2⟩|16⟩ + |3⟩|8⟩ + ...)
- a:              0   1   2   3   4   5   6   7   8   9  10 ...
- 11^a (mod 21):  1  11  16   8   4   2   1  11  16   8   4 ...
- r = 6, but not yet observable
- 5. Quantum Fourier Transform
- apply the QFT on the first register:
-   |Φ̃⟩ = (1/512) Σ_{a=0}^{511} Σ_{c=0}^{511} e^{2πi a c / 512} |c⟩ |11^a (mod 21)⟩
- 6. Measurement!
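The period shown in this excerpt is easy to reproduce; a minimal Python sketch of the 11^a (mod 21) table (purely illustrative, not part of the retrieved document):

    # Rebuild the 11^a (mod 21) table from the excerpt and read off its period r.
    n, x = 21, 11
    values = [pow(x, a, n) for a in range(12)]
    print(values)              # [1, 11, 16, 8, 4, 2, 1, 11, 16, 8, 4, 2]
    r = values[1:].index(1) + 1
    print("period r =", r)     # period r = 6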
- > source_documents\shor.pdf:
- 8. Check r → determine factors
- 1. Choose a random integer x, 1 < x < n
- if it is not coprime with n, e.g. x = 6:
-   → gcd(x, n) = gcd(6, 21) = 3 → 21/3 = 7 → done!
- if it is coprime with n, e.g. x = 11:
-   → gcd(11, 21) = 1 → continue!
- 2. Determine q
-   n² = 441 ≤ q = 2^l < 2n² = 882 → q = 512 = 2^9
- Initial state consisting of two registers of length l:
-   |Φ_i⟩ = |0⟩_{r1} |0⟩_{r2} = |0⟩^{⊗2l}
- 3. Initialize r1
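The classical parts of those steps (the coprimality check, the choice of q, and step 8's "check r → determine factors") fit in a few lines for the n = 21, x = 11 example. Using gcd(x^(r/2) ± 1, n) to turn the period into factors is the standard construction and is an assumption here, since the excerpt only names the step:

    from math import gcd

    n, x = 21, 11
    assert gcd(x, n) == 1            # step 1: x = 11 is coprime with 21, so continue

    l, q = 9, 2**9                   # step 2: n^2 = 441 <= q = 512 < 2*n^2 = 882
    assert n**2 <= q < 2 * n**2

    r = 6                            # period found above (step 8: check r -> factors)
    factors = gcd(pow(x, r // 2) - 1, n), gcd(pow(x, r // 2) + 1, n)
    print(factors)                   # (7, 3) -> 21 = 3 * 7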
- 2023-05-14 11:54:31.499 Uncaught app exception
- Traceback (most recent call last):
- File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
- exec(code, module.__dict__)
- File "C:\Users\Hippa\PycharmProjects\CASALIOY\gui.py", line 90, in <module>
- st.form_submit_button('SUBMIT', on_click=generate_response(st.session_state.input), disabled=st.session_state.running)
- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- File "C:\Users\Hippa\PycharmProjects\CASALIOY\gui.py", line 82, in generate_response
- response = startLLM.main(st.session_state.input, True)
- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- File "C:\Users\Hippa\PycharmProjects\CASALIOY\startLLM.py", line 60, in main
- res = qa_system(query)
- ^^^^^^^^^^^^^^^^
- File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\langchain\chains\base.py", line 140, in __call__
- raise e
- File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\langchain\chains\base.py", line 134, in __call__
- self._call(inputs, run_manager=run_manager)
- File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\langchain\chains\retrieval_qa\base.py", line 119, in _call
- docs = self._get_docs(question)
- ^^^^^^^^^^^^^^^^^^^^^^^^
- File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\langchain\chains\retrieval_qa\base.py", line 181, in _get_docs
- return self.retriever.get_relevant_documents(question)
- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\langchain\vectorstores\base.py", line 375, in get_relevant_documents
- docs = self.vectorstore.max_marginal_relevance_search(
- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\langchain\vectorstores\qdrant.py", line 272, in max_marginal_relevance_search
- embedding = self._embed_query(query)
- ^^^^^^^^^^^^^^^^^^^^^^^^
- File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\langchain\vectorstores\qdrant.py", line 122, in _embed_query
- embedding = self.embeddings.embed_query(query)
- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\langchain\embeddings\llamacpp.py", line 123, in embed_query
- embedding = self.client.embed(text)
- ^^^^^^^^^^^^^^^^^^^^^^^
- File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\llama_cpp\llama.py", line 564, in embed
- return list(map(float, self.create_embedding(input)["data"][0]["embedding"]))
- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\llama_cpp\llama.py", line 530, in create_embedding
- self.eval(tokens)
- File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\llama_cpp\llama.py", line 243, in eval
- return_code = llama_cpp.llama_eval(
- ^^^^^^^^^^^^^^^^^^^^^
- File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\llama_cpp\llama_cpp.py", line 335, in llama_eval
- return _lib.llama_eval(ctx, tokens, n_tokens, n_past, n_threads)
- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- OSError: exception: integer divide by zero
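One thing the top of the traceback shows, independent of the integer-divide-by-zero raised inside llama_eval: on_click=generate_response(st.session_state.input) calls generate_response immediately while the form is being built, instead of registering it as a submit callback, so the whole LLM pipeline runs on every script rerun. A minimal sketch of the usual Streamlit callback pattern (the generate_response name and the input/running session keys are taken from the traceback; everything else is illustrative, and this alone may not fix the llama_eval crash):

    import streamlit as st

    def generate_response() -> None:
        # Read the submitted question inside the callback; the real app would
        # call startLLM.main(...) here instead of this placeholder.
        st.session_state.response = f"echo: {st.session_state.input}"

    if "running" not in st.session_state:
        st.session_state.running = False

    with st.form("qa"):
        st.text_input("Question", key="input")
        st.form_submit_button(
            "SUBMIT",
            on_click=generate_response,     # pass the callable itself, do not call it
            disabled=st.session_state.running,
        )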