a guest
May 14th, 2023
streamlit run .\gui.py

Welcome to Streamlit!

If you'd like to receive helpful onboarding emails, news, offers, promotions,
and the occasional swag, please enter your email address below. Otherwise,
leave this field blank.

Email:

You can find our privacy policy at https://streamlit.io/privacy-policy

Summary:
- This open source library collects usage statistics.
- We cannot see and do not store information contained inside Streamlit apps,
  such as text, charts, images, etc.
- Telemetry data is stored in servers in the United States.
- If you'd like to opt out, add the following to %userprofile%/.streamlit/config.toml,
  creating that file if necessary:

[browser]
gatherUsageStats = false


You can now view your Streamlit app in your browser.

Local URL: http://localhost:8501
Network URL: http://192.168.1.10:8501
llama.cpp: loading model from models/ggml-model-q4_0_new.bin
llama_model_load_internal: format = ggjt v1 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 68.20 KB
llama_model_load_internal: mem required = 5809.33 MB (+ 2052.00 MB per state)
llama_init_from_file: kv self size = 2048.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
llama.cpp: loading model from models/ggml-vic7b-uncensored-q4_0.bin
llama_model_load_internal: format = ggjt v1 (latest)
llama_model_load_internal: n_vocab = 32001
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 4 (mostly Q4_1, some F16)
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 68.20 KB
llama_model_load_internal: mem required = 5809.34 MB (+ 1026.00 MB per state)
....................................................................................................
llama_init_from_file: kv self size = 1024.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |

llama_print_timings: load time = 755.20 ms
llama_print_timings: sample time = 0.00 ms / 1 runs ( 0.00 ms per run)
llama_print_timings: prompt eval time = 754.77 ms / 8 tokens ( 94.35 ms per token)
llama_print_timings: eval time = 193.21 ms / 1 runs ( 193.21 ms per run)
llama_print_timings: total time = 955.70 ms
It is not clear what you mean by "the vector store". Can you please provide more context or clarification?

llama_print_timings: load time = 42467.43 ms
llama_print_timings: sample time = 7.95 ms / 32 runs ( 0.25 ms per run)
llama_print_timings: prompt eval time = 49900.37 ms / 1167 tokens ( 42.76 ms per token)
llama_print_timings: eval time = 7382.30 ms / 31 runs ( 238.14 ms per run)
llama_print_timings: total time = 59438.74 ms


> Question:
What documents are in the vector store ?

> Answer:
It is not clear what you mean by "the vector store". Can you please provide more context or clarification?

> source_documents\shor.pdf:
mathematics (2004): 781-793.

- Bernstein, Daniel. "Detecting perfect powers in essentially linear time." Mathematics of Computation of the American Mathematical Society 67.223 (1998): 1253-1283.

- Hardy, Godfrey Harold, et al. An introduction to the theory of numbers. Vol. 4. Oxford: Clarendon Press, 1979.

- Miller, Gary L. "Riemann's hypothesis and tests for primality." Journal of Computer and System Sciences 13.3 (1976): 300-317.

> source_documents\shor.pdf:
… exp{−(2πi/q)(a0 + rt)c} |c⟩ … exp{(2πi/q)trc} |c⟩ ,
with the underbraced sum denoted αc.

- The probability for measuring a specific c′ = kq/r:

  P[c′] = |⟨c′|Φ̃⟩|² = (r/q²)|αc′|² = (r/q²)·(q²/r²) = 1/r

Period Finding Algorithm

- Overall probability to measure a c of the form kq/r is then Σ …

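The chunk above ends in the P[c′] = 1/r result. As a sanity check (not part of the original slides), a short numpy sketch can reproduce the first-register measurement distribution for this paste's example (n = 21, x = 11, q = 512) and confirm probability ≈ 1/6 at the exact multiples of q/r:

```python
import numpy as np

# Example values taken from the retrieved slides: n = 21, x = 11, q = 512.
n, x, q = 21, 11, 512
f = np.array([pow(x, a, n) for a in range(q)])  # 11^a mod 21, period r = 6

# After the QFT, the joint amplitude for (c, y) is (1/q) * sum over the a's
# with x^a mod n == y of exp(±2*pi*i*a*c/q); the magnitudes are identical
# for either sign convention, so the DFT of each indicator vector suffices.
P = np.zeros(q)
for y in np.unique(f):
    indicator = (f == y).astype(float)
    amp = np.fft.fft(indicator) / q
    P += np.abs(amp) ** 2

print(round(float(P.sum()), 6))  # distribution is normalized
print(round(float(P[0]), 4))     # ~1/6 at c = 0 (k = 0)
print(round(float(P[256]), 4))   # ~1/6 at c = 256 = 3*q/r (k = 3)
```

Since r = 6 does not divide q = 512, the other peaks sit near (not at) kq/r ≈ 85.3, 170.7, 341.3, 426.7, which is why the slides only claim P[c′] = 1/r for the exact multiples.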
> source_documents\shor.pdf:
Σ_{a=0} |a⟩ |11^a (mod 21)⟩ = (|0⟩|1⟩ + |1⟩|11⟩ + |2⟩|16⟩ + |3⟩|8⟩ + ...)

a              0  1  2  3  4  5  6  7  8  9 10 ...
11^a (mod 21)  1 11 16  8  4  2  1 11 16  8  4 ...

- r = 6, but not yet observable

5. Quantum Fourier Transform

- apply the QFT on the first register:

  |Φ̃⟩ = (1/512) Σ_{a=0}^{511} Σ_{c=0}^{511} e^{2πiac/512} |c⟩ |11^a (mod 21)⟩

6. Measurement!

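The 11^a (mod 21) table in the chunk above is easy to verify with a few lines of Python; the period r is the first a > 0 with 11^a ≡ 1 (mod 21):

```python
# Reproduce the 11^a (mod 21) table from the retrieved slide and read off r.
n, x = 21, 11
seq = [pow(x, a, n) for a in range(12)]
print(seq)  # [1, 11, 16, 8, 4, 2, 1, 11, 16, 8, 4, 2]

r = seq[1:].index(1) + 1  # first a > 0 with x^a mod n == 1
print(r)    # 6, matching "r = 6, but not yet observable"
```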
> source_documents\shor.pdf:
8. Check r → determine factors

1. Choose a random integer x, 1 < x < n

- if it is not coprime with n, e.g. x = 6:
  → gcd(x, n) = gcd(6, 21) = 3 → 21/3 = 7 → done!

- if it is coprime with n, e.g. x = 11:
  → gcd(11, 21) = 1 → continue!

2. Determine q

- n² = 441 ≤ q = 2^l < 2n² = 882
  → q = 512 = 2^9

- Initial state consisting of two registers of length l:
  |Φᵢ⟩ = |0⟩_{r1} |0⟩_{r2} = |0⟩^{⊗2l}

3. Initialize r1
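The classical parts of the steps listed above (the gcd screening, the choice of q, and the final "check r → determine factors") can be sketched directly; the values below are the ones from the slides, not new data:

```python
from math import gcd

n = 21

# Step 1: screen a candidate x with gcd (slide examples x = 6 and x = 11).
print(gcd(6, n))   # 3 -> shares a factor, 21/3 = 7, done without quantum work
print(gcd(11, n))  # 1 -> coprime, continue with x = 11

# Step 2: smallest q = 2**l with n**2 <= q (and q < 2*n**2, i.e. 441 <= q < 882).
q = next(2**l for l in range(20) if n**2 <= 2**l)
print(q)           # 512 = 2**9

# Step 8: with the measured period r = 6, candidate factors come from
# gcd(x**(r//2) - 1, n) and gcd(x**(r//2) + 1, n).
x, r = 11, 6
y = pow(x, r // 2, n)                  # 11**3 mod 21 = 8
print(gcd(y - 1, n), gcd(y + 1, n))    # 7 3 -> 21 = 3 * 7
```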
2023-05-14 11:54:31.499 Uncaught app exception
Traceback (most recent call last):
  File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\Hippa\PycharmProjects\CASALIOY\gui.py", line 90, in <module>
    st.form_submit_button('SUBMIT', on_click=generate_response(st.session_state.input), disabled=st.session_state.running)
                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Hippa\PycharmProjects\CASALIOY\gui.py", line 82, in generate_response
    response = startLLM.main(st.session_state.input, True)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Hippa\PycharmProjects\CASALIOY\startLLM.py", line 60, in main
    res = qa_system(query)
          ^^^^^^^^^^^^^^^^
  File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\langchain\chains\base.py", line 140, in __call__
    raise e
  File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\langchain\chains\base.py", line 134, in __call__
    self._call(inputs, run_manager=run_manager)
  File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\langchain\chains\retrieval_qa\base.py", line 119, in _call
    docs = self._get_docs(question)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\langchain\chains\retrieval_qa\base.py", line 181, in _get_docs
    return self.retriever.get_relevant_documents(question)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\langchain\vectorstores\base.py", line 375, in get_relevant_documents
    docs = self.vectorstore.max_marginal_relevance_search(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\langchain\vectorstores\qdrant.py", line 272, in max_marginal_relevance_search
    embedding = self._embed_query(query)
                ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\langchain\vectorstores\qdrant.py", line 122, in _embed_query
    embedding = self.embeddings.embed_query(query)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\langchain\embeddings\llamacpp.py", line 123, in embed_query
    embedding = self.client.embed(text)
                ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\llama_cpp\llama.py", line 564, in embed
    return list(map(float, self.create_embedding(input)["data"][0]["embedding"]))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\llama_cpp\llama.py", line 530, in create_embedding
    self.eval(tokens)
  File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\llama_cpp\llama.py", line 243, in eval
    return_code = llama_cpp.llama_eval(
                  ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Hippa\PycharmProjects\CASALIOY\venv\Lib\site-packages\llama_cpp\llama_cpp.py", line 335, in llama_eval
    return _lib.llama_eval(ctx, tokens, n_tokens, n_past, n_threads)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: exception: integer divide by zero
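The OSError itself is raised inside llama.cpp's native `llama_eval` during the embedding call, and this log alone is not enough to pin down its root cause. The traceback does expose a separate, fixable bug, though: at gui.py line 90, `on_click=generate_response(st.session_state.input)` *calls* the handler while the script is rendering and passes its return value as the callback, instead of passing the callable itself. A Streamlit-independent sketch of the difference (with a hypothetical stand-in for `generate_response`):

```python
from functools import partial

# Hypothetical stand-in for the app's handler (the real one calls startLLM.main).
def generate_response(prompt):
    return f"response to: {prompt}"

# Bug from gui.py line 90: on_click=generate_response(...) executes the
# handler at render time and hands its *return value* (a string) to
# on_click. A callback must be a callable; bind the argument instead:
callback = partial(generate_response, "What documents are in the vector store ?")

assert callable(callback)  # a real callback, not a string
print(callback())
```

In the app itself, the equivalent fix would be to use `form_submit_button`'s callback parameters, e.g. `st.form_submit_button('SUBMIT', on_click=generate_response, args=(st.session_state.input,), disabled=st.session_state.running)`, so the handler only runs when the button is pressed.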