GPT-4chan tutorial

Requirements:
1- A working Python installation. TensorFlow is not required; the script below uses PyTorch only.
2- The pytorch library: pip install torch
3- The transformers library: pip install transformers (a quick way to verify both installs follows this list)
4- The model file (pytorch_model.bin). HuggingFace deleted it, but you can download it by torrent: https://archive.org/details/gpt4chan_model
5- The model config file (config.json). It should be placed in the same folder as the model file and can be found here: https://github.com/yk/gpt-4chan-public
6- A computer with 24GB of free RAM. A GPU is not required; the model runs directly on the CPU.
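
To check that both libraries installed correctly, you can run this minimal snippet (purely illustrative; any reasonably recent versions should work):

import torch
import transformers

# If both imports succeed, the environment is ready; versions are printed for reference.
print("torch", torch.__version__)
print("transformers", transformers.__version__)
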
The folder structure will look like this:

gpt4chan_model
\_ pytorch_model.bin
\_ config.json
\_ CITATION.bib
\_ gpt4chan_model_meta.sqlite
\_ gpt4chan_model_meta.xml
\_ LICENSE.txt
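
Only pytorch_model.bin and config.json are strictly needed to load the model. A quick way to confirm they are in place (a small illustrative helper; the path assumes the structure above):

import os

# Check that the two files required for loading actually exist.
for name in ("pytorch_model.bin", "config.json"):
    path = os.path.join("gpt4chan_model", name)
    print(path, "found" if os.path.exists(path) else "MISSING")
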
Then run the model with the following Python script. Place the script in the folder that contains gpt4chan_model, not inside it:

import time

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the weights from the local folder. low_cpu_mem_usage lowers peak RAM
# while loading, which matters on a machine with only 24GB free.
start = time.time()
model = AutoModelForCausalLM.from_pretrained("gpt4chan_model", low_cpu_mem_usage=True)

# GPT-4chan is a fine-tune of GPT-J-6B, so it reuses the GPT-J tokenizer.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
print(f"Model loaded in {time.time() - start:.0f} seconds")

# The prompt mimics the training format: "-----" starts a thread and
# "--- <post number>" starts a post.
prompt = "-----\n--- 865467536\nAI is interesting because "
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

gen_tokens = model.generate(
    input_ids,
    do_sample=True,   # sample instead of greedy decoding
    temperature=0.8,
    top_p=1,
    typical_p=0.3,    # typical decoding: drops tokens with atypical information content
    max_length=128,   # total length in tokens, prompt included
)

gen_text = tokenizer.batch_decode(gen_tokens)[0]
print(gen_text)

It takes a few minutes to run the script above.
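
Since do_sample=True makes the output different on every run, you can seed the random number generators first if you want reproducible text (a sketch using transformers' set_seed helper, which seeds Python, NumPy, and torch in one call):

from transformers import set_seed

set_seed(42)  # call once before model.generate() for repeatable samples
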
In case config.json is taken down, here are its contents:

{
  "activation_function": "gelu_new",
  "architectures": [
    "GPTJForCausalLM"
  ],
  "attn_pdrop": 0.0,
  "bos_token_id": 50256,
  "embd_pdrop": 0.0,
  "eos_token_id": 50256,
  "gradient_checkpointing": false,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-05,
  "model_type": "gptj",
  "n_embd": 4096,
  "n_head": 16,
  "n_layer": 28,
  "n_positions": 2048,
  "rotary_dim": 64,
  "summary_activation": null,
  "summary_first_dropout": 0.1,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "transformers_version": "4.10.0.dev0",
  "tokenizer_class": "GPT2Tokenizer",
  "task_specific_params": {
    "text-generation": {
      "do_sample": true,
      "temperature": 1.0,
      "max_length": 50
    }
  },
  "torch_dtype": "float32",
  "use_cache": true,
  "vocab_size": 50400
}
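
If you recreate the file by hand, a quick way to make sure it parses and matches the listing (an illustrative check; the two asserted values come straight from the JSON above):

import json

with open("gpt4chan_model/config.json") as f:
    config = json.load(f)

# Spot-check a couple of fields against the listing above.
assert config["model_type"] == "gptj"
assert config["vocab_size"] == 50400
print("config.json parsed OK")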