BaSs_HaXoR

OOBABOOGA

Nov 25th, 2025 (edited)
rem https://github.com/oobabooga/text-generation-webui
rem A Gradio web UI for Large Language Models.


rem #simple #duh
rem INSTALL TO HERE (git clone into C:\text-generation-webui, then run start_windows.bat)
C:\text-generation-webui\text-generation-webui\start_windows.bat
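
rem Example (a minimal sketch of the commands implied above; requires git on PATH,
rem and any parent directory works, this one just matches the path shown):
cd /d C:\
mkdir text-generation-webui
cd text-generation-webui
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
start_windows.bat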


rem One-click installer
rem For users who need additional backends (ExLlamaV3, Transformers) or extensions (TTS, voice input, translation, etc.).
rem Requires ~10GB of disk space and downloads PyTorch.

rem Clone the repository, or download its source code and extract it.
rem Run the startup script for your OS: start_windows.bat, start_linux.sh, or start_macos.sh.
rem When prompted, select your GPU vendor.
rem After installation, open http://127.0.0.1:7860 in your browser.
rem To restart the web UI later, run the same start_ script.

rem You can pass command-line flags directly (e.g., ./start_linux.sh --help), or add them to user_data/CMD_FLAGS.txt (e.g., --api to enable the API).
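
rem Example (a sketch; assumes the Windows start script forwards flags the same way
rem as ./start_linux.sh, and that CMD_FLAGS.txt sits under user_data in the repo root):
start_windows.bat --api
rem Or persist the flag so every launch picks it up:
echo --api>> user_data\CMD_FLAGS.txt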

rem To update, run the update script for your OS: update_wizard_windows.bat, update_wizard_linux.sh, or update_wizard_macos.sh.
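
rem Example (Windows, assuming the install path used above):
C:\text-generation-webui\text-generation-webui\update_wizard_windows.bat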

rem To reinstall with a fresh Python environment, delete the installer_files folder and run the start_ script again.
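
rem Example (Windows; a sketch that removes only the self-contained Python
rem environment created by the installer, then rebuilds it on the next launch):
cd /d C:\text-generation-webui\text-generation-webui
rmdir /s /q installer_files
start_windows.bat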


rem Features:
rem Supports multiple local text generation backends, including llama.cpp, Transformers, ExLlamaV3, ExLlamaV2, and TensorRT-LLM (the latter via its own Dockerfile).
rem Easy setup: Choose between portable builds (zero setup, just unzip and run) for GGUF models on Windows/Linux/macOS, or the one-click installer that creates a self-contained installer_files directory.
rem 100% offline and private, with zero telemetry, external resources, or remote update requests.
rem File attachments: Upload text files, PDF documents, and .docx documents to talk about their contents.
rem Vision (multimodal models): Attach images to messages for visual understanding (tutorial).
rem Web search: Optionally search the internet with LLM-generated queries to add context to the conversation.
rem Aesthetic UI with dark and light themes.
rem Syntax highlighting for code blocks and LaTeX rendering for mathematical expressions.
rem Instruct mode for instruction-following (like ChatGPT), and chat-instruct/chat modes for talking to custom characters.
rem Automatic prompt formatting using Jinja2 templates. You never need to worry about prompt formats.
rem Edit messages, navigate between message versions, and branch conversations at any point.
rem Multiple sampling parameters and generation options for sophisticated text generation control.
rem Switch between different models in the UI without restarting.
rem Automatic GPU layers for GGUF models (on NVIDIA GPUs).
rem Free-form text generation in the Notebook tab without being limited to chat turns.
rem OpenAI-compatible API with Chat and Completions endpoints, including tool-calling support – see examples, and the minimal request sketched after this list.
rem Extension support, with numerous built-in and user-contributed extensions available. See the wiki and extensions directory for details.
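
rem Example request against the OpenAI-compatible API (a sketch: assumes the UI was
rem started with --api and that the API listens on its default port 5000; check the
rem console output for the actual address). curl ships with recent Windows builds.
curl http://127.0.0.1:5000/v1/chat/completions ^
  -H "Content-Type: application/json" ^
  -d "{\"messages\": [{\"role\": \"user\", \"content\": \"Hello!\"}], \"max_tokens\": 64}"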