- rem https://github.com/oobabooga/text-generation-webui
- rem A Gradio web UI for Large Language Models.
- rem #simple #duh
- rem INSTALL LOCATION (git clone into the directory below, then run start_windows.bat)
- C:\text-generation-webui\text-generation-webui\start_windows.bat
- rem One-click installer
- rem For users who need additional backends (ExLlamaV3, Transformers) or extensions (TTS, voice input, translation, etc.). Requires ~10GB of disk space and downloads PyTorch.
- rem Clone the repository, or download its source code and extract it.
- rem Run the startup script for your OS: start_windows.bat, start_linux.sh, or start_macos.sh.
- rem When prompted, select your GPU vendor.
- rem After installation, open http://127.0.0.1:7860 in your browser.
- rem To restart the web UI later, run the same start_ script.
- rem You can pass command-line flags directly (e.g., ./start_linux.sh --help), or add them to user_data/CMD_FLAGS.txt (e.g., --api to enable the API).
- rem To update, run the update script for your OS: update_wizard_windows.bat, update_wizard_linux.sh, or update_wizard_macos.sh.
- rem To reinstall with a fresh Python environment, delete the installer_files folder and run the start_ script again.
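The flag setup described above can be sketched as shell commands. This is a hedged sketch, assuming you are already inside the cloned repository directory; the `user_data/CMD_FLAGS.txt` path comes from the notes above, but verify it against your install.

```shell
# Sketch: persist launch flags so every start picks them up.
# Assumes the current directory is the cloned text-generation-webui repo.
mkdir -p user_data

# Example from the notes above: enable the OpenAI-compatible API.
echo "--api" >> user_data/CMD_FLAGS.txt

# Flags can also be passed directly to the start script instead, e.g.:
#   ./start_linux.sh --api
cat user_data/CMD_FLAGS.txt
```

Either route (direct flags or CMD_FLAGS.txt) reaches the same place; the file is just more convenient for flags you want on every launch.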
- rem Supports multiple local text generation backends, including llama.cpp, Transformers, ExLlamaV3, ExLlamaV2, and TensorRT-LLM (the latter via its own Dockerfile).
- rem Easy setup: Choose between portable builds (zero setup, just unzip and run) for GGUF models on Windows/Linux/macOS, or the one-click installer that creates a self-contained installer_files directory.
- rem 100% offline and private, with zero telemetry, external resources, or remote update requests.
- rem File attachments: Upload text files, PDF documents, and .docx documents to talk about their contents.
- rem Vision (multimodal models): Attach images to messages for visual understanding (tutorial).
- rem Web search: Optionally search the internet with LLM-generated queries to add context to the conversation.
- rem Aesthetic UI with dark and light themes.
- rem Syntax highlighting for code blocks and LaTeX rendering for mathematical expressions.
- rem Instruct mode for instruction-following (like ChatGPT), and chat-instruct/chat modes for talking to custom characters.
- rem Automatic prompt formatting using Jinja2 templates. You never need to worry about prompt formats.
- rem Edit messages, navigate between message versions, and branch conversations at any point.
- rem Multiple sampling parameters and generation options for sophisticated text generation control.
- rem Switch between different models in the UI without restarting.
- rem Automatic GPU layers for GGUF models (on NVIDIA GPUs).
- rem Free-form text generation in the Notebook tab without being limited to chat turns.
- rem OpenAI-compatible API with Chat and Completions endpoints, including tool-calling support (see examples).
- rem Extension support, with numerous built-in and user-contributed extensions available. See the wiki and extensions directory for details.
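As a sketch of calling the OpenAI-compatible API mentioned above: the request body below follows the standard Chat Completions shape, and the port (5000) and endpoint path are assumptions about the default `--api` configuration, so check them against your own install before use.

```shell
# Sketch: build a minimal Chat Completions request body.
# The endpoint/port in the comment below are assumed defaults, not
# verified against your install.
cat > request.json <<'EOF'
{
  "messages": [{"role": "user", "content": "Hello!"}],
  "max_tokens": 64
}
EOF

# With the server running (started with --api), send it with curl:
#   curl http://127.0.0.1:5000/v1/chat/completions \
#        -H "Content-Type: application/json" -d @request.json
cat request.json
```

Because the API mirrors the OpenAI schema, existing OpenAI client libraries can usually be pointed at the local base URL instead of api.openai.com.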