A powerful CLI for chatting with AI models through OpenRouter with streaming responses, token tracking, and extensive customization options.
- Universal Model Access: Connect to any AI model available on OpenRouter
- Interactive Chat: Enjoy a smooth conversation experience with real-time streaming responses
- Rich Markdown Rendering: View formatted text, code blocks, tables and more directly in your terminal
- Performance Analytics: Track token usage and response times for cost efficiency
- Multimodal Support: Share images and various file types with compatible AI models
- Smart Thinking Mode: See the AI's reasoning process with compatible models
- Extensible Plugin System: Easily extend functionality with custom plugins
- Multiple Export Formats: Save conversations as Markdown, HTML, JSON, TXT, or PDF
- Smart Context Management: Automatically manages conversation history to stay within token limits
- Customizable Themes: Choose from different visual themes for your terminal
- File Attachment Support: Share files of various types with the AI for analysis
Install from PyPI:

```bash
pip install orchat
```

Or run from source:

```bash
git clone https://github.com/oop7/OrChat.git
cd OrChat
pip install -r requirements.txt
python main.py
```
- Python 3.7 or higher
- An OpenRouter API key (get one at OpenRouter.ai)
- Required packages: requests, tiktoken, rich, python-dotenv, colorama
- Install OrChat using one of the methods above
- Run the setup wizard:
  - If you installed from PyPI:

    ```bash
    orchat --setup
    ```

  - If you installed from source:

    ```bash
    python main.py --setup
    ```
- Enter your OpenRouter API key when prompted
- Select your preferred AI model and configure settings
- Start chatting!
OrChat can be configured in multiple ways:
- Setup Wizard: Run `python main.py --setup` for interactive configuration
- Config File: Edit the `config.ini` file in the application directory
- Environment Variables: Create a `.env` file with your configuration

Example `.env` file:

```
OPENROUTER_API_KEY=your_api_key_here
```
Example `config.ini` structure:

```ini
[API]
OPENROUTER_API_KEY = your_api_key_here

[SETTINGS]
MODEL = anthropic/claude-3-opus
TEMPERATURE = 0.7
SYSTEM_INSTRUCTIONS = You are a helpful AI assistant.
THEME = default
MAX_TOKENS = 8000
AUTOSAVE_INTERVAL = 300
STREAMING = True
THINKING_MODE = True
```
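A file like the `config.ini` above can be read with Python's standard `configparser`. The sketch below is illustrative, not OrChat's actual loading code; only the section and key names are taken from the example:

```python
import configparser

def load_settings(path="config.ini"):
    """Read the API key and chat settings from an INI file like the example above."""
    parser = configparser.ConfigParser()
    parser.read(path)
    return {
        "api_key": parser.get("API", "OPENROUTER_API_KEY", fallback=None),
        "model": parser.get("SETTINGS", "MODEL", fallback="anthropic/claude-3-opus"),
        "temperature": parser.getfloat("SETTINGS", "TEMPERATURE", fallback=0.7),
        "streaming": parser.getboolean("SETTINGS", "STREAMING", fallback=True),
    }
```

The `fallback` arguments make every setting optional, so a partial config file still loads.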
- `--setup`: Run the setup wizard
- `--model MODEL`: Specify the model to use (e.g., `--model "anthropic/claude-3-opus"`)
- `--task {creative,coding,analysis,chat}`: Optimize for a specific task type
- `--image PATH`: Analyze an image file
| Command | Description |
|---------|-------------|
| `/help` | Show available commands |
| `/exit` | Exit the chat |
| `/new` | Start a new conversation |
| `/clear` | Clear conversation history |
| `/cls` or `/clear-screen` | Clear the terminal screen |
| `/save [format]` | Save conversation (formats: md, html, json, txt, pdf) |
| `/model` | Change the AI model |
| `/temperature <0.0-2.0>` | Adjust temperature setting |
| `/system` | View or change system instructions |
| `/tokens` | Show token usage statistics |
| `/speed` | Show response time statistics |
| `/theme <theme>` | Change the color theme (default, dark, light, hacker) |
| `/thinking` | Show last AI thinking process |
| `/thinking-mode` | Toggle thinking mode on/off |
| `/attach` or `/upload` | Share a file with the AI |
| `/about` | Show information about OrChat |
| `/update` | Check for updates |
| `/settings` | View current settings |
Share files with the AI for analysis:
```
/attach path/to/your/file.ext
```
Supported file types:
- Images: JPG, PNG, GIF, WEBP (displayed visually with multimodal models)
- Code Files: Python, JavaScript, Java, C++, etc. (with syntax highlighting)
- Text Documents: TXT, MD, CSV (full content displayed)
- Data Files: JSON, XML (displayed with formatting)
- PDFs and Archives: Basic metadata support
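Attachment handling of this kind usually begins by classifying the file from its extension. The sketch below follows the categories listed above, but the mapping itself is illustrative rather than OrChat's actual table:

```python
from pathlib import Path

# Illustrative extension -> category mapping, following the list above
CATEGORIES = {
    ".jpg": "image", ".png": "image", ".gif": "image", ".webp": "image",
    ".py": "code", ".js": "code", ".java": "code", ".cpp": "code",
    ".txt": "text", ".md": "text", ".csv": "text",
    ".json": "data", ".xml": "data",
    ".pdf": "metadata", ".zip": "metadata",
}

def classify_attachment(path):
    """Return the attachment category for a file path, or 'unknown'."""
    return CATEGORIES.get(Path(path).suffix.lower(), "unknown")
```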
With compatible models (Claude 3.7 Sonnet, Gemini, DeepSeek, Qwen, etc.), OrChat can display the AI's reasoning process:
```
/thinking-mode   # Toggle thinking mode on/off
/thinking        # Show the most recent thinking process
```
This feature allows you to see how the AI approached your question before giving its final answer.
Extend OrChat's functionality with custom plugins by creating Python files in the `plugins` directory:

```python
from main import Plugin

class MyCustomPlugin(Plugin):
    def __init__(self):
        super().__init__("My Plugin", "Description of what my plugin does")

    def on_message(self, message, role):
        # Process message before sending/after receiving
        return message

    def on_command(self, command, args):
        # Handle custom commands
        if command == "my_command":
            return True, "Command executed successfully!"
        return False, "Command not handled"

    def get_commands(self):
        return ["/my_command - Description of my custom command"]
```
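The `Plugin` base class is defined in `main.py`. To try the hook interface outside OrChat, a minimal stand-in base (its shape here is an assumption inferred from the constructor call in the example) makes a plugin runnable on its own:

```python
class Plugin:
    """Minimal stand-in for OrChat's Plugin base class (shape assumed)."""
    def __init__(self, name, description):
        self.name = name
        self.description = description

class EchoPlugin(Plugin):
    """Toy plugin: /echo repeats its arguments back."""
    def __init__(self):
        super().__init__("Echo", "Repeats the arguments of /echo")

    def on_command(self, command, args):
        if command == "echo":
            return True, " ".join(args)
        return False, "Command not handled"
```

The `(handled, result)` tuple convention mirrors the example above: the boolean tells OrChat whether the plugin consumed the command.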
Change the visual appearance with the `/theme` command:
- default: Blue user, green assistant
- dark: Cyan user, magenta assistant
- light: Blue user, green assistant with lighter colors
- hacker: Matrix-inspired green text on black
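Since OrChat renders with Rich, a theme boils down to a style per speaker. The mapping below is a hypothetical sketch matching the list above; the exact style strings are assumptions, not OrChat's real theme table:

```python
# Illustrative theme table: name -> role -> Rich style string
THEMES = {
    "default": {"user": "blue", "assistant": "green"},
    "dark":    {"user": "cyan", "assistant": "magenta"},
    "light":   {"user": "bright_blue", "assistant": "bright_green"},
    "hacker":  {"user": "green on black", "assistant": "green on black"},
}

def style_for(theme, role):
    """Look up the style for a role, falling back to the default theme."""
    return THEMES.get(theme, THEMES["default"])[role]
```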
OrChat intelligently manages conversation context to keep within token limits:
- Automatically trims old messages when approaching limits
- Displays token usage statistics after each response
- Allows manual clearing of context with `/clear`
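A common way to implement this kind of trimming is to drop the oldest non-system messages until the estimated token count fits. The sketch below is illustrative, with the token counter passed in as a parameter rather than fixed to tiktoken:

```python
def trim_history(messages, count_tokens, max_tokens):
    """Drop the oldest non-system messages until the total token count fits the limit."""
    trimmed = list(messages)

    def total():
        return sum(count_tokens(m["content"]) for m in trimmed)

    while total() > max_tokens:
        for i, m in enumerate(trimmed):
            if m["role"] != "system":
                del trimmed[i]  # remove the oldest non-system message
                break
        else:
            break  # only system messages left; nothing more to drop
    return trimmed
```

Keeping system messages exempt preserves the assistant's instructions no matter how long the chat runs.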
Check for updates with the `/update` command to see if a newer version is available.
- API Key Issues: Ensure your OpenRouter API key is correctly set in the `config.ini` or `.env` file
- File Path Problems: When using `/attach`, make sure to use the correct path format for your OS
- Model Compatibility: Some features like thinking mode only work with specific models
This project is licensed under the MIT License - see the LICENSE file for details.
Contributions are welcome! Feel free to open issues or submit pull requests.
- OpenRouter for providing unified API access to AI models
- Rich for the beautiful terminal interface