For local AI enthusiasts, managing Ollama models just got significantly easier. **Brave Browser's integrated AI assistant, LEO, now directly supports local Ollama installations.** This eliminates the need for separate interfaces like OpenWebUI, streamlining your workflow and enhancing privacy by keeping AI interactions within your browser.
This guide provides step-by-step instructions to configure Brave LEO for your local Ollama setup, including adding multiple models and accessing the chat UI.
---
### Prerequisites
* **Brave Browser:** You'll need **version 1.69 or higher**. As of July 2025, current stable desktop versions are well past this, around 1.80.x, so just make sure you're on the latest release for the best features.
* **Ollama:** Make sure you have **Ollama installed** and at least one model downloaded. If you don't, head over to [ollama.com](https://ollama.com/) for quick setup instructions.
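If you're starting fresh, the whole prerequisite check takes three commands. Here's a quick sketch (llama3 is just an example model; substitute whatever you prefer, and note this assumes Ollama's default port of 11434):

```shell
# Download an example model (any model from ollama.com/library works)
ollama pull llama3

# Confirm the model downloaded successfully
ollama list

# Verify the Ollama server is running and responding on its default port
curl http://localhost:11434/api/tags
```

If that last `curl` returns a JSON list of your models, you're ready to configure Brave.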
---
### Configuring Brave LEO with Local Ollama
1. **Open Brave Settings:** Click the menu icon (three horizontal lines) in Brave's top-right corner, then select "Settings."
2. **Navigate to LEO Settings:** In the left sidebar, click on "Leo."
3. **Access "Bring Your Own Model" (BYOM):** Scroll down until you find the "Bring your own model" section.
4. **Add a New Model:** Click the "Add new model" button.
5. **Enter Model Details:**
* **Label:** Give it a friendly name for Brave's menu, like "My Llama3-8B."
* **Model request name:** This is crucial: it **must exactly match** the model name you used in Ollama (e.g., `llama3`). You can verify this by running `ollama list` in your terminal.
* **Server endpoint:** For Ollama, this will always be `http://localhost:11434/v1/chat/completions`.
* **API Key:** Leave this field **blank** for local Ollama installations.
6. **Add Model:** Click "Add model."
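Before relying on Brave, it's worth sanity-checking the endpoint from the terminal. This sketch sends the same style of OpenAI-compatible request Brave will make (assumes Ollama is running locally and `llama3` is the model name from your `ollama list` output):

```shell
# Test the OpenAI-compatible chat endpoint directly
# (replace "llama3" with your model's exact name from `ollama list`)
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello"}]}'
```

If you get a JSON response back with a `choices` array, Brave LEO will work with the same settings. If you get a "model not found" error, double-check the "Model request name" field.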
---
### Setting Up Multiple Models
The cool thing about Brave's BYOM feature is you aren't limited to just one local model. You can add as many as you have downloaded and are running with Ollama! Simply repeat steps 4-6 for each additional Ollama model you want to make available in Brave LEO. This allows you to seamlessly switch between different models directly from the browser.
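For example, pulling a couple of extra models from the terminal looks like this (mistral and phi3 are just examples):

```shell
# Pull any additional models you want available in Leo
ollama pull mistral
ollama pull phi3

# The NAME column of `ollama list` gives the exact string to enter
# as each entry's "Model request name" in Brave
ollama list
```

Each model then gets its own BYOM entry in Brave's settings, and they all appear in LEO's model dropdown.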
---
### Accessing the Brave LEO Chat UI
Once your local models are configured, you're ready to chat! Launch the LEO chat interface using one of these methods:
* **Sidebar Icon:** Look for the **LEO icon** (it looks like a stylized lion's head) in Brave's right-hand sidebar. Clicking it will open the chat panel.
* **Address Bar:** You can also type `brave://leo-ai` into the address bar and press Enter to open the full-page chat interface.
Once the LEO chat is open, simply select your desired local Ollama model from the dropdown menu and start chatting away.
This integration delivers a private, efficient, and streamlined AI experience directly within your browsing environment, marking a significant advancement for local LLM users.
---
### Limitations
- Leo only allows 2,000 characters per prompt, so that's pretty limiting.
- While it DOES render markdown code blocks, there's no color syntax highlighting. That plus the prompt size limitation restricts the usefulness to simple chats.
- Doesn't have access to search the web, though it CAN read the current web page you have open (assuming you're using that feature; Leo has two modes: a full-screen chatbot, like OpenWebUI or ChatGPT, and a side panel for the web page you're on).
- No tool use.
It's basically a very simplified chatbot interface, but it DOES keep your chat history.
If you need LLMs for coding, you should probably be using the newer, more advanced tools designed to assist with programming; chatbot web UIs ain't it. (Emphasis on the unprofessional "ain't".)
---
Note that if you want the more advanced features that OpenWebUI has like its plugins and RAG and such, you'll need to hang on to OpenWebUI.
OpenWebUI is No Longer Needed
By Mike