Installation & Setup
Get BioRouter running in three steps: download, install, and connect an AI provider. The whole process takes about five minutes.
Download BioRouter
Download the installer for your platform from the Download page or directly from GitHub Releases.
- macOS: Open the `.dmg` and drag BioRouter to `/Applications`
- Windows: Unzip and run `BioRouter.exe`
- Linux (Debian): `sudo dpkg -i biorouter_*.deb`
- Linux (RPM): `sudo rpm -i BioRouter-*.rpm`

Note for macOS (Apple Silicon): if no window appears on first launch, run `chmod 755 ~/.config` in Terminal and relaunch.
Connect an LLM Provider
BioRouter will guide you on first launch. Choose the option that fits your needs:
- UCSF users: Select Azure OpenAI (UCSF ChatGPT) with endpoint `https://unified-api.ucsf.edu/general`, deployment `gpt-5-2025-08-07`, and your Versa API key. Or select Amazon Bedrock (UCSF Anthropic). Full details in the UCSF Setup Guide.
- Commercial API: Enter your Anthropic, OpenAI, or Google API key.
- Fully local: Install Ollama, pull a model (`ollama pull qwen3`), and select Ollama in BioRouter. No API key is needed, and no data leaves your machine.
Verify & Explore
Send a test message in the chat. If BioRouter responds, you're all set. Open the Extensions sidebar to add agents, enable tools, and start building workflows.
macOS CLI Installation
curl -fsSL https://github.com/BaranziniLab/BioRouter/releases/download/stable/download_cli.sh | bash
To update the CLI: biorouter update
Configuration File
All settings are stored in:
~/.config/biorouter/config.yaml # macOS / Linux
This file is shared between the Desktop app and the CLI. API keys are stored encrypted.
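For scripts that need the same settings, the path can be resolved with a couple of lines of Python (a sketch covering only the macOS/Linux location above):

```python
from pathlib import Path

# Shared config file used by both the Desktop app and the CLI
# (macOS / Linux path, per this guide).
config = Path.home() / ".config" / "biorouter" / "config.yaml"
print(config, "exists" if config.exists() else "(not created yet)")
```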
Troubleshooting
| Issue | Solution |
|---|---|
| No window on launch (macOS, Apple Silicon) | Run `chmod 755 ~/.config`, then relaunch |
| Extension fails to activate | Ensure Node.js (`npx`) or Python/`uv` (`uvx`) is installed |
| API key not working | Verify the key is valid and has available quota |
| Where are the logs? | `~/.config/biorouter/logs/` |
UCSF Setup Guide
BioRouter is built for UCSF researchers. Use your institutional AI access for secure, compliant workflows with enterprise-managed models.
Option A: UCSF Versa / Azure OpenAI (Recommended)
UCSF's institutional AI platform, accessed via Azure OpenAI. Get your API key at:
ai.ucsf.edu/platforms-tools-and-resources/ucsf-versa
- In BioRouter, go to Settings → Models → Azure OpenAI → Configure
- Enter the endpoint and credentials below, then click Save
- Click Launch to start chatting
| Field | Environment Variable | Value |
|---|---|---|
| Endpoint | AZURE_OPENAI_ENDPOINT | https://unified-api.ucsf.edu/general |
| Deployment Name | AZURE_OPENAI_DEPLOYMENT_NAME | gpt-5-2025-08-07 |
| API Version | AZURE_OPENAI_API_VERSION | 2024-10-21 |
| API Key | AZURE_OPENAI_API_KEY | Your UCSF Versa API key |
Data stays within UCSF's Azure tenant, making it safe for institution-approved use cases.
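If you script your environment, the same settings can be exported as shell variables. A sketch; the variable names come from the table above, and the key placeholder is yours to fill in:

```shell
# Azure OpenAI (UCSF Versa) settings as environment variables.
# Replace the key placeholder with your own UCSF Versa API key.
export AZURE_OPENAI_ENDPOINT="https://unified-api.ucsf.edu/general"
export AZURE_OPENAI_DEPLOYMENT_NAME="gpt-5-2025-08-07"
export AZURE_OPENAI_API_VERSION="2024-10-21"
export AZURE_OPENAI_API_KEY="<your-versa-api-key>"
```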
Option B: Amazon Bedrock (UCSF-hosted Anthropic)
- Log in via UCSF AWS SSO: `aws sso login --profile <your-profile>`
- In BioRouter: Settings → Models → Amazon Bedrock → Configure
- Set `AWS_PROFILE` and `AWS_REGION` when prompted
- Select Claude Sonnet or another Bedrock-hosted model
Default model: `claude-sonnet-4-5`. Data stays within UCSF's AWS environment.
Option C: Local (Ollama, Air-gapped)
For maximum privacy, nothing ever leaves your device:
- Install Ollama from ollama.com/download
- Pull a model: `ollama pull qwen3`
- In BioRouter, select Ollama as your provider. No configuration is needed.
Other Institutions
Check with your institution's IT or compliance office for approved AI hosting options. For institutions without managed AI services, commercial API keys (Anthropic, OpenAI) or local Ollama inference are recommended, but always verify compliance before processing any sensitive data.
Providers & Models
BioRouter connects to a wide range of LLM providers. Switch providers at any time from Settings → Models without changing your workflows.
UCSF Institutional Providers
| Provider | Access | Default Model |
|---|---|---|
| Azure OpenAI (UCSF ChatGPT) | UCSF Versa API key · Setup guide → | gpt-5-2025-08-07 |
| Amazon Bedrock (UCSF Anthropic) | UCSF AWS SSO profile | Claude Sonnet 4.5 |
Commercial Cloud Providers
| Provider | Env Variable | Get API Key |
|---|---|---|
| Anthropic | ANTHROPIC_API_KEY | platform.claude.com |
| OpenAI | OPENAI_API_KEY | platform.openai.com |
| Google Gemini | GOOGLE_API_KEY | aistudio.google.com |
| X.AI (Grok) | XAI_API_KEY | console.x.ai |
| OpenRouter | OPENROUTER_API_KEY | openrouter.ai |
Local Models: Ollama
Install Ollama, pull any model, and select Ollama in BioRouter. Everything stays on your machine.
ollama pull qwen3        # Recommended general-purpose model
ollama pull llama3.2     # Fast, lightweight option
ollama pull deepseek-r1  # Strong reasoning model
Browse all available Ollama models →
Switching Providers
Desktop: Settings → Models → select a provider card → Configure or Launch.
CLI: biorouter configure → Select "Configure Providers"
Extensions, Skills & MCP
Extend BioRouter with MCP servers, reusable Skills, and built-in platform capabilities, from database access to browser automation.
Built-in Extensions
| Extension | What It Does | Default |
|---|---|---|
| Developer | File operations, shell commands, code search, text editing | ✓ On |
| Computer Controller | Web scraping, file caching, browser automation | Off |
| Memory | Remembers user preferences and context across sessions | Off |
| Auto Visualiser | Auto-generates data visualizations from conversation | Off |
| Chat Recall | Search across all past conversation history | Off |
| Code Execution | Run JavaScript in a sandboxed environment | Off |
| Skills | Load and invoke reusable instruction sets | ✓ On |
| Extension Manager | Discover, enable, and disable extensions mid-session | ✓ On |
Enabling Extensions
Desktop: Sidebar → Extensions → toggle on.
CLI: biorouter configure → Add Extension → Built-in Extension
Adding External MCP Servers
Desktop: Sidebar → Extensions → Add custom extension → enter type, ID, name, command, and env vars.
Config file (~/.config/biorouter/config.yaml):
extensions:
github:
name: GitHub
cmd: npx
    args: ["-y", "@modelcontextprotocol/server-github"]
enabled: true
envs:
GITHUB_PERSONAL_ACCESS_TOKEN: "<your_token>"
type: stdio
timeout: 300
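Before adding an entry, it can help to sanity-check the fields. A small sketch; the field names follow the GitHub example above, but the validation rules are assumptions, not BioRouter's own checks:

```python
def check_extension(entry: dict) -> list[str]:
    """Return a list of problems with a stdio extension entry (sketch)."""
    problems = []
    for field in ("name", "cmd", "args", "type"):
        if field not in entry:
            problems.append(f"missing field: {field}")
    if entry.get("type") == "stdio" and not entry.get("cmd"):
        problems.append("stdio extensions need a cmd")
    return problems

github = {
    "name": "GitHub",
    "cmd": "npx",
    "args": ["-y", "@modelcontextprotocol/server-github"],
    "type": "stdio",
    "enabled": True,
    "timeout": 300,
}
print(check_extension(github))  # an empty list means no problems found
```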
Skills System
Skills are reusable .md instruction files that encode your team's workflows and best practices. Drop a skill file in your skills directory and BioRouter makes it callable in any session.
Skills can be shared across a team or institution to standardize analysis pipelines โ without sharing any underlying data.
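For illustration, a hypothetical skill file — the title and steps below are invented; only the markdown-instructions format comes from this guide:

```markdown
# Variant Report Summary

When asked to summarize a variant annotation file:

1. List the genes with the highest-impact variants first.
2. Flag any variant annotated as pathogenic or likely pathogenic.
3. End with a short methods note describing the annotation sources used.
```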
Recipes & Automation
Recipes package any workflow into a shareable, parameterizable, schedulable unit, the foundation of federated research collaboration in BioRouter.
What is a Recipe?
A Recipe is a YAML file that defines a prompt template, input parameters, and an optional schedule. Recipes enable reproducible, shareable workflows that travel across institutions without carrying any data.
Recipe Structure
name: "Literature Review"
description: "Summarize recent papers on a research topic"
parameters:
- name: topic
description: "Research topic to review"
required: true
- name: years
description: "How many years back to cover"
default: "3"
prompt: |
You are a scientific literature reviewer.
Summarize key findings on: {{ topic }}
Cover publications from the past {{ years }} years.
Focus on clinical relevance and methodology.
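Conceptually, running a recipe just renders the prompt template with the supplied parameters. A minimal stand-in for that step (BioRouter's engine uses minijinja; this sketch handles only simple `{{ name }}` fields):

```python
import re

# The recipe's prompt template from above, with Jinja-style placeholders.
template = """You are a scientific literature reviewer.
Summarize key findings on: {{ topic }}
Cover publications from the past {{ years }} years.
Focus on clinical relevance and methodology."""

def render(template: str, params: dict) -> str:
    # Substitute each {{ name }} placeholder with its parameter value.
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(params[m.group(1)]), template)

prompt = render(template, {"topic": "SPOKE knowledge graph", "years": 5})
print(prompt)
```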
Running a Recipe
Desktop: Sidebar → Recipes → select a recipe → fill parameters → Run.
CLI:
biorouter run recipe.yaml --param topic="SPOKE knowledge graph" --param years=5
Scheduling Recipes
Add a schedule field using cron syntax to run a recipe automatically:
schedule: "0 8 * * 1" # Every Monday at 8:00 AM
Scheduled recipes run via BioRouter's background service, even when the desktop app is closed.
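To sanity-check a schedule expression, here is a minimal matcher for the simple five-field form used above. It supports only `*` and plain numbers; real cron also allows ranges, lists, and steps:

```python
from datetime import datetime

def matches(schedule: str, dt: datetime) -> bool:
    # Five cron fields: minute hour day-of-month month day-of-week,
    # with day-of-week 0 = Sunday (standard cron convention).
    fields = schedule.split()
    values = [dt.minute, dt.hour, dt.day, dt.month, dt.isoweekday() % 7]
    return all(f == "*" or int(f) == v for f, v in zip(fields, values))

# "0 8 * * 1" = every Monday at 08:00; 2025-01-06 was a Monday.
print(matches("0 8 * * 1", datetime(2025, 1, 6, 8, 0)))  # True
print(matches("0 8 * * 1", datetime(2025, 1, 7, 8, 0)))  # False (a Tuesday)
```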
Sharing Recipes
Recipe files are plain YAML text. Commit them to a shared repository, publish them on GitHub, or email them. Recipients open them with File → Open Recipe or `biorouter run <recipe.yaml>`. No data is embedded; only the workflow logic travels.
Data Privacy Guide
BioRouter routes your inputs to an LLM provider. Privacy properties depend entirely on which provider you use; here's how to choose the right one.
Provider Privacy Properties
| Provider | Data Stays Within | Best For |
|---|---|---|
| Ollama (local) | Your device only (no network) | Maximum privacy, air-gapped requirements |
| UCSF Azure OpenAI | UCSF's Azure tenant | Institution-approved clinical use cases |
| UCSF Amazon Bedrock | UCSF's AWS environment | Institution-approved clinical use cases |
| Commercial APIs (personal) | Provider's cloud infrastructure | De-identified or non-sensitive data only |
Best Practices
- De-identify first: remove names, dates of birth, MRNs, and other identifiers before any AI processing
- Minimize data exposure: provide only what's needed for the task
- Prefer local models for exploratory work with real data
- Protect your device: session logs at `~/.config/biorouter/logs/` may contain your inputs
- Never share sessions that contain patient or sensitive data
- Always verify with UCSF IT or your compliance office before processing regulated data
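The de-identify step can be as simple as a scripted redaction pass before text ever reaches a model. A toy sketch — the patterns below are illustrative only and NOT a validated de-identification method:

```python
import re

# Toy redaction patterns (assumptions, not a compliance tool):
PATTERNS = [
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),           # dates
    (re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE), "[MRN]"),  # record numbers
]

def redact(text: str) -> str:
    # Replace each matched identifier with a placeholder token.
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

print(redact("Patient DOB 01/23/1960, MRN: 8675309, follow-up noted."))
```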
Architecture
BioRouter is a modular, plugin-based system built with a Rust backend and Electron + React frontend, connected via a local REST API.
Three-Layer Design
- Interface: Desktop GUI (Electron + React 19) or CLI; accepts user input and renders responses
- Agent Core: Rust-based reasoning loop managing LLM interaction, tool execution, context, and session state
- Extensions: Pluggable MCP servers providing tools such as file system, databases, web, code execution, and custom agents
Backend: Rust Workspace
| Crate | Role |
|---|---|
| biorouter | Core agent library: loop, providers, sessions, recipes, scheduling |
| biorouter-server | Local REST API server (biorouterd) the desktop app communicates with |
| biorouter-cli | CLI binary: biorouter session, biorouter configure, etc. |
| biorouter-mcp | Built-in MCP servers (Developer, Computer Controller, etc.) |
Key dependencies: tokio (async), axum (HTTP), rmcp (Model Context Protocol), serde/serde_json, tiktoken-rs (token counting), sqlx/SQLite (session persistence), minijinja (recipe templates), tokio-cron-scheduler.
Frontend: Electron + React
Desktop app: Electron 39 + React 19 + TypeScript, built with Vite + Electron Forge. Communicates with the local biorouterd REST API.
Model Context Protocol (MCP)
All extensions use MCP, a standard protocol for LLM tool invocation. Any MCP-compatible server (local process or remote HTTP) can be added as a BioRouter extension, enabling a growing open ecosystem of research agents.
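On the wire, an MCP tool invocation is a JSON-RPC 2.0 request using the `tools/call` method. A sketch of the request shape; the tool name and arguments below are hypothetical:

```python
import json

# Shape of an MCP tool-call request (JSON-RPC 2.0).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_papers",
        "arguments": {"query": "multiple sclerosis biomarkers"},
    },
}
print(json.dumps(request, indent=2))
```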
MCP Agents
BioRouter connects to local and remote MCP agents for specialized research tasks. Browse the curated agent ecosystem in the BAAM Marketplace.
Adding an Agent via Desktop UI
- Go to Sidebar → Extensions → Add custom extension
- Choose type: Command-line (stdio) for local agents or Streamable HTTP for remote
- Enter the agent name and command (e.g., `uvx --from git+https://... agentname`)
- Add any required environment variables
- Click Add and enable the extension
Adding an Agent via Config YAML
extensions:
spokeagent:
name: SPOKE Agent
cmd: uvx
args:
- --from
- "git+https://github.com/BaranziniLab/SPOKEAgent"
- spokeagent
enabled: true
type: stdio
timeout: 300
Remote HTTP Agents
extensions:
remote-agent:
name: My Remote Research Agent
url: https://my-agent.example.com/mcp
type: streamable_http
enabled: true
timeout: 300
Prerequisites
Most UCSF agents use uvx (requires Python + uv). Install with:
curl -LsSf https://astral.sh/uv/install.sh | sh   # macOS / Linux
# or: pip install uv
Playwright MCP uses npx, which requires Node.js: nodejs.org
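A quick way to confirm both runtimes are on your PATH before adding agents (a sketch):

```shell
# Report whether each agent runtime is installed.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: missing"
  fi
}

check_tool uvx   # Python-based agents (uv)
check_tool npx   # Node-based agents (e.g., Playwright MCP)
```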
Available Agents
Browse all available agents, copy install commands, and find GitHub links in the BAAM Marketplace →