
Installation & Setup

Get BioRouter running in three steps — download, install, and connect an AI provider. The whole process takes about five minutes.

Step 1 — Download BioRouter

Download the installer for your platform from the Download page or directly from GitHub Releases.

  • macOS: Open the .dmg and drag BioRouter to /Applications
  • Windows: Unzip and run BioRouter.exe
  • Linux (Debian): sudo dpkg -i biorouter_*.deb
  • Linux (RPM): sudo rpm -i BioRouter-*.rpm
macOS tip: If you see a security warning on Apple Silicon, run chmod 755 ~/.config in Terminal and relaunch.
Step 2 — Connect an LLM Provider

BioRouter will guide you on first launch. Choose the option that fits your needs:

  • UCSF users: Select Azure OpenAI (UCSF ChatGPT) — use endpoint https://unified-api.ucsf.edu/general, deployment gpt-5-2025-08-07, and your Versa API key. Or select Amazon Bedrock (UCSF Anthropic). Full details in the UCSF Setup Guide.
  • Commercial API: Enter your Anthropic, OpenAI, or Google API key.
  • Fully local: Install Ollama, pull a model (ollama pull qwen3), and select Ollama in BioRouter — no API key, no data leaves your machine.
Step 3 — Verify & Explore

Send a test message in the chat. If BioRouter responds, you're all set. Open the Extensions sidebar to add agents, enable tools, and start building workflows.

macOS CLI Installation

curl -fsSL https://github.com/BaranziniLab/BioRouter/releases/download/stable/download_cli.sh | bash

To update the CLI: biorouter update

Configuration File

All settings are stored in:

~/.config/biorouter/config.yaml    # macOS / Linux

This file is shared between the Desktop app and the CLI. API keys are stored encrypted.
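
As a rough illustration of the file's shape, here is a hypothetical fragment; only the extensions block follows the schema documented later in this guide (see "Adding External MCP Servers"), and the specific entries will differ per setup:

```yaml
# Hypothetical config.yaml fragment -- entries are placeholders.
# The extensions schema is documented in "Adding External MCP Servers".
extensions:
  github:
    name: GitHub
    enabled: true
    type: stdio
    timeout: 300
```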

Troubleshooting

Issue                                 Solution
No window on launch (macOS M-chip)    chmod 755 ~/.config then relaunch
Extension fails to activate           Ensure Node.js (npx) or Python/uv (uvx) is installed
API key not working                   Verify the key is valid and has available quota
Logs                                  ~/.config/biorouter/logs/

UCSF Setup Guide

BioRouter is built for UCSF researchers. Use your institutional AI access for secure, compliant workflows with enterprise-managed models.

Option A — UCSF Versa / Azure OpenAI (Recommended)

UCSF's institutional AI platform, accessed via Azure OpenAI. Get your API key at:

ai.ucsf.edu/platforms-tools-and-resources/ucsf-versa

  1. In BioRouter, go to Settings → Models → Azure OpenAI → Configure
  2. Enter the endpoint and credentials below, then click Save
  3. Click Launch to start chatting
Field              Environment Variable            Value
Endpoint           AZURE_OPENAI_ENDPOINT           https://unified-api.ucsf.edu/general
Deployment Name    AZURE_OPENAI_DEPLOYMENT_NAME    gpt-5-2025-08-07
API Version        AZURE_OPENAI_API_VERSION        2025-01-01-preview
API Key            AZURE_OPENAI_API_KEY            Your UCSF Versa API key
Where to get your Versa API key: Sign in with your UCSF MyAccess credentials at the link above to generate or retrieve your API key.
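
For CLI sessions, the same four values can be set as environment variables. A sketch (substitute your own key):

```shell
# UCSF Versa / Azure OpenAI settings from the table above.
export AZURE_OPENAI_ENDPOINT="https://unified-api.ucsf.edu/general"
export AZURE_OPENAI_DEPLOYMENT_NAME="gpt-5-2025-08-07"
export AZURE_OPENAI_API_VERSION="2025-01-01-preview"
# Placeholder: replace with the key you generated on the Versa page.
export AZURE_OPENAI_API_KEY="<your-versa-api-key>"
```

Add these lines to ~/.zshrc (or ~/.bashrc) to persist them across sessions.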

Data stays within UCSF's Azure tenant — safe for institution-approved use cases.

Option B — Amazon Bedrock (UCSF MuleSoft Proxy)

UCSF provides access to Anthropic Claude models via Amazon Bedrock, routed through the same UCSF MuleSoft gateway used by UCSF Versa. Your data never leaves UCSF's infrastructure.

Step 1 — Obtain Your Credentials

The UCSF Bedrock proxy uses a static AWS Access Key + Secret Key pair — these are not personal AWS SSO credentials. Obtain them from your lab's BioRouter onboarding document or contact the Baranzini Lab / UCSF Research IT for access. You will receive four values:

Credential               Environment Variable        Notes
AWS Access Key ID        AWS_ACCESS_KEY_ID           32-character hex string provided by UCSF
AWS Secret Access Key    AWS_SECRET_ACCESS_KEY       32-character mixed string provided by UCSF
Region                   AWS_REGION                  us-west-2
Proxy Endpoint           AWS_ENDPOINT_URL_BEDROCK    https://unified-api.ucsf.edu/general/awsai
Why a custom endpoint? UCSF routes Bedrock calls through its own MuleSoft proxy (unified-api.ucsf.edu/general/awsai) rather than directly to AWS. BioRouter automatically translates AWS_ENDPOINT_URL_BEDROCK to the correct SDK variable — you don't need to worry about that distinction.

Step 2 — Configure Credentials

macOS GUI apps (launched from Finder, Spotlight, or Dock) do not inherit your shell's environment variables. For BioRouter to work reliably, you need to configure credentials in all three of the following places. The quickest way is the setup script:

Quick Setup — Run the Script

Create a file called setup_bedrock.sh, paste the block below (filling in your credentials), and run it once:

#!/bin/bash
AWS_ACCESS_KEY_ID="<your-access-key-id>"
AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
AWS_REGION="us-west-2"
BEDROCK_ENDPOINT="https://unified-api.ucsf.edu/general/awsai"

# 1. Shell env — picked up by Terminal / CLI
grep -qF "AWS_ACCESS_KEY_ID" ~/.zshrc || cat >> ~/.zshrc <<EOF

export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
export AWS_REGION=$AWS_REGION
export AWS_ENDPOINT_URL_BEDROCK=$BEDROCK_ENDPOINT
EOF

# 2. ~/.aws files — picked up by GUI apps that skip the shell
mkdir -p ~/.aws
cat > ~/.aws/credentials <<EOF
[default]
aws_access_key_id = $AWS_ACCESS_KEY_ID
aws_secret_access_key = $AWS_SECRET_ACCESS_KEY
EOF
cat > ~/.aws/config <<EOF
[default]
region = $AWS_REGION
EOF

# 3. launchctl — injects vars into macOS at OS level for GUI apps
launchctl setenv AWS_ACCESS_KEY_ID "$AWS_ACCESS_KEY_ID"
launchctl setenv AWS_SECRET_ACCESS_KEY "$AWS_SECRET_ACCESS_KEY"
launchctl setenv AWS_REGION "$AWS_REGION"
launchctl setenv AWS_ENDPOINT_URL_BEDROCK "$BEDROCK_ENDPOINT"

source ~/.zshrc
echo "Done. Restart BioRouter for changes to take effect."
chmod +x setup_bedrock.sh && ./setup_bedrock.sh
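
After the script runs, you can sanity-check that your shell environment and the ~/.aws files agree, so the CLI and GUI see the same identity. This is a standalone standard-library sketch, not part of BioRouter; the paths and variable names come from the script above:

```python
import configparser
import os

def read_aws_credentials(path="~/.aws/credentials", profile="default"):
    """Return (access_key_id, secret_access_key) from an AWS credentials file."""
    cp = configparser.ConfigParser()
    cp.read(os.path.expanduser(path))
    section = cp[profile]
    return section["aws_access_key_id"], section["aws_secret_access_key"]

def credentials_consistent(env=None, path="~/.aws/credentials"):
    """True when the env vars match what the ~/.aws files contain."""
    env = os.environ if env is None else env
    key_id, secret = read_aws_credentials(path)
    return (env.get("AWS_ACCESS_KEY_ID") == key_id
            and env.get("AWS_SECRET_ACCESS_KEY") == secret)
```

If `credentials_consistent()` returns False in a Terminal session, re-run the setup script and open a new shell.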

Why Three Locations?

Location                         What it covers
~/.zshrc exports                 Terminal sessions and CLI tools that inherit the shell environment
~/.aws/credentials + config      The AWS SDK's file-based credential chain — used by apps that don't inherit shell env vars
launchctl setenv                 Injects vars at the macOS process level — the only reliable way to pass env vars to apps launched from Spotlight, Dock, or Finder
Linux users: Skip the launchctl step. Add the exports to ~/.bashrc or ~/.profile and write the ~/.aws/ files — that covers both CLI and GUI apps.
Windows users: Set the four values as System Environment Variables via System Properties → Environment Variables. No ~/.aws or launchctl steps needed.

Step 3 — Configure BioRouter

After running the setup script, open BioRouter and point it at the Bedrock provider:

  1. Go to Settings → Models → Amazon Bedrock → Configure
  2. Confirm the four environment variables are visible — BioRouter reads them automatically
  3. Set Model to us.anthropic.claude-sonnet-4-6 (default)
  4. Click Launch — send a test message to verify
Tip: If BioRouter shows an auth error after setup, quit it fully (⌘Q on macOS) and relaunch — the launchctl vars only take effect for processes started after the script runs.

Available Models

Model                        Model ID                                      Notes
Claude Sonnet 4.6 ⭐ Default  us.anthropic.claude-sonnet-4-6                Best balance of speed and capability
Claude Sonnet 4.0            us.anthropic.claude-sonnet-4-20250514-v1:0    Previous generation Sonnet
Claude Opus 4.5              us.anthropic.claude-opus-4-5-20251101-v1:0    Most capable, slower
Claude Opus 4.1              us.anthropic.claude-opus-4-1-20250805-v1:0    Previous Opus generation

Only models permitted by UCSF's IAM policy are listed — selecting others will result in an authentication error.
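
The two model-ID mistakes called out in the troubleshooting table below (a missing us. prefix, or a dated ID without its -v1:0 style suffix) can be caught before launch. A small sketch whose allow-list mirrors the table above:

```python
import re

# The four model IDs UCSF's IAM policy permits (from the table above).
ALLOWED = {
    "us.anthropic.claude-sonnet-4-6",
    "us.anthropic.claude-sonnet-4-20250514-v1:0",
    "us.anthropic.claude-opus-4-5-20251101-v1:0",
    "us.anthropic.claude-opus-4-1-20250805-v1:0",
}

def check_model_id(model_id):
    """Return a list of problems; an empty list means the ID looks usable."""
    problems = []
    if not model_id.startswith("us."):
        problems.append("missing 'us.' prefix")
    # Dated IDs (containing an 8-digit date) need a version suffix like -v1:0.
    if re.search(r"-\d{8}", model_id) and not re.search(r"-v\d+:\d+$", model_id):
        problems.append("dated model ID missing '-v1:0' style suffix")
    if model_id not in ALLOWED:
        problems.append("not in the UCSF-permitted list")
    return problems
```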

Troubleshooting

"The security token included in the request is invalid"
  Cause: BioRouter is hitting real AWS (not the UCSF proxy) because the endpoint variable is missing.
  Fix: Re-run the setup script; quit and relaunch BioRouter.

"invalid model identifier"
  Cause: The model ID is missing the us. prefix or the -v1:0 suffix.
  Fix: Use a model ID exactly as listed in the table above.

Auth error in the GUI while the CLI works fine
  Cause: The launchctl vars are not set; the GUI app doesn't inherit the shell environment.
  Fix: Re-run the script, then fully quit and relaunch BioRouter.

"AccessDeniedException" on a specific model
  Cause: The model is not permitted by UCSF's IAM policy.
  Fix: Switch to one of the four confirmed models above.

Option C — Local (Ollama, Air-gapped)

For maximum privacy — nothing ever leaves your device:

  1. Install Ollama from ollama.com/download
  2. Pull a model: ollama pull qwen3
  3. In BioRouter, select Ollama as your provider — no configuration needed

Other Institutions

Check with your institution's IT or compliance office for approved AI hosting options. For institutions without managed AI services, commercial API keys (Anthropic, OpenAI) or local Ollama inference are recommended — but always verify compliance before processing any sensitive data.

โš ๏ธ Patient data & PHI: Never use personal commercial API keys with patient data, PHI, or other regulated research data. Use UCSF-managed services (Azure, Bedrock) or local Ollama only. Always verify with UCSF compliance before processing regulated data.

Providers & Models

BioRouter connects to a wide range of LLM providers. Switch providers at any time from Settings → Models without changing your workflows.

UCSF Institutional Providers

Provider                           Access                                    Default Model
Azure OpenAI (UCSF ChatGPT)        UCSF Versa API key · Setup guide →        gpt-5-2025-08-07
Amazon Bedrock (UCSF Anthropic)    UCSF proxy credentials · Setup guide →    us.anthropic.claude-sonnet-4-6

Commercial Cloud Providers

Provider         Env Variable          Get API Key
Anthropic        ANTHROPIC_API_KEY     platform.claude.com
OpenAI           OPENAI_API_KEY        platform.openai.com
Google Gemini    GOOGLE_API_KEY        aistudio.google.com
X.AI (Grok)      XAI_API_KEY           console.x.ai
OpenRouter       OPENROUTER_API_KEY    openrouter.ai

Local Models — Ollama

Install Ollama, pull any model, and select Ollama in BioRouter. Everything stays on your machine.

ollama pull qwen3           # Recommended general-purpose
ollama pull llama3.2        # Fast lightweight option
ollama pull deepseek-r1     # Strong reasoning model

Browse all available Ollama models →

Switching Providers

Desktop: Settings → Models → select a provider card → Configure or Launch.
CLI: biorouter configure → Select "Configure Providers"

Extensions, Skills & MCP

Extend BioRouter with MCP servers, reusable Skills, and built-in platform capabilities — from database access to browser automation.

Built-in Extensions

Extension              What It Does                                                 Default
Developer              File operations, shell commands, code search, text editing   ✅ On
Computer Controller    Web scraping, file caching, browser automation               Off
Memory                 Remembers user preferences and context across sessions       Off
Auto Visualiser        Auto-generates data visualizations from conversation         Off
Chat Recall            Search across all past conversation history                  Off
Code Execution         Run JavaScript in a sandboxed environment                    Off
Skills                 Load and invoke reusable instruction sets                    ✅ On
Extension Manager      Discover, enable, and disable extensions mid-session         ✅ On

Enabling Extensions

Desktop: Sidebar → Extensions → toggle on.
CLI: biorouter configure → Add Extension → Built-in Extension

Adding External MCP Servers

Desktop: Sidebar → Extensions → Add custom extension → enter type, ID, name, command, and env vars.

Config file (~/.config/biorouter/config.yaml):

extensions:
  github:
    name: GitHub
    cmd: npx
    args: ["-y", "@modelcontextprotocol/server-github"]
    enabled: true
    envs:
      GITHUB_PERSONAL_ACCESS_TOKEN: "<your_token>"
    type: stdio
    timeout: 300

Skills System

Skills are reusable .md instruction files that encode your team's workflows and best practices. Drop a skill file in your skills directory and BioRouter makes it callable in any session.

Skills can be shared across a team or institution to standardize analysis pipelines — without sharing any underlying data.
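
As an illustration, here is a hypothetical skill file; the content is invented for this example, and BioRouter treats the whole file as plain Markdown instructions:

```markdown
# Differential Expression Review

When asked to review a differential expression analysis:

1. Confirm a multiple-testing correction (e.g. Benjamini-Hochberg) was applied.
2. Ask for the log2 fold-change and adjusted p-value cutoffs that were used.
3. Summarize up-regulated and down-regulated genes separately.
4. Flag any claim not supported by the reported statistics.
```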

Recipes & Automation

Recipes package any workflow into a shareable, parameterizable, schedulable unit — the foundation of federated research collaboration in BioRouter.

What is a Recipe?

A Recipe is a YAML file that defines a prompt template, input parameters, and an optional schedule. Recipes enable reproducible, shareable workflows that travel across institutions without carrying any data.

Recipe Structure

name: "Literature Review"
description: "Summarize recent papers on a research topic"
parameters:
  - name: topic
    description: "Research topic to review"
    required: true
  - name: years
    description: "How many years back to cover"
    default: "3"
prompt: |
  You are a scientific literature reviewer.
  Summarize key findings on: {{ topic }}
  Cover publications from the past {{ years }} years.
  Focus on clinical relevance and methodology.
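
To make the substitution step concrete, here is a small Python sketch of how parameters and defaults could fill the {{ name }} placeholders. BioRouter's actual engine is minijinja (see Architecture); this mimics only the simple substitution shown above:

```python
import re

def render_prompt(template, params, defaults=None):
    """Fill {{ name }} placeholders from params, falling back to defaults."""
    values = dict(defaults or {})
    values.update(params)
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(values[m.group(1)]),
                  template)

prompt = "Summarize key findings on: {{ topic }}\nCover the past {{ years }} years."
print(render_prompt(prompt, {"topic": "SPOKE knowledge graph"}, defaults={"years": "3"}))
# -> Summarize key findings on: SPOKE knowledge graph
#    Cover the past 3 years.
```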

Running a Recipe

Desktop: Sidebar → Recipes → select a recipe → fill parameters → Run.

CLI:

biorouter run recipe.yaml --param topic="SPOKE knowledge graph" --param years=5

Scheduling Recipes

Add a schedule field using cron syntax to run a recipe automatically:

schedule: "0 8 * * 1"   # Every Monday at 8:00 AM

Scheduled recipes run via BioRouter's background service — even when the desktop app is closed.
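
The schedule value uses standard five-field cron order: minute, hour, day of month, month, day of week. A few illustrative values (each line is a complete alternative; a recipe takes one schedule field):

```yaml
schedule: "0 8 * * 1"     # Mondays at 08:00 (the example above)
schedule: "30 6 * * *"    # every day at 06:30
schedule: "0 9 1 * *"     # the 1st of each month at 09:00
```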

Sharing Recipes

Recipe files are plain YAML text. Commit them to a shared repository, publish them on GitHub, or email them. Recipients open them with File → Open Recipe or biorouter run <recipe.yaml>. No data is embedded — only the workflow logic.

Data Privacy Guide

BioRouter routes your inputs to an LLM provider. Privacy properties depend entirely on which provider you use — here's how to choose the right one.

โš ๏ธ Patient data / PHI: Only use UCSF-managed services (Azure OpenAI, Amazon Bedrock) or local Ollama when working with patient data, PHI, or any regulated research data. Never use personal commercial API keys for sensitive data. Always verify compliance with your institution before processing regulated data.

Provider Privacy Properties

Provider                      Data Stays Within                  Best For
Ollama (local)                Your device only — no network      Maximum privacy, air-gapped requirements
UCSF Azure OpenAI             UCSF's Azure tenant                Institution-approved clinical use cases
UCSF Amazon Bedrock           UCSF's AWS environment             Institution-approved clinical use cases
Commercial APIs (personal)    Provider's cloud infrastructure    De-identified or non-sensitive data only

Best Practices

  • De-identify first — remove names, dates of birth, MRNs, and other identifiers before any AI processing
  • Minimize data exposure — provide only what's needed for the task
  • Prefer local models for exploratory work with real data
  • Protect your device — session logs at ~/.config/biorouter/logs/ may contain your inputs
  • Never share sessions that contain patient or sensitive data
  • Always verify with UCSF IT or your compliance office before processing regulated data

Architecture

BioRouter is a modular, plugin-based system built with a Rust backend and Electron + React frontend, connected via a local REST API.

Three-Layer Design

  • Interface — Desktop GUI (Electron + React 19) or CLI, accepts user input and renders responses
  • Agent Core — Rust-based reasoning loop managing LLM interaction, tool execution, context, and session state
  • Extensions — Pluggable MCP servers providing tools: file system, databases, web, code execution, and custom agents

Backend — Rust Workspace

Crate               Role
biorouter           Core agent library — loop, providers, sessions, recipes, scheduling
biorouter-server    Local REST API server (biorouterd) the desktop communicates with
biorouter-cli       CLI binary — biorouter session, biorouter configure, etc.
biorouter-mcp       Built-in MCP servers (Developer, Computer Controller, etc.)

Key dependencies: tokio (async), axum (HTTP), rmcp (Model Context Protocol), serde/serde_json, tiktoken-rs (token counting), sqlx/SQLite (session persistence), minijinja (recipe templates), tokio-cron-scheduler.

Frontend — Electron + React

Desktop app: Electron 39 + React 19 + TypeScript, built with Vite + Electron Forge. Communicates with the local biorouterd REST API.

Model Context Protocol (MCP)

All extensions use MCP — a standard protocol for LLM tool invocation. Any MCP-compatible server (local process or remote HTTP) can be added as a BioRouter extension, enabling a growing open ecosystem of research agents.

MCP Agents

BioRouter connects to local and remote MCP agents for specialized research tasks. Browse the curated agent ecosystem in the BAAM Marketplace.

Adding an Agent via Desktop UI

  1. Go to Sidebar → Extensions → Add custom extension
  2. Choose type: Command-line (stdio) for local agents or Streamable HTTP for remote
  3. Enter the agent name and command (e.g., uvx --from git+https://... agentname)
  4. Add any required environment variables
  5. Click Add and enable the extension

Adding an Agent via Config YAML

extensions:
  spokeagent:
    name: SPOKE Agent
    cmd: uvx
    args:
      - --from
      - "git+https://github.com/BaranziniLab/SPOKEAgent"
      - spokeagent
    enabled: true
    type: stdio
    timeout: 300

Remote HTTP Agents

extensions:
  remote-agent:
    name: My Remote Research Agent
    url: https://my-agent.example.com/mcp
    type: streamable_http
    enabled: true
    timeout: 300

Prerequisites

Most UCSF agents use uvx (requires Python + uv). Install with:

curl -LsSf https://astral.sh/uv/install.sh | sh   # macOS / Linux
# or: pip install uv

Playwright MCP uses npx — requires Node.js: nodejs.org

Available Agents

Browse all available agents, copy install commands, and find GitHub links in the BAAM Marketplace →