Getting Started

BEAR is built for easy institutional deployment. Use this guide to quickly set up a proof of concept for semantic search and expert discovery at your university.

Prerequisites

  • Git
  • Docker and Docker Compose

Installation

1. Clone the Repository

git clone https://github.com/uw-madison-dsi/bear.git
cd bear

2. Install Dependencies

BEAR uses uv for dependency management:

curl -LsSf https://astral.sh/uv/install.sh | sh
uv sync
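
If uv was just installed, you may need to restart your shell before the uv command is available. You can confirm the installation with:

uv --version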

3. Configuration

Use the CLI to initialize BEAR. Follow the prompts to set up the institution ID, start Docker Compose, run the crawler, and ingest data.

uv run bear-init

Testing the Installation

API

The API will be available at http://localhost:8000.

To verify that the API is running, open http://localhost:8000 in your browser.
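
Alternatively, confirm the server responds from a terminal (the command below prints only the HTTP status code):

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000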

Interactive API docs are served alongside the API (for FastAPI-style services, typically at http://localhost:8000/docs).

Test with a sample API call:

curl "http://localhost:8000/search_author?query=data%20science"

Next Steps

  • Explore the API Usage guide for hands-on examples

Advanced/Manual setup

If you used the bear-init CLI, these steps have already been handled.

Manually configure the system

See the Config Reference and example.env for detailed configuration options.
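
A common pattern, assumed here rather than confirmed by the docs, is to copy the example file and edit the values for your institution:

# Assumed workflow: BEAR reads settings from a local .env file
cp example.env .env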

Manually start the backend

docker compose up -d

This starts the backend services defined in the project's Compose file.
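
You can confirm that the containers came up cleanly with standard Compose commands:

# Check service status
docker compose ps

# Follow the logs if a service fails to start
docker compose logs -f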

Manually initialize the DB

uv run bear/db.py

Crawl Academic Data

Crawl data from OpenAlex for your institution:

# Test run (crawls records for 10 people)
uv run bear/crawler.py --test
# Full crawl
uv run bear/crawler.py

Ingest Data

Process and vectorize the crawled data:

# Test ingest
uv run bear/ingest.py --test

# Full ingest
uv run bear/ingest.py
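
Once ingestion completes, re-run the sample search from Testing the Installation to confirm that results come back:

curl "http://localhost:8000/search_author?query=data%20science"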