Getting Started¶
BEAR is built for easy institutional deployment. Use this guide to quickly set up a proof-of-concept for semantic search and expert discovery at your university.
Prerequisites¶
- Git
- Docker and Docker Compose
Installation¶
1. Clone the Repository¶
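The repository URL below is a placeholder; swap in the actual location of the BEAR repository for your deployment:

```bash
# Placeholder URL: replace <your-org> with the organization hosting BEAR.
git clone https://github.com/<your-org>/bear.git
cd bear
```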
2. Install Dependencies¶
BEAR uses uv for dependency management:
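Assuming uv is already installed, syncing the project environment from the lockfile is a single command; if uv is missing, it can be installed with the official installer first:

```bash
# Install uv if it is not already on the system (official installer script).
curl -LsSf https://astral.sh/uv/install.sh | sh

# Create the virtual environment and install BEAR's dependencies.
uv sync
```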
3. Configuration¶
Use the bear-init CLI to initialize BEAR. Follow the prompts to set the institution ID, start Docker Compose, run the crawler, and ingest the data.
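A minimal sketch of the interactive setup, assuming bear-init is exposed as a project script and run through uv; the exact invocation may differ in your installation:

```bash
# Launch the interactive initializer and follow the prompts
# (institution ID, Docker Compose startup, crawling, ingestion).
uv run bear-init
```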
Testing the Installation¶
API¶
The API will be available at http://localhost:8000.
To check that the API is actually running, open http://localhost:8000 in your browser.
Test with a sample API call:
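For a first smoke test, curl the root endpoint; the concrete search and expert-discovery endpoints are documented in the API Usage guide:

```bash
# Expect an HTTP response from the API service if it is up.
curl -i http://localhost:8000
```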
Next Steps¶
- Explore the API Usage guide for hands-on examples
Advanced/Manual setup¶
If you used the bear-init CLI above, these steps have already been handled.
Manually configure system¶
See the Config Reference and example.env for detailed configuration options.
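A typical pattern, assuming BEAR reads its settings from a local .env file, is to copy the shipped example and edit the values:

```bash
# Copy the example configuration and adjust it for your institution.
cp example.env .env
```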
Manually start backend¶
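Assuming the Compose file lives in the repository root, bring the backend stack up in detached mode:

```bash
# Start all backend services in the background.
docker compose up -d
```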
This will start:
- API service: http://localhost:8000
- MCP service: http://localhost:8001/mcp
- Attu GUI for Milvus: http://localhost:3000
- Milvus vector database:
- Endpoint: http://localhost:19530
- Diagnostic Web-UI: http://localhost:9091/webui/
- MinIO (internal service)
- etcd (internal service)
Manually initialize DB¶
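The exact initialization command is not reproduced here; as a hedged starting point, list the CLI's subcommands (assuming it supports the usual --help flag) and pick the database initialization step:

```bash
# List available commands; the database-initialization subcommand
# depends on your BEAR version.
uv run bear-init --help
```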
Crawl Academic Data¶
Crawl data from OpenAlex for your institution:
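A hypothetical invocation, assuming the crawler is exposed as a bear-crawl entry point that takes your institution's OpenAlex ID; check the BEAR CLI help for the real command and flags:

```bash
# Hypothetical crawler command; replace the entry point and ID with the ones
# used by your BEAR installation.
uv run bear-crawl --institution-id <openalex-institution-id>
```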
Ingest Data¶
Process and vectorize the crawled data:
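Likewise a hypothetical invocation, assuming an ingest entry point that reads the crawler output, computes embeddings, and writes them into Milvus:

```bash
# Hypothetical ingest command; check the BEAR CLI for the actual entry point.
uv run bear-ingest
```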