MiroFish 500 Error: Why Your Simulation Fails and How to Fix It

By Eric Coste · 2026-03-26 · 8 min read
You set up MiroFish, hit 'start simulation,' and immediately get: Request failed with status code 500. The backend log shows POST /api/graph/ontology/generate HTTP/1.1 500. This is the most common MiroFish error, and it has three distinct causes — each with a different fix.

Cause 1: .env File Not in the Backend Folder

MiroFish reads its configuration from two locations. The frontend reads from the project root .env, but the backend reads from backend/.env. If you only created the root copy during setup, the backend has no API keys and returns 500 on every LLM call. Fix: cp .env backend/.env and restart with npm run dev.
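To see why the location matters, here is a minimal sketch of the pattern (this is an illustrative loader, not MiroFish's actual code, and LLM_API_KEY is a hypothetical variable name): the backend resolves its .env relative to the backend folder, so a root-level file is simply never read.

```python
from pathlib import Path

def load_env(env_path: Path) -> dict:
    """Minimal .env parser: KEY=VALUE lines, '#' comments ignored.
    Illustrative only -- real loaders (python-dotenv, etc.) do more."""
    env = {}
    if not env_path.exists():   # missing file -> empty config,
        return env              # which is why every LLM call then 500s
    for line in env_path.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

# The backend looks in backend/.env, not the project root:
config = load_env(Path("backend") / ".env")
if "LLM_API_KEY" not in config:  # hypothetical key name -- match your .env
    print("No API key found -- LLM calls will fail with 500")
```

If the file is missing, the loader returns an empty config instead of raising, which is exactly how a setup mistake turns into a generic 500 later.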

Cause 2: Using Anthropic's API URL

MiroFish uses the OpenAI SDK internally. If your .env has LLM_BASE_URL=https://api.anthropic.com/v1, every call will fail because Anthropic's API format is different from OpenAI's. Fix: Change to LLM_BASE_URL=https://api.openai.com/v1 and set LLM_MODEL_NAME=gpt-4o. Or use OpenRouter for Claude access via OpenAI format.
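For reference, a working configuration looks like this (variable names follow the article; the OpenRouter model slug is one current example, not the only option):

```
LLM_BASE_URL=https://api.openai.com/v1
LLM_MODEL_NAME=gpt-4o

# Or route Claude through OpenRouter's OpenAI-compatible endpoint:
# LLM_BASE_URL=https://openrouter.ai/api/v1
# LLM_MODEL_NAME=anthropic/claude-3.5-sonnet
```

Remember to make the change in backend/.env, not just the root copy.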

Cause 3: Invalid or Expired API Key

If your key is wrong, expired, or has no credits, the LLM call fails and returns 500. Test manually to isolate the issue — run a simple Python script that sends a chat completion request using your .env credentials. If it responds, the key works.

How to Test Your API Key Manually

Open a new terminal tab, navigate to the MiroFish backend folder, and run a test script that loads your .env and sends a simple 'say hi' request to your configured LLM. If you get a response, the problem isn't your API key — go back and check the .env file location.
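A minimal version of that test script, using only the Python standard library. It assumes your backend/.env uses the LLM_BASE_URL and LLM_MODEL_NAME names from this article; LLM_API_KEY is a hypothetical name for whatever your key variable is called, so adjust to match your file.

```python
import json
import urllib.request
from pathlib import Path

def load_env(path):
    """Tiny .env reader: KEY=VALUE lines only, '#' comments skipped."""
    env = {}
    for line in Path(path).read_text().splitlines():
        if "=" in line and not line.lstrip().startswith("#"):
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

def build_request(base_url, model, api_key):
    """OpenAI-style chat completion request for a quick 'say hi' probe."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "say hi"}],
    }).encode()
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__":
    env = load_env("backend/.env")
    req = build_request(
        env["LLM_BASE_URL"],
        env["LLM_MODEL_NAME"],
        env["LLM_API_KEY"],  # hypothetical variable name -- match your .env
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        reply = json.load(resp)
    print(reply["choices"][0]["message"]["content"])
```

Run it from the MiroFish project root. A KeyError means the variable is missing from backend/.env (Cause 1), an HTTP 404 or 405 usually means the base URL is wrong (Cause 2), and a 401 means the key itself is bad (Cause 3).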

Still Getting 500s After All Fixes?

Check your Zep Cloud API key — an invalid Zep key can also cause 500s during the graph construction phase. Sign up free at app.getzep.com, generate a key, and add it to both .env files. Also check that your Python version is 3.11 — a wrong Python version can cause import failures that surface as generic 500 errors.
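Before chasing config further, it's worth ruling out the Python version in one line. A quick check, assuming the 3.11 requirement stated above:

```python
import sys

def python_matches(required=(3, 11)):
    """True if the running interpreter is exactly the required minor version.
    The article says MiroFish expects Python 3.11; other versions can
    produce import errors that surface as generic 500s."""
    return sys.version_info[:2] == required

if not python_matches():
    print(f"Python {sys.version_info.major}.{sys.version_info.minor} "
          "detected -- MiroFish expects 3.11")
```

Run it with the same interpreter the backend uses; a virtualenv created with the wrong Python will fail this check even if your system Python is 3.11.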

Frequently Asked Questions

What does the MiroFish 500 error mean?

A 500 error means the backend hit an unhandled error while processing the request — usually a failed LLM call during ontology generation. It's almost always a configuration issue, not a code bug.

Can I use DeepSeek with MiroFish?

Yes. Any OpenAI-compatible API works. Set LLM_BASE_URL to your provider's OpenAI-compatible endpoint and the model name accordingly.
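For example, a DeepSeek configuration would look like this (endpoint and model name per DeepSeek's public docs at the time of writing; verify against their current documentation):

```
LLM_BASE_URL=https://api.deepseek.com/v1
LLM_MODEL_NAME=deepseek-chat
```

The same pattern applies to any provider that exposes an OpenAI-compatible /chat/completions endpoint.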

Skip the guesswork. Get DevLaunch.

Interactive setup wizard + AI debugger for MiroFish, OpenClaw, and Claude Code.

GET DEVLAUNCH — $27 →