AI Analysis

AI analysis is optional. You can use auralogs as a structured logger with MCP access without connecting an AI provider. When you do add an Anthropic or OpenAI key, auralogs can generate diagnoses, scheduled reviews, notifications, and optional GitHub autofix PRs.

auralogs runs on a bring-your-own-key model: you connect your Anthropic or OpenAI API key on the project’s Settings page. AI analyses are unlimited; auralogs never resells inference.

You can pick the model independently for two pipelines:

Pipeline               Anthropic option    OpenAI option
Analyze logs with      Claude Sonnet 4     GPT-5 mini
Create fix PRs with    Claude Opus 4.6     GPT-5

A common setup is to start with log analysis only, then add a separate autofix model if you connect GitHub later. Configure both under Settings → Analysis → Model routing.

Every incoming error- or fatal-level log triggers an analysis. The provider receives:

  • The error message and stack trace
  • Recent logs from the same project (for context)
  • Your source code (if a GitHub repo is connected)

The analysis includes:

  • Severity — high, medium, or low based on impact assessment
  • Root cause — what the model thinks went wrong and why
  • Recommendations — specific steps to fix the issue
  • Affected code — file and line references (when source is available)
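Put together, a single analysis record might look like the sketch below. The field names are illustrative assumptions for this example, not auralogs' documented schema:

```python
# Illustrative shape of one analysis record. All field names here are
# assumptions for this sketch, not auralogs' actual API schema.
analysis = {
    "severity": "high",  # one of: high, medium, low
    "root_cause": "Connection pool exhausted under burst traffic",
    "recommendations": [
        "Raise the pool ceiling in the DB config",
        "Add retry-with-backoff around connection checkout",
    ],
    # Present only when a GitHub repo is connected.
    "affected_code": [
        {"file": "app/db.py", "line": 42},
    ],
}

assert analysis["severity"] in {"high", "medium", "low"}
assert all("file" in ref and "line" in ref for ref in analysis["affected_code"])
```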

Analyses are also readable through the MCP tools, so your agent can combine generated diagnoses with raw logs during an investigation.

auralogs also runs periodic reviews across your recent logs using the same provider, looking for:

  • Recurring error patterns
  • Increasing error rates
  • Correlated failures across services
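An "increasing error rate" check of the kind listed above can be sketched as comparing error counts in adjacent time windows. This is a simplified stand-in for whatever trend detection a scheduled review actually performs, not auralogs' real logic:

```python
def error_rate_rising(timestamps, window=3600, factor=2.0):
    """Flag when the most recent window holds at least `factor` times
    as many errors as the window before it. A simplified illustration
    of a trend check, not auralogs' actual review logic.

    timestamps: error times in seconds (any monotonic clock).
    """
    if not timestamps:
        return False
    latest = max(timestamps)
    # Count errors in the newest window and the one preceding it.
    current = sum(1 for t in timestamps if latest - t < window)
    previous = sum(1 for t in timestamps if window <= latest - t < 2 * window)
    return previous > 0 and current >= factor * previous
```

For example, three errors an hour ago followed by seven in the last hour would trip the default 2x threshold, while a flat rate would not.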

Scheduled reviews surface trends that individual error analysis might miss.

If the same error occurs multiple times, auralogs deduplicates analyses so you’re not flooded with repeated findings. The first occurrence triggers a full analysis. Subsequent occurrences within the dedup window are grouped.
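The grouping behavior above amounts to fingerprinting each error and suppressing re-analysis inside a time window. A minimal sketch, assuming a fingerprint over message plus stack trace and a one-hour window (both assumptions, not auralogs' actual parameters):

```python
import hashlib
import time

DEDUP_WINDOW = 3600  # seconds; an assumed value, not auralogs' actual window

_seen = {}  # fingerprint -> (first_seen_timestamp, occurrence_count)

def should_analyze(message, stack, now=None):
    """Return True if this error warrants a fresh analysis, False if it
    falls inside the dedup window of an earlier identical error.
    A sketch of dedup-by-fingerprint; the fingerprint fields and window
    are assumptions for illustration."""
    now = time.time() if now is None else now
    fp = hashlib.sha256(f"{message}\n{stack}".encode()).hexdigest()
    first_seen, count = _seen.get(fp, (None, 0))
    if first_seen is None or now - first_seen >= DEDUP_WINDOW:
        _seen[fp] = (now, 1)
        return True   # first occurrence (or window expired): full analysis
    _seen[fp] = (first_seen, count + 1)
    return False      # grouped with the existing analysis
```

Repeated calls with the same message and stack inside the window return False; once the window lapses, the next occurrence triggers a full analysis again.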

Open your project on auralog.ai, then navigate to Analyses in the sidebar. Each analysis shows the severity badge, summary, and full assessment. Click into an analysis for the detailed view.

If your provider’s key is missing or rejected, the analysis is recorded as skipped instead of running. The dashboard surfaces these in two places:

  • A sticky banner at the top of every project page when a key is missing or rejected.
  • An aggregated card on the Analyses page summarizing the past week’s skipped count.

Adding or rotating the key in Settings clears the banner and resumes analyses on the next error.