
Eric Provencher

Updated Nov 28, 2025

Creator of RepoPrompt

Pioneer in AI-assisted development workflows. Creator of RepoPrompt, a tool for context management and multi-model consultation over the Model Context Protocol (MCP). His hybrid-model approach has become a gold standard for serious AI-powered development.

Context Engineering · Multi-Model Workflows · Developer Tools · MCP · Hybrid AI Workflows

Philosophy

"Manual context curation beats auto-context. Hybrid setups (Opus + GPT-5.1) outperform single-model approaches. Medium reasoning balances speed and accuracy—avoid high reasoning under context pressure."

Recommended Models

GPT-5.1 Pro (Codex 5.1 Pro)

The deep thinker. Unmatched for planning and architecture on complex tasks. Use vanilla (non-tool-calling) mode for pure reasoning after building context.

Use for: Planning, architecture, fleshing out elaborate ideas, deep analysis
Hand off to Opus for actual execution—5.1 Pro plans, Opus implements.

Claude Opus 4.5

The reliable executor. Impressive for implementation, code review, and translating user rambles into coherent prompts. Efficient "light reasoning" model that gets things done.

Use for: Implementation, code review, agentic tasks, prompt refinement
Not as deep as OpenAI alternatives for planning—pair with GPT-5.1 for best results.

GPT-5.1 High (Codex 5.1 High)

The validator. Ideal for second opinions and quick unblocking during iterations. Great bird's-eye view of context via RepoPrompt MCP chat.

Use for: Second opinions, validation, unblocking Opus during iterations
Avoid for primary implementation—can be sloppy with file edits.

Codex-Max Medium

The surgeon. Specifically endorsed for context building in large codebases. More surgical than Opus at selecting relevant files without overthinking.

Use for: Context building, file selection in large codebases
Use medium reasoning—avoid high reasoning due to context pollution from traces.

Kimi-for-coding

The dark horse. First non-Western model in Repo Bench top 10. Scores on par with Opus 4.5 (non-thinking mode). Worth considering for coding tasks.

Use for: Coding tasks, alternative to Western models

Gemini 3 Pro

Mixed results. Weaker at large-context reasoning than Gemini 2.5 Pro. Google is fighting on both the light and deep reasoning fronts, so expect improvements.

Use for: General tasks (with caution)
Don't replace bigger models with smaller ones under context pressure. Caution advised.

Recommended Tools

RepoPrompt + MCP

Essential stack. Manual context curation avoids "junk" in prompts. MCP enables multi-model consultation—consult GPT-5.1 while Opus executes.

Use for: Context building, multi-model workflows, large codebase navigation
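
As a rough illustration of the multi-model consultation pattern, here is a minimal sketch using the official MCP Python SDK to connect to an MCP server over stdio and ask another model for a second opinion. The server command, tool name, and arguments are placeholders, not RepoPrompt's actual interface.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Placeholder launch command; RepoPrompt's real MCP server invocation may differ.
server = StdioServerParameters(command="repoprompt-mcp", args=["--stdio"])

async def consult(question: str) -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover what the server exposes, then call a hypothetical consultation tool.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            result = await session.call_tool(
                "chat_with_model",  # hypothetical tool name
                arguments={"model": "gpt-5.1", "prompt": question},
            )
            print(result)

asyncio.run(consult("Does this plan cover the migration edge cases?"))
```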

Recommended Workflows

Hybrid Opus + GPT-5.1 Workflow

The optimal setup: GPT-5.1 Pro for planning → Opus 4.5 for implementation → GPT-5.1 High for review/validation. Iterate with Opus, consult 5.1 when stuck.

Use for: Complex development tasks, large features, architectural work
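
A minimal sketch of the hand-off, assuming the OpenAI and Anthropic Python SDKs. The model IDs and prompts are placeholders for illustration, not Provencher's exact configuration.

```python
from openai import OpenAI
from anthropic import Anthropic

openai_client = OpenAI()
anthropic_client = Anthropic()

PLANNER = "gpt-5.1-pro"       # placeholder model ID
EXECUTOR = "claude-opus-4-5"  # placeholder model ID
REVIEWER = "gpt-5.1"          # placeholder model ID

def plan(task: str, context: str) -> str:
    # Planning pass: no tools, pure reasoning over the curated context.
    resp = openai_client.chat.completions.create(
        model=PLANNER,
        messages=[
            {"role": "system", "content": "Produce a step-by-step implementation plan."},
            {"role": "user", "content": f"{context}\n\nTask: {task}"},
        ],
    )
    return resp.choices[0].message.content

def implement(plan_text: str, context: str) -> str:
    # Execution pass: Opus turns the plan into concrete changes.
    resp = anthropic_client.messages.create(
        model=EXECUTOR,
        max_tokens=4096,
        messages=[{"role": "user", "content": f"{context}\n\nFollow this plan:\n{plan_text}"}],
    )
    return resp.content[0].text

def review(diff: str) -> str:
    # Validation pass: a second model sanity-checks the result.
    resp = openai_client.chat.completions.create(
        model=REVIEWER,
        messages=[{"role": "user", "content": f"Review this change for correctness:\n{diff}"}],
    )
    return resp.choices[0].message.content
```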

Pro Tips

Use Medium Reasoning

Medium reasoning levels balance speed and accuracy. High reasoning can pollute context with traces. Avoid over-relying on high reasoning or small models under context pressure.

Use for: All model interactions, especially context building
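
On OpenAI-style APIs the reasoning effort is a request parameter. A hedged example using the Responses API (the model ID is a placeholder):

```python
from openai import OpenAI

client = OpenAI()

resp = client.responses.create(
    model="gpt-5.1",                 # placeholder model ID
    reasoning={"effort": "medium"},  # medium over high keeps traces from crowding the context
    input="Select the files relevant to the auth refactor and explain why.",
)
print(resp.output_text)
```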

Manual Context Curation

Curate context manually to avoid "junk" in prompts. Auto-context can include irrelevant files. Surgical file selection beats dumping everything.

Use for: Before any planning or implementation task
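
One way to apply this outside RepoPrompt is a plain script that concatenates an explicit allow-list of files rather than globbing the repository. The paths below are hypothetical.

```python
from pathlib import Path

# Surgical selection: an explicit allow-list, not a repo-wide dump.
SELECTED_FILES = [
    "src/auth/session.py",   # hypothetical paths
    "src/auth/tokens.py",
    "tests/test_session.py",
]

def build_context(repo_root: str, files: list[str]) -> str:
    """Concatenate only the chosen files, each under a clear header."""
    parts = []
    for rel in files:
        text = (Path(repo_root) / rel).read_text()
        parts.append(f"### {rel}\n{text}")
    return "\n\n".join(parts)

prompt = build_context(".", SELECTED_FILES) + "\n\nTask: tighten session expiry handling."
```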