A MAGI System-like Approach: Ask 3 AIs the Same Question, Adopt If Unanimous
The other day, I wrote this in our company Slack:
“It’s really helpful that I can consult AI and decide on an approach for handling xx. A little while ago, I think I would have had to do a lot of Googling or consult experts to make a decision.”
It was a moment when I realized how much AI has lowered the barrier to decision-making in technical areas where I’m not an expert but where well-established best practices exist.
That said, blindly accepting AI responses is risky. So the approach I’ve adopted is to ask the same question to 3 AIs and adopt the answer if they unanimously agree.
The “MAGI System” from Evangelion makes decisions through a consensus of three supercomputers (MELCHIOR, BALTHASAR, and CASPAR). My approach is inspired by this.
It’s not the actual MAGI System, but I operate on the principle that if all 3 AIs give the same answer, it’s probably reliable information.
At the time of writing, I define the following three as “3 AIs”:
| Provider | Service |
|---|---|
| Anthropic | Claude |
| Google | Gemini |
| OpenAI | ChatGPT |
These are currently the most widely used major AI services, each with different training data and design philosophies.
The number three has significance:
- **Ensuring Diversity:** AIs from different providers are built on different training data and design philosophies. Multiple perspectives can catch biases and errors that a single AI might miss.
- **Minimum Unit for Consensus:** With two, you can't break a tie when opinions split. With three, at least a majority vote is possible (see the sketch after this list).
- **Realistic Cost:** Going to four or more increases the verification effort too much. Three strikes the right balance between sufficient diversity and realistic operational cost.
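For illustration, here is a tiny Python sketch of that three-way vote. It assumes I've already boiled each AI's free-form answer down to a short conclusion by hand; the provider names and answer strings are purely hypothetical examples.

```python
from collections import Counter

def judge(answers: dict[str, str]) -> str:
    """Classify three hand-summarized AI conclusions as unanimous, majority, or split."""
    counts = Counter(a.strip().lower() for a in answers.values())
    top_answer, top_count = counts.most_common(1)[0]
    if top_count == 3:
        return f"unanimous: adopt '{top_answer}'"
    if top_count == 2:
        return f"majority: lean toward '{top_answer}', but verify against official docs"
    return "split: dig deeper with follow-up questions or ask an expert"

# Hypothetical, hand-summarized conclusions from the three AIs.
print(judge({
    "Claude": "Use optimistic locking",
    "Gemini": "Use optimistic locking",
    "ChatGPT": "Use pessimistic locking",
}))
# -> majority: lean toward 'use optimistic locking', but verify against official docs
```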
The practical method is simple:
1. Ask the same question to each of the 3 AIs (a minimal script sketch follows this list).
2. If all three give essentially the same answer, adopt it.
3. When opinions differ, dig deeper with follow-up questions or verify with official documentation and expert opinions.
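To make the flow concrete, here is a minimal sketch in Python that sends one question to all three services and prints the answers side by side. It assumes each provider's official SDK (`anthropic`, `google-generativeai`, `openai`) with API keys read from environment variables; the question and model names are placeholder assumptions that may be out of date.

```python
import os

import anthropic
import google.generativeai as genai
from openai import OpenAI

# Placeholder question for illustration.
QUESTION = "What is the recommended way to run database migrations in a zero-downtime deploy?"

def ask_claude(question: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model name
        max_tokens=1024,
        messages=[{"role": "user", "content": question}],
    )
    return msg.content[0].text

def ask_gemini(question: str) -> str:
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])  # assumed env var name
    model = genai.GenerativeModel("gemini-1.5-pro")  # assumed model name
    return model.generate_content(question).text

def ask_chatgpt(question: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    # Print the three answers side by side for manual comparison.
    for name, ask in [("Claude", ask_claude), ("Gemini", ask_gemini), ("ChatGPT", ask_chatgpt)]:
        print(f"=== {name} ===")
        print(ask(QUESTION))
        print()
```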
This method is particularly effective for questions in mature areas where well-established best practices already exist—exactly the kind of decision I would previously have had to research heavily or ask an expert about. Conversely, for cutting-edge technologies or fields where best practices haven't been established yet, AI responses tend to diverge, making this method less reliable.
Asking 3 AIs individually is, honestly, time-consuming. Copying and pasting the same question three times and comparing each response is tediously repetitive.
This is where Giselle comes in handy.
Giselle is a no-code platform for building AI workflows, with the key feature that you can combine multiple AI models like GPT, Claude, and Gemini within a single workflow.
This means you can input one question and query 3 AIs simultaneously, then display the results side by side for comparison. You just drag and drop nodes and connect them like drawing a flowchart—no programming knowledge required.
As someone involved in Giselle’s development, I believe this “multi-AI consensus” use case is where Giselle’s strengths really shine.
For more details, visit the Giselle official site or Giselle documentation.
With the evolution of AI, the barrier to technical decision-making has clearly come down. Still, rather than relying on a single AI, adopting a "consensus approach" across multiple AIs leads to more reliable decisions.
It’s not a perfect decision-making mechanism like the MAGI System, but the pragmatic approach of “if 3 AIs unanimously agree, it’s probably right” has been streamlining my daily development decisions.
That’s all from the Gemba, where I’m leveraging 3 AIs in a MAGI System-like fashion.