Why AI outputs feel 60% relevant
-
Moe Hachem - February 26, 2026
There’s a particular type of frustrating AI output that most teams recognize right away. It’s well-structured and grammatically correct. It seems to understand the question. Yet it’s only about 60% relevant. It’s close enough to show the model grasped what you wanted, but wrong enough that you end up rewriting most of it anyway.
The team blames the prompt. They try again with more detail. The output improves slightly but still misses the mark.
Here’s what’s really happening. When AI gets a question without context—without knowing your product, your clients, your decisions, or your language—it defaults to matching patterns against the broadest interpretation of your question. It finds the average answer to a question that resembles yours and delivers it confidently.
Your question isn’t average. Your context isn’t generic. But AI can’t know that unless someone creates the layer that provides that information.
This leads to three overlapping problems I often see across teams:
- Output quality. Generic answers require just as much time to fix as to write from scratch. The team uses AI but isn’t saving time.
- Context loss. Every session starts from scratch. Decisions made last week aren’t remembered. The team has to re-explain the same context and answer the same questions again. They’re paying twice for the same knowledge.
- False reliability. AI sounds authoritative. Teams trust outputs they shouldn’t. The confidence in the delivery hides the lack of relevant context.
These aren’t problems with prompting. You can’t prompt your way out of a structural gap. The model doesn’t know you, and there’s no organized way for it to learn.
The solution isn’t a better prompt library or a pricier model. It’s building the orientation layer—the structured, AI-friendly surface of your team’s accumulated knowledge—that makes relevance possible before execution starts.
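To make the idea concrete, here is a minimal sketch of what an orientation layer can look like in code: a structured record of team knowledge that is serialized and prepended to every question before it reaches the model. All names here (`TeamContext`, `build_prompt`, the example product and decisions) are hypothetical illustrations, not a real API or a real client.

```python
from dataclasses import dataclass, field

@dataclass
class TeamContext:
    """A structured, AI-friendly surface of a team's accumulated knowledge.

    This is an illustrative sketch: in practice the fields and their
    contents would come from your own docs, decision logs, and glossary.
    """
    product: str
    clients: list[str]
    decisions: list[str] = field(default_factory=list)
    glossary: dict[str, str] = field(default_factory=dict)

    def to_prompt_block(self) -> str:
        # Serialize the context into a plain-text block a model can read.
        lines = [f"Product: {self.product}",
                 "Clients: " + ", ".join(self.clients)]
        if self.decisions:
            lines.append("Recent decisions:")
            lines += [f"- {d}" for d in self.decisions]
        if self.glossary:
            lines.append("Team vocabulary:")
            lines += [f"- {term}: {meaning}"
                      for term, meaning in self.glossary.items()]
        return "\n".join(lines)

def build_prompt(context: TeamContext, question: str) -> str:
    # Prepend the orientation layer so the model answers *your* question,
    # not the average question that resembles it.
    return f"{context.to_prompt_block()}\n\nQuestion: {question}"

# Hypothetical example team.
ctx = TeamContext(
    product="Invoicing API for freelancers",
    clients=["solo designers", "small agencies"],
    decisions=["Dropped the mobile app; web-only for 2026"],
    glossary={"client": "the freelancer's customer, not our user"},
)
print(build_prompt(ctx, "Draft onboarding copy for new clients."))
```

The point of the sketch is the structure, not the format: because the layer persists between sessions, last week's decisions and the team's vocabulary arrive with every question instead of being re-explained each time.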
This is the work I do with teams. I don’t provide AI training or tool demonstrations. I design the structure that makes AI outputs actually useful in your specific context.