The default in 2026 is to reach for AI first. New project? Is there an AI angle? Efficiency problem? Could a model help? The question has inverted: "should we use AI here?" has become "why wouldn't we?"

This post is the answer to that question. Not because AI isn't useful — we build AI integrations for a living — but because the best technology decision is the right one for the problem, not the most recent one.

When the Problem is Actually a Process Problem

AI doesn't fix bad processes. It automates them, which is worse. If a team spends 20 hours a week manually reconciling two spreadsheets, building an AI to do the reconciliation faster doesn't address the reason two separate spreadsheets exist. Before scoping an AI solution, ask: if a new employee joined and we explained this process to them, would they think it made sense? If the answer is no, fix the process first.

When You Don't Have Enough Data

Machine learning models need data. Not aspirationally — specifically. A classification model trained on 200 examples will perform poorly on the 201st edge case. The honest question is: do we have enough labeled, clean, representative data to train and evaluate this system? If the answer is "we'll collect it as we go," that's a sign to wait.
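One cheap way to make "do we have enough data" concrete is to count labeled examples per class before any modeling begins. A minimal sketch; the labels and the 100-per-class threshold are illustrative assumptions, not a universal standard:

```python
from collections import Counter

def data_readiness(labels, min_per_class=100):
    """Count labeled examples per class and flag classes too rare
    to both train on and hold out for evaluation."""
    counts = Counter(labels)
    too_rare = {cls: n for cls, n in counts.items() if n < min_per_class}
    return counts, too_rare

# Illustrative label set: plenty of "ok", very little "fraud".
labels = ["ok"] * 950 + ["fraud"] * 50
counts, too_rare = data_readiness(labels)
# too_rare now contains {"fraud": 50} -- the class you most care
# about is exactly the one you can't train or evaluate on.
```

If the classes you care most about land in the "too rare" bucket, collecting data is the project, and the model comes later.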

When a Simple Rule Would Do

There's a version of AI enthusiasm where every pattern-matching problem gets framed as a machine learning problem. If you can describe the logic in plain language — "flag any invoice where the amount exceeds $10,000 and the vendor isn't on the approved list" — that's a rule, not a model. Rules are cheaper to build, easier to audit, and more reliable in production.
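The invoice rule above translates directly into code. A minimal sketch; the threshold, field names, and approved-vendor list are illustrative, not a real policy:

```python
# A plain rule: no training data, no model, and every decision
# can be explained by reading five lines of code.
APPROVED_VENDORS = {"Acme Corp", "Globex"}  # illustrative list

def flag_invoice(amount: float, vendor: str) -> bool:
    """Flag any invoice over $10,000 from a vendor not on the approved list."""
    return amount > 10_000 and vendor not in APPROVED_VENDORS

flag_invoice(12_500, "Initech")    # True: large amount, unapproved vendor
flag_invoice(12_500, "Acme Corp")  # False: vendor is approved
flag_invoice(9_000, "Initech")     # False: under the threshold
```

Auditing this rule means reading it. Auditing a model that learned the same behavior means probing it with test cases and hoping you found the edge cases first.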

When the Stakes of Being Wrong Are High and Opaque

AI systems fail in ways that are harder to detect than rule-based systems. In high-stakes contexts — medical decisions, financial risk, legal compliance — the requirement isn't just that the system is usually right. It's that you can explain why it produced each output, audit the decision trail, and catch failures before they compound.

When Adoption Is the Real Problem

The most sophisticated AI system in the world has zero value if the team it was built for doesn't use it. Adoption failures are more common than technical failures. If the team currently avoids the process entirely, or uses workarounds, or has strong opinions about how the work should be done, those are warning signs that adoption, not engineering, will be the hard part.

The Checklist

Before committing to an AI solution, ask:

1. Is the underlying process actually well-designed?
2. Do we have sufficient, clean, labeled data?
3. Is the problem genuinely beyond what a rule or simple heuristic could solve 90% of?
4. Can we explain and audit the system's outputs if something goes wrong?
5. Is the team ready to adopt a new tool?
6. Is the product stable enough to justify the maintenance overhead?

If you can answer yes to all six, the AI approach is probably right.