Strategy · 10 min read · February 14, 2026

The 3 AI Projects Every Company Should Kill (And What to Do Instead)


Alex Ryan
CEO & Co-Founder

Every organization has a graveyard of AI projects.

They’re not officially dead. They’re “in development” or “being refined” or “in the next phase.” But everyone knows the truth: they’re never going to deliver value. They exist because nobody has the authority or the courage to kill them.

Here are the three most common zombies we see — and what you should build instead.

1. The “AI-Powered” Dashboard Nobody Uses

What It Looks Like

Someone built a beautiful dashboard with “AI-powered insights.” It’s got predictive analytics, trend detection, anomaly highlighting, and a dozen charts that update in real time. It was demoed to leadership six months ago to enthusiastic applause.

Nobody uses it.

The operations team still runs their reports from the same Excel spreadsheet they’ve been using for three years. The “insights” the dashboard surfaces are either obvious (“sales dip in January” — yes, we know) or unexplainable (“anomaly detected in Segment 7” — what does that mean and what should I do about it?).

Why It Fails

The dashboard was built for a demo, not for a workflow. Nobody asked the people who’d actually use it what they needed. The “insights” don’t connect to decisions anyone actually makes. The data refresh cycle doesn’t match the cadence of the business. And the anomaly detection generates so many alerts that people stopped reading them in week two.

What to Do Instead

Talk to the actual decision-makers. Not their managers — the people who make day-to-day operational decisions. Ask: “What information do you need, when do you need it, and what decision does it inform?” Then build exactly that — nothing more. If a simple alert that fires twice a week is more valuable than a 50-widget dashboard, build the alert.
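
To make the contrast concrete, here’s a minimal sketch of what “just the alert” might look like. Everything in it is hypothetical: the function name, the 500-order threshold, and the wording are stand-ins for whatever the decision-maker actually told you they need.

```python
def get_weekly_orders() -> int:
    # Placeholder: in practice, this is the same query that feeds
    # the Excel report the team already trusts.
    return 430


def order_volume_alert(threshold: int = 500) -> str | None:
    """Return a message only when there is a decision to make."""
    orders = get_weekly_orders()
    if orders < threshold:
        return (f"Order volume is {orders} this week, below the "
                f"{threshold} floor. Review open quotes before "
                "Friday's planning meeting.")
    return None  # nothing actionable, so stay silent


if __name__ == "__main__":
    message = order_volume_alert()
    if message:
        print(message)  # in production: send via email, Slack, etc.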

2. The Chatbot That Makes Customers Angrier

What It Looks Like

Someone deployed a chatbot on the customer support page. It was supposed to handle 40% of inquiries automatically, reducing support costs and improving response times.

Instead, customers spend five minutes arguing with a bot that can’t understand their problem, then call the support line more frustrated than if they’d just called in the first place. Support calls are now longer because agents have to deal with the customer’s chatbot frustration before addressing the actual issue. Customer satisfaction scores dropped.

But the chatbot reports 30,000 “interactions” per month, so the metrics look great on paper.

Why It Fails

Chatbots fail when they’re deployed to cut costs rather than improve the customer experience. When the goal is “deflect 40% of tickets,” you build something optimized for deflection, not resolution. The bot tries to handle everything, fails at most of it, and the failures create worse experiences than no bot at all.

What to Do Instead

Identify the 3-5 inquiries that are genuinely simple, high-volume, and have unambiguous resolution paths. Think password resets, order tracking, account balance checks. Build the bot to handle only those — and make it trivially easy to reach a human for anything else. Measure success by customer satisfaction, not deflection rate.
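
As a sketch of that architecture, with hypothetical intent names and placeholder replies standing in for real back-end integrations, the routing logic can be this simple:

```python
# Handle only the short list of unambiguous, high-volume intents.
# Intent names and canned replies are hypothetical stand-ins.
HANDLED_INTENTS = {
    "password_reset": lambda msg: "Here is your password reset link: <link>",
    "order_tracking": lambda msg: "Your order status is: <status>",
    "account_balance": lambda msg: "Your current balance is: <balance>",
}


def route(intent: str, message: str) -> str:
    handler = HANDLED_INTENTS.get(intent)
    if handler is None:
        # The default path is a person, not another bot loop.
        return "Connecting you to a support agent now."
    return handler(message)


print(route("refund_dispute", "My invoice is wrong"))
# -> Connecting you to a support agent now.
```

The design decision that matters is the default branch: anything outside the short list goes straight to a human, and success is measured by what happens after that handoff, not by how many messages the bot absorbed.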

3. The Predictive Model Nobody Trusts

What It Looks Like

The data science team built a predictive model — demand forecasting, equipment failure prediction, customer churn risk, whatever. The model’s accuracy is impressive. The back-testing results are beautiful. The team presented it to stakeholders with genuine enthusiasm.

The stakeholders said “interesting” and went back to using their gut instinct and Excel spreadsheets.

Why It Fails

The model fails the trust test on three levels:

Transparency: When the model says a customer is “high churn risk,” nobody can explain why. The model is a black box. When the prediction doesn’t match someone’s experience (“I talked to that customer last week, they’re fine”), there’s no way to resolve the disagreement between human judgment and model output.

Integration: The predictions exist in a separate system. Using them requires logging into a different tool, pulling a report, and manually applying the recommendations. It’s faster to just go with your gut.

Accountability: When the model is wrong — and it will be sometimes — nobody has a framework for understanding acceptable error rates or adjusting their behavior. Either they trust the model completely (dangerous) or they don’t trust it at all (wasteful).

What to Do Instead

Before building the model, design the decision framework. Who will use the prediction? What decision will it inform? What happens when the prediction is wrong? What confidence threshold triggers action versus human review? Build the model into the existing workflow — don’t ask people to use a separate system. And make the model explainable: “This customer is high churn risk because their usage dropped 40% last month and they haven’t responded to the last two emails.”
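
Here’s a minimal sketch of that framework, using the churn example. The score, the thresholds, and the reasons are hypothetical; the point is that every prediction ships with its evidence and a routing decision that was agreed on before the model went live.

```python
from dataclasses import dataclass


@dataclass
class ChurnAssessment:
    score: float        # 0.0 to 1.0, from whatever model you build
    reasons: list[str]  # human-readable evidence behind the score


def route_assessment(a: ChurnAssessment,
                     act_above: float = 0.8,
                     review_above: float = 0.5) -> str:
    """Map a prediction to an action the account team already takes."""
    why = "; ".join(a.reasons)
    if a.score >= act_above:
        return f"Trigger retention outreach ({why})"
    if a.score >= review_above:
        return f"Flag for account-manager review ({why})"
    return "No action"


# Example: the explanation from the paragraph above, attached to the score.
assessment = ChurnAssessment(
    score=0.86,
    reasons=["usage dropped 40% last month",
             "no response to the last two emails"],
)
print(route_assessment(assessment))
```

The exact thresholds matter less than the fact that they exist: they give people a shared answer to “when do I act on this, and when do I override it?”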

The Common Thread

All three of these failures share the same root cause: they were built to showcase technology rather than solve a business problem. The dashboard was built because someone wanted to show “AI-powered analytics.” The chatbot was built because “everyone has a chatbot now.” The predictive model was built because “we should be doing machine learning.”

None of them started with: “What specific problem are we solving, who benefits, and how will we measure success?”

How to Kill These Projects

Killing a project is politically hard. Here’s the framework we use:

  1. Reframe it as a pivot, not a failure. “We’ve learned valuable lessons from this initiative. Here’s how we’re applying them to something with higher impact.”

  2. Quantify the ongoing cost. Every zombie project consumes resources — hosting, maintenance, people’s attention, opportunity cost. Put a number on it. “This project costs us $15K/month in cloud hosting and 20 hours/week of engineering time for something nobody uses.”

  3. Redirect the resources. Don’t just kill it — immediately reallocate the team and budget to something with a clear business case. Leaders are more willing to end something when there’s a better alternative.

  4. Get executive cover. The person who kills a project should be more senior than the person who started it. Otherwise, it becomes personal.

What to Build Instead

The best AI projects share four characteristics:

  1. Process acceleration — they make an existing process faster, not different. People are already doing the work; AI just does it faster or handles the routine cases so humans focus on exceptions.

  2. Decision support — they help humans make better decisions with better information, rather than trying to replace human judgment entirely.

  3. Quality improvement — they catch errors, flag anomalies, or ensure consistency in ways that humans can verify and trust.

  4. Insight discovery — they surface patterns in data that humans couldn’t find manually, but present them in ways that connect to specific business decisions.

Notice what’s not on this list: “impress the board,” “keep up with competitors,” or “prove we’re innovative.”


Have AI projects that might need killing? Get Your AI Readiness Score to get an honest assessment, or book a 30-minute call to talk through your AI portfolio.

AI Strategy · ROI · Project Management · Leadership

If this is the kind of thinking you want in your inbox, The Logit covers AI strategy for industrial operators every two weeks. No vendor content. No hype. Just honest takes from practitioners.

Subscribe to The Logit
About the author
Alex Ryan
CEO & Co-Founder at Ryshe

Alex Ryan is CEO of Ryshe, where he helps engineering and manufacturing companies build the data foundations that make AI projects actually deliver. He's spent over a decade in the gap between what vendors promise and what ships to production. He's learned to tell clients what they need to hear, not what they want to hear.

Want to Discuss This Topic?

Let's talk about how these insights apply to your organization.