Three things to know
- The most common reason AI initiatives stall is not technology. It is picking the wrong starting point and running out of momentum before anything works.
- Most companies cannot identify their best AI opportunity themselves, because the waste in their processes has been normalized over years.
- A structured discovery process (interviewing the people doing the work, mapping real processes, and calculating actual time costs) surfaces better candidates in days than most teams find in months of internal debate.
This post, and about forty others like it, ends the same way: pick a workflow, start there, expand from it.
It is good advice. It is also incomplete. Because the question most people are sitting with after reading something like that is not "should I pick a workflow?" It is "which one?"
That question sounds simple. It is not. And the gap between good general advice and a specific, defensible answer to that question is exactly where most AI initiatives lose momentum.
You probably cannot see your own best opportunity
The process you are running today, no matter how inefficient it is, has been normalized. You hired people to do it. You trained people to do it. You watched it work well enough, for long enough, that the hours it consumes feel like the cost of doing business rather than a problem worth solving.
This is proximity blindness. Not a character flaw. It is what happens when you are inside a system: you stop seeing the system.
I have sat through enough of these conversations to know what it looks like from both sides. The person closest to the work explains their job in full detail. They include the workarounds they built over time. The manual steps that compensate for tool limitations. The spreadsheet that three people have to update every Friday before the Monday report can run. They walk through all of it without once flagging any of it as unusual. It is just how the job works. To them, it is invisible.
An outsider sees something different. An outsider sees a half-day exercise in reformatting information that already exists somewhere else. An outsider sees three people doing a job that one person with better tools should be able to do in an hour.
This is also why internal AI brainstorming sessions rarely produce actionable results. You get a list of ideas that sound reasonable in a conference room and mostly do not survive contact with the actual work. The people in the room know their jobs, but they are the wrong people to diagnose the inefficiency in those jobs. That requires a different vantage point.
What structured discovery actually looks like
The way to find your best AI opportunity is not to brainstorm it. It is to go through a process that looks at what your people actually do, how long it takes, and whether any of it fits the profile of work that AI handles well.
Here is what that process looks like when we run it.
Interview the people doing the work, not just leadership
Leadership knows strategy. They do not know what their team does between 9 and 10 on a Tuesday. The most revealing conversations happen two or three levels down, with the people who do the work every day and have never been asked to explain it to anyone from outside the department. That is where the real picture is.
Map the real process, not the documented one
Every organization has SOPs. Very few people follow them exactly. The actual process includes the Slack threads, the spreadsheet workarounds, the handoffs that function because two specific people happen to communicate well. If you map the SOP, you will automate the SOP. If you map what is actually happening, you will find what is actually worth fixing.
Calculate the time cost before evaluating any solution
Volume times time times salary. How many times per week does this task happen? How long does it take per instance? What does it cost in people hours across the team? That math converts a vague "we spend a lot of time on this" into a dollar figure. Dollar figures are what get things funded. They are also what let you measure whether the solution actually worked.
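The math above can be sketched in a few lines. The task, rate, and frequency below are made-up illustrative numbers, not benchmarks; the only thing the sketch asserts is the shape of the calculation.

```python
# Back-of-envelope version of volume x time x salary.
# All inputs are illustrative examples, not benchmarks.

def annual_time_cost(runs_per_week, minutes_per_run, loaded_hourly_rate,
                     weeks_per_year=48):
    """Convert a recurring task into an annual dollar figure."""
    hours_per_week = runs_per_week * minutes_per_run / 60
    return hours_per_week * weeks_per_year * loaded_hourly_rate

# Example: a report assembled 5 times a week, 90 minutes each run,
# by someone whose fully loaded cost is $60/hour.
cost = annual_time_cost(runs_per_week=5, minutes_per_run=90,
                        loaded_hourly_rate=60)
print(f"${cost:,.0f} per year")  # 7.5 h/week x 48 weeks x $60 = $21,600
```

Run this for each candidate workflow and you have the dollar figures that get things funded, plus the baseline against which to measure whether the solution worked.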
Score each candidate for AI fit
Not every slow, repetitive process is a good AI candidate. The work AI handles well has a recognizable profile: consistent input format, clear quality criteria, some tolerance for occasional errors, and a task that is fundamentally about moving information from one place to another. Work that requires nuanced judgment, relationship context, or creative synthesis is a harder fit. Not impossible, but the ROI math looks different and the implementation path is longer.
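One way to make that fit profile concrete is a simple weighted rubric over the four criteria above. The weights and the 1-to-5 scale here are illustrative assumptions, not a validated model; the point is to force candidates into a comparable score rather than a gut feel.

```python
# Minimal scoring rubric for AI fit using the four criteria above.
# Weights and the 1-5 scale are illustrative assumptions, not a validated model.

CRITERIA = {
    "consistent_input_format": 0.3,
    "clear_quality_criteria": 0.3,
    "error_tolerance": 0.2,
    "information_transfer_task": 0.2,
}

def fit_score(ratings):
    """Weighted average of 1-5 ratings per criterion; higher = better AI fit."""
    return sum(CRITERIA[name] * ratings[name] for name in CRITERIA)

# Example: a weekly status-aggregation task scored by the team.
ratings = {
    "consistent_input_format": 5,
    "clear_quality_criteria": 4,
    "error_tolerance": 4,
    "information_transfer_task": 5,
}
print(round(fit_score(ratings), 1))  # 4.5 out of 5
```

Work that scores low on error tolerance or input consistency is not disqualified, but as the section notes, the ROI math looks different and the implementation path is longer.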
McKinsey's research found that organizations reporting significant financial returns from AI were twice as likely to have redesigned workflows end-to-end before selecting any technology. The numbers behind that finding are worth sitting with: companies that redesigned processes before deploying AI achieved cost savings of up to 25 percent in affected functions. Companies that layered AI on top of unchanged processes averaged around 5 percent. Same tools, very different starting points.
What tends to show up
After running this process across a range of industries, the same categories of work keep surfacing as strong candidates.
Research and summarization is usually the first thing. Someone, somewhere in the organization, regularly pulls information from multiple sources, assembles it, formats it, and sends it somewhere. That might be a competitive brief before a pitch. A market update that goes to a partner group. A prospect summary before a sales call. It is almost always slower and more manual than it needs to be.
First-draft generation is the second. Proposals, reports, outreach emails, project summaries. Someone starts from a blank page every time, even though the last ten versions of this document are sitting in a folder. The structure is the same. The content changes. AI handles that gap well.
Document review and extraction comes up constantly: contracts, agreements, filings, applications, anywhere someone is reading through pages of material to find specific information, flag specific clauses, or pull specific terms. It is reading with a checklist. AI reads faster and does not get tired.
Status aggregation is the fourth. Someone, usually a manager or an ops person, regularly pulls information from multiple systems to answer a recurring question: where are all the deals, what is the project status, which accounts need attention this week. It is not hard work. It takes an hour that should take five minutes.
McKinsey estimates that 60 percent of employees could save at least 30 percent of their time if the repetitive parts of their job were automated. Across a team of ten, that works out to roughly two full-time people worth of hours recovered every week. Most organizations have no clear picture of where exactly that time is going, because nobody has gone to find out.
Almost every company we have worked with has at least two or three of these. Most have more. The reason they have not automated them is not that the problem is too complex. It is that nobody has been assigned to look.
Why the starting point matters more than you think
Once you have a specific candidate, you build the workflow, test it on real work, and run a training process that gets your team operating differently. That is the execution phase. It is real work, and it is doable.
An S&P Global survey from 2025 found that 42 percent of companies abandoned most of their AI initiatives during the year, up from 17 percent the year before. The most common reason was not technical failure. It was that teams could not demonstrate ROI, because they had not picked a starting point with a measurable before-and-after. They built something vague and got vague results.
But here is what often gets missed: the first workflow you implement is not primarily about optimizing that process. It is about proof.
Proof that AI works for your team. Proof that the investment was worth something. Proof that you can point to in the next conversation when someone asks whether any of this is real. The first win changes the internal conversation from "should we do this" to "what is next." That shift is what makes everything after it easier.
Which means picking something with a clear before-and-after matters. Something where the time savings are visible and the quality of the output is easy to evaluate. A workflow that takes two hours and becomes twenty minutes is a better starting point than a workflow that takes a week and becomes four days, even if the absolute hours saved are similar. People need to feel the change. That feeling is what drives adoption beyond the pilot.
The companies that have built real AI capability did not start with the most sophisticated use case. They started with something that produced a win fast enough to justify the next step. Each win funded the next experiment. That compounding is how you end up, twelve months later, with AI running across your operations rather than sitting in a demo nobody uses.
What to do if you do not know where to start
Most companies are in this position and do not say so. They have AI licenses. They have done some training. They have a vague sense that they should be doing more. But nobody has done the work of looking at the actual processes, calculating the actual costs, and identifying the actual candidates worth pursuing.
That work does not take months. A focused discovery engagement (structured interviews with the right people, process mapping across two or three departments, and a clear prioritization of what to build first) can be done in a matter of weeks. What you come out with is not a strategy document. It is a specific workflow, a cost baseline, and a realistic plan for implementation.
That is the starting point. Everything else follows from it.
Not sure which workflow to start with?
That is exactly what we help companies figure out. Our AI Workflow Audit is a structured discovery process: we interview your team, map your real processes, calculate where the time is going, and walk away with a prioritized list of automation candidates. Most engagements surface three to five serious candidates in the first round. Book a free initial conversation.
Book Your Free Audit