I stopped debating whether Krish Services needed an AI strategy about six months ago. Not because I had all the answers, but because waiting felt worse than starting. So I started building a team.
This is the story of that decision, what we're actually doing, and what I'd honestly tell you if you asked me whether you should do the same thing.
The question isn't whether. It's where.
AI moves fast enough that your evaluation cycle can outlast the thing you're evaluating. I watched this happen for two years. Teams would plug ChatGPT into a workflow or roll out Copilot and call it an AI strategy. Meanwhile, the real work went untouched: building AI capabilities that actually set your company apart from every other shop doing the same implementations.
We're a 100-person global services company. No massive R&D budget. No research lab with PhDs. What we have is decades of enterprise delivery experience and a growing list of clients asking for AI capabilities we couldn't deliver by configuring someone else's product.
They didn't want a ChatGPT wrapper. They needed systems that understood their domain, their data, their specific workflows, and they needed those systems running in their environment with their data never leaving their control. Security and data sovereignty come up in the first meeting with almost every enterprise client we work with. That kind of thing requires actual engineering. Private AI infrastructure. Custom retrieval pipelines. We weren't set up to do any of it.
So I started putting together an AI Center of Excellence at Krish Services. A team with a charter, budget, and deadlines.
What a CoE actually is (and isn't)
The term gets thrown around loosely, so let me be specific. A CoE is not a committee that writes AI guidelines and emails them out as a PDF. It's a team that builds things, teaches people, and ships working software.
Our charter is pretty simple: build AI capabilities that go beyond what you can get from off-the-shelf platform tools. Build things that give Krish Services actual differentiation, both for our clients and for ourselves.
Day to day, we do a few things. We make our products and services AI-ready, meaning AI is part of the design from the start rather than something tacked on at the end. We run an internal task force that finds manual processes inside our own company and automates them, because if we can't eat our own dog food we shouldn't be selling AI to clients. And we run a continuous R&D cycle that takes experiments and turns them into things we can reuse across engagements.
Finding the right people is the hard part
On paper, standing up a CoE is straightforward. In practice, the biggest bottleneck is talent. You need people who understand both AI systems and enterprise delivery, and that combination is genuinely hard to find. You can't just hire ML engineers and throw them at client projects. They'll build something clever that nobody can deploy. And you can't just train your existing delivery team on prompt engineering and expect them to architect agentic systems.
But here's something I learned: before you go to the market, look inside your own org. Some of our best CoE contributors weren't people we hired for AI. They were engineers and architects already on the team who stepped up, put in extra time learning, and brought something that outside hires couldn't: they knew our clients, our delivery patterns, and our codebase. That institutional knowledge combined with new AI skills turned out to be more valuable than hiring someone with a perfect ML resume and no context about our business.
What we ended up with is a team that covers six areas: POC delivery (build it, prove it works before anyone writes a check), internal process automation (where we learn fastest because the feedback loop is tight), training and coaching for the broader org, accelerator development to turn successful POCs into repeatable solutions, service development support to embed AI into live client projects, and presales enablement so our sales team walks in with working prototypes instead of PowerPoints.
The thing I didn't expect: building POCs for internal use trains the team faster than any course. Every internal project generates reusable components and gives us proof points we can show clients. The internal work and the client work feed each other.
Choosing where to focus (and where not to)
I made the mistake early on of trying to cover too much. AI is enormous. If you try to be good at everything, you end up being mediocre at all of it. The discipline is knowing what to say no to, at least for now.
We settled on seven focus areas that we think matter most for enterprise AI right now:
Knowledge retrieval and reasoning, for when basic RAG isn't enough and you need GraphRAG, multi-hop reasoning, and retrieval pipelines that actually understand domain context. Agentic AI and multi-agent orchestration, where agents coordinate on complex tasks with human oversight built in. Intelligent workflow automation for processes too messy for traditional RPA but too repetitive to keep doing by hand.
Those three get most of my attention right now. The remaining four are decision intelligence, predictive operations, sovereign and private AI infrastructure (big for clients with data sovereignty requirements), and stateful AI / context engineering, which is the problem of giving agents memory and continuity instead of starting from zero every conversation.
Knowledge retrieval, agentic AI, and workflow automation are where we're seeing the most pull from clients and the fastest proof of value internally.
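To make "multi-hop reasoning" concrete: the answer to a question often lives in a document you can only find after reading another one, so a single retrieval pass misses it. Here's a minimal, purely illustrative sketch of that idea; the keyword matching and corpus are hypothetical stand-ins for the embedding search and LLM query rewriting a real pipeline would use.

```python
# Illustrative sketch of multi-hop retrieval. All functions and data
# here are toy stand-ins, not a real library or our actual pipeline.

def search(query, corpus, top_k=1):
    """Naive keyword retrieval: rank documents by shared terms."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:top_k]]

def multi_hop_retrieve(question, corpus, hops=2):
    """Retrieve, then re-query using what the first hop surfaced."""
    context, query = [], question
    for _ in range(hops):
        remaining = {d: t for d, t in corpus.items() if d not in context}
        context.extend(search(query, remaining))
        # A real system would have an LLM rewrite the query from the
        # retrieved text; here we just append it to the question.
        query = question + " " + " ".join(corpus[d] for d in context)
    return context

corpus = {
    "policy": "refund requests go to the billing team",
    "billing": "the billing team uses the Apex ticket system",
    "hr": "vacation requests are handled by human resources",
}
# Hop 1 finds the policy doc; only then does "billing team" appear in
# the query, letting hop 2 find the doc that names the ticket system.
print(multi_hop_retrieve("who handles refund requests", corpus))
```

The point of the toy: "which ticket system handles refunds" is unanswerable in one retrieval pass, because the document that names the system never mentions refunds.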
Our first real win
The first POC we shipped was an AI-powered proposal generator. If you've worked at a services company, you know that building SOWs and ROMs eats an absurd amount of senior engineer time. Every proposal is different, every client has their own format, and the people writing them are usually the same people you need on billable work.
We built a system that evaluates industry standards, pulls from our previous proposals, cross-references a live pricing catalogue and product catalogue, and generates a rough order of magnitude estimate. What used to take the team three days of back-and-forth now gets a solid first ROM into the client's hands in minutes. Not a final number, but a credible starting point that moves the conversation forward instead of stalling it.
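The shape of that flow is simple to sketch: match the request against known work, price the line items from a catalogue, and emit a range rather than a number. The sketch below is a deliberately simplified illustration; the item names, rates, and keyword matching are all hypothetical (the real system uses retrieval over past proposals, not substring checks).

```python
# Hypothetical sketch of a ROM-generation flow. Catalogue entries,
# rates, and matching logic are illustrative, not real figures.

PRICING = {  # simplified pricing catalogue: hours per item, rate/hour
    "data migration": {"hours": 120, "rate": 150},
    "rag pipeline": {"hours": 200, "rate": 175},
    "dashboard": {"hours": 80, "rate": 140},
}

def match_line_items(request_text):
    """Find catalogue items mentioned in the request. A real system
    would use embedding search over prior proposals instead."""
    return [item for item in PRICING if item in request_text.lower()]

def rough_order_of_magnitude(request_text, uncertainty=0.3):
    """Return a cost range, not a final number: a ROM carries a wide
    band by definition, and a human refines it from there."""
    items = match_line_items(request_text)
    base = sum(PRICING[i]["hours"] * PRICING[i]["rate"] for i in items)
    return {
        "line_items": items,
        "low": round(base * (1 - uncertainty)),
        "high": round(base * (1 + uncertainty)),
    }

rom = rough_order_of_magnitude(
    "Client wants a RAG pipeline plus a reporting dashboard"
)
print(rom)
```

The design choice that matters is the explicit low/high band: presenting a range signals "credible starting point" instead of "final quote," which is exactly the conversation a ROM is supposed to start.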
What mattered more than the time savings was that it proved the model works. We built something, we used it ourselves, the people who used it gave us feedback, the feedback made it better. That loop is the whole reason a CoE exists.
Pitfalls to avoid
A few things I got wrong, or watched others get wrong.
The biggest one: falling in love with a technology before you have a problem. I've seen teams pick a model or framework they're excited about and then go hunting for something to point it at. That's backwards. Find the most expensive, most repetitive workflow in your org first. The AI part comes after.
Then there's the trust problem. Everyone has an opinion about AI, and most of those opinions are shaped by bad experiences with inconsistent output. AI doesn't produce identical results every time. It has its own reasoning, and that makes people nervous. You have to design for that. Build in transparency so users can see why the system made a particular recommendation. Keep humans in charge of final decisions without making the review process so burdensome that it cancels out the time savings. And build systems that actually learn from corrections and feedback, so the output gets better the more people use it. Trust isn't a feature you ship once. It's something you earn over hundreds of interactions.
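Those three design requirements (visible rationale, human final say, learning from corrections) can be captured in a small pattern. This is a toy sketch of the shape, not our implementation; the class names and the question-keyed correction store are invented for illustration.

```python
# Illustrative human-in-the-loop pattern: every recommendation carries
# its rationale and sources, a human approves or corrects it, and
# corrections feed back into future answers. All names hypothetical.

from dataclasses import dataclass, field

@dataclass
class Recommendation:
    answer: str
    rationale: str   # why the system chose this (transparency)
    sources: list    # evidence the user can inspect

@dataclass
class ReviewLoop:
    corrections: dict = field(default_factory=dict)

    def propose(self, question, model_answer, rationale, sources):
        # Apply any past human correction before showing the answer,
        # so the system visibly learns from feedback.
        answer = self.corrections.get(question, model_answer)
        return Recommendation(answer, rationale, sources)

    def review(self, question, rec, approved, corrected_answer=None):
        """The human stays in charge of the final decision."""
        if not approved and corrected_answer:
            self.corrections[question] = corrected_answer

loop = ReviewLoop()
rec = loop.propose("payment terms?", "net 60",
                   rationale="matched 3 prior contracts for this client",
                   sources=["contract-2023.pdf"])
loop.review("payment terms?", rec, approved=False, corrected_answer="net 30")
# The same question now returns the human-corrected answer.
rec2 = loop.propose("payment terms?", "net 60",
                    rationale="same match", sources=["contract-2023.pdf"])
print(rec2.answer)
```

The key is that the rationale and sources ride along with every answer rather than being an audit log you dig up later; that's what lets a reviewer say yes or no in seconds instead of re-deriving the work.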
The POC-to-production gap is another one. A lot of teams skip validation entirely, take a demo that worked in a meeting, and try to run it in production. A failed POC costs two weeks. A failed production rollout costs months and whatever trust the AI team had built.
AI systems get better over time, and most orgs don't budget for that. The first version of anything is rough. You need to plan for iteration, for feedback loops, for the long stretch between "cool demo" and "thing people actually rely on."
And if your CoE only builds for external clients, you're going to struggle. Internal use cases give you tighter feedback and cheaper failures. That's where you learn what works before the stakes go up.
If you're planning your own
I get asked some version of "should we build a CoE?" fairly often now. Here's what I usually say:
Start with one thing. One area of focus, one internal use case, one POC. Ship it. See what happens. Don't write a twelve-month roadmap before you've proven you can deliver one working prototype.
Build on what you already know. If your company is good at healthcare delivery, your CoE should be solving healthcare problems with AI. Don't try to become a general-purpose AI lab. Amplify your existing strengths.
Use your own stuff first. Internal adoption gives you faster feedback, lower stakes, and credibility when you go to clients. It's also the quickest way to find out if your solution actually works when someone who didn't build it tries to use it.
Invest in people over tools. The tools change every few months. The capability you build in your team stays. A CoE is a people bet, not a technology bet.
Where we are now
The CoE at Krish Services is months old, not years. But we're past the "is this going to work?" phase. We've shipped the proposal generator to production use, we've got multiple internal automation projects at various stages, and we're spending a lot of time just watching how our own teams use these systems day to day. That part has been more educational than the building itself, honestly. You see where people trust the output and where they don't. You see what makes someone abandon a feature after two tries versus adopt it into their daily routine. We're taking those observations straight into client work now, and they're more useful than anything we could have learned from a vendor webinar.
I'm writing about it now, at the beginning, because I think the "here's what we learned along the way" version is more useful than the retrospective where everything sounds like it was planned. It wasn't. We started because the alternative was falling behind, and we're learning as we go.
If you're working on something similar, reach out. The best ideas I've stolen so far came from other people figuring this out at their own shops.


