The situation
The telecom client had a clear use case for AI: surfacing engineering knowledge from documentation and historical tickets to accelerate operations and reduce repeat investigations. The constraints were equally clear: data could not leave their perimeter, cost had to be controlled, and the system had to integrate with existing automation rather than become a parallel silo.
What we did
We built the AI workflow the way we build operational automation: with observability, evaluation, cost controls, and human-in-the-loop review where it matters. Model serving runs in the client's private cloud on vLLM; a policy layer decides whether a request may fall back to a cloud LLM at all. The agent layer is wired into the existing automation systems already delivered at L2.
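To make the routing decision concrete, here is a minimal sketch of such a policy gate. The endpoint addresses, classification labels, and pricing figures are illustrative assumptions, not the deployed implementation:

```python
from dataclasses import dataclass

LOCAL_VLLM_URL = "http://vllm.internal:8000/v1"    # private-cloud vLLM (assumed address)
CLOUD_FALLBACK_URL = "https://api.example.com/v1"  # cloud LLM, used only if policy allows

@dataclass
class Request:
    prompt: str
    data_classification: str  # e.g. "public" | "internal" | "restricted" (assumed labels)
    est_tokens: int

def route(req: Request, spend_usd: float, budget_usd: float,
          usd_per_1k_tokens: float = 0.01) -> str:
    """Decide which endpoint may serve this request.

    Rules, in priority order:
      1. Non-public data never leaves the perimeter.
      2. Cloud fallback is allowed only while under budget.
    """
    if req.data_classification != "public":
        return LOCAL_VLLM_URL  # rule 1: sensitive data stays inside
    projected = spend_usd + req.est_tokens / 1000 * usd_per_1k_tokens
    if projected > budget_usd:
        return LOCAL_VLLM_URL  # rule 2: over budget, serve locally
    return CLOUD_FALLBACK_URL

req = Request(prompt="Why does cell site X flap at night?",
              data_classification="internal", est_tokens=800)
print(route(req, spend_usd=42.0, budget_usd=100.0))  # -> local vLLM URL
```

The same gate is a natural place to attach the per-request cost accounting and logging that the observability layer consumes.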
The result is not a chatbot. It is a tool that takes specific actions, leaves an audit trail, and behaves in ways an engineer can reason about.
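As an illustration of the audit-trail idea (the wrapper, action names, and log shape below are hypothetical, not the real system):

```python
import functools
import json
import time

def audited(action_name: str, audit_log: list):
    """Wrap a tool action so every invocation is recorded, success or failure."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            record = {"action": action_name,
                      "args": repr((args, kwargs)),
                      "started_at": time.time()}
            try:
                result = fn(*args, **kwargs)
                record["status"] = "ok"
                return result
            except Exception as exc:
                record["status"] = f"error: {exc}"
                raise
            finally:
                record["finished_at"] = time.time()
                audit_log.append(record)
        return wrapper
    return decorator

audit_log: list = []

@audited("restart_probe", audit_log)
def restart_probe(host: str) -> str:
    # Placeholder for an action wired into the existing L2 automation.
    return f"probe restarted on {host}"

restart_probe("edge-router-7")
print(json.dumps(audit_log, indent=2))
```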
Why this matters
Most AI projects do not survive past pilot, and the reasons are predictable: no evaluation, no cost control, no integration path into existing systems. L3 is the discipline of treating AI in production as a systems problem, not a model problem. Done right, it compounds the value of the L2 systems that fed it the data in the first place.
This engagement is in development. A detailed write-up will be published as the production rollout progresses.