
I spent five days at Microsoft Ignite last week (including the pre-day workshop), and left with some good ideas, answers to my questions, and a lot of things to follow up on.
If last year was all about Microsoft 365 Copilot, this year was about agents. Intelligent agents are nothing new, but what is interesting about this new agentic world is that these agents assume the use of large or small language models (LLMs or SLMs), even when language isn’t their primary task. An intelligent agent is, at its core, an application that can perform a task autonomously. You ask it to do something, like summarize a transcript, run a piece of code, or even just tell you what time it is, and it will do it. You don’t have to know how it works, and you don’t necessarily know when it will come back with a response, but you can generally trust that the task will happen. This is a lot like services or APIs in the software world. By adding language models into the mix, though, we can orchestrate and communicate between agents using natural language, which has incredible potential for intelligent systems.
While APIs are useful, they require a well-defined contract between endpoints. They expect a specifically formatted request and will deliver a specifically formatted response. A developer needs to know both in order to use the service, and if that contract changes, any consumers of that service need to update their code to match. LLM interfaces change that, making these definitions far more flexible. For example, I could ask a time agent “What time is it?” and it could reply “4:30 pm”, or perhaps “4:30 pm on Nov 24, 2025”, or even “Half four” if the agent is particularly British. Because the response is also intended to be processed by an LLM, those variants would all likely be handled, and the agent would have done its job.
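As a rough illustration of the difference (all names here are hypothetical, not from any Ignite session), compare a strict API contract with an agent-style natural-language exchange:

```python
# A strict API contract: the response schema is fixed, and consumers
# break if it ever changes.
def time_api() -> dict:
    return {"hour": 16, "minute": 30, "tz": "UTC"}

# An agent-style interface: free-form natural language in, free-form
# natural language out. Any of the phrasings below would do the job,
# because the consumer is itself an LLM that can interpret them.
def time_agent(request: str) -> str:
    return "4:30 pm on Nov 24, 2025"  # or "4:30 pm", or "Half four"
```

The looser contract is the point: the time agent can change how it phrases its answer without breaking anything downstream.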
What makes agents exciting is the way they can extend the capabilities of intelligent systems modularly, implementing discrete pieces of functionality so that you don’t need to build one all-knowing, all-encompassing AI. There also continue to be features that enhance ice without any product changes on our part. It’s already possible to use Microsoft 365 Copilot to analyze an ongoing call in Teams, so I suspect that the new interpreter (real-time translation) and content understanding (Copilot accessing screen share content) features will work out of the box as well once they’re released.
Just having agents means nothing without a way to find them, though, and that is where an agent orchestrator comes in. An agent orchestrator is really the “front door” for any AI conversation and is responsible for knowing how to handle a request, including which AI agents need to be involved. This means that it needs to know what agents exist and what they can do. Adding to this complexity, an agent may itself orchestrate other agents, making for a graph that starts to look like an organizational chart.
At Ignite, several orchestration methods were discussed, though all of them were covered fairly briefly. Obviously, Microsoft 365 Copilot was held up as an example of an orchestrator, but there were other solutions that applied to custom applications. In the ideal case, agents would be orchestrated through natural language descriptions of their capabilities, for example “this agent returns the current time” for a time agent. Some examples used a more manual orchestration flow, though, explicitly invoking things like a researcher agent, an annotation agent, and a personalization agent before composing a final response. Both architectures have their place, and will likely seem very familiar to anyone who has spent time with contact centers. In ice, workflow is the agent orchestration platform, and the agents are either workflows, building blocks, or human agents, with rules that define when each gets involved in a service request.
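The description-based style can be sketched in a few lines. This is a hypothetical toy, with simple word overlap standing in for the LLM that a real orchestrator would use to match a request against each agent’s capability description:

```python
from typing import Callable

class Orchestrator:
    """Routes a natural-language request to the best-matching agent."""

    def __init__(self) -> None:
        # Each entry pairs a natural-language capability description
        # with the callable agent that provides it.
        self.agents: list[tuple[str, Callable[[str], str]]] = []

    def register(self, description: str, agent: Callable[[str], str]) -> None:
        self.agents.append((description, agent))

    def handle(self, request: str) -> str:
        # Stand-in for LLM-based routing: pick the agent whose
        # description shares the most words with the request.
        words = set(request.lower().split())
        def overlap(entry: tuple[str, Callable[[str], str]]) -> int:
            return len(words & set(entry[0].lower().split()))
        _, agent = max(self.agents, key=overlap)
        return agent(request)

orchestrator = Orchestrator()
orchestrator.register("this agent returns the current time", lambda req: "4:30 pm")
orchestrator.register("this agent summarizes a transcript", lambda req: "Here is a summary...")
# "What time is it right now?" routes to the time agent.
```

The same shape scales up: an “agent” registered here could itself be another orchestrator, which is how the organizational-chart graph emerges.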
Aside from all of the agentic architecture stuff, I also learned more about how multi-LoRA fine-tuning could be applied to ingest domain knowledge and jargon. More importantly, I talked to some experts about best practices for how and when a fine-tuning approach makes sense as opposed to a retrieval-augmented generation (RAG) one. Of course, adding agents to the picture makes for a “why not both” approach, and actually meshes well with what our team has been experimenting with already.
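For reference, the retrieval half of that trade-off looks something like this sketch (a toy, with word overlap standing in for a real vector search, and placeholder snippets in the knowledge base):

```python
# Minimal RAG sketch: retrieve the most relevant snippet from a small
# knowledge base, then hand it to the model as context. A fine-tuned
# model would instead bake this knowledge into its weights.
KNOWLEDGE = [
    "ice Contact Center routes service requests using workflows.",
    "Building blocks are reusable workflow components.",
]

def retrieve(query: str) -> str:
    # Stand-in for embedding similarity: pick the snippet that shares
    # the most words with the query.
    words = set(query.lower().split())
    return max(KNOWLEDGE, key=lambda s: len(words & set(s.lower().split())))

def answer(query: str) -> str:
    context = retrieve(query)
    # A real system would send this prompt to an LLM; here we just
    # return the assembled prompt to show the pattern.
    return f"Context: {context}\nQuestion: {query}"
```

The “why not both” version simply puts a RAG agent and a fine-tuned agent behind the same orchestrator and lets it pick.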
Additionally, I got some good exposure to using toolchains in generative AI apps (which function like agents themselves) that I hadn’t really had a chance to experiment with before, inspiring ideas for future ice Contact Center features.
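The basic toolchain pattern is a loop like the sketch below. The model here is a hard-coded stub and every name is hypothetical; a real app would call an actual LLM and pass it the tool definitions:

```python
import json

# Registry of tools the model is allowed to call.
TOOLS = {
    "get_time": lambda args: "4:30 pm",
    "add": lambda args: str(args["a"] + args["b"]),
}

def fake_model(prompt: str) -> str:
    # Stand-in for an LLM: returns either a JSON tool call or, once a
    # tool result is in the prompt, a final natural-language answer.
    if "tool result:" not in prompt:
        return json.dumps({"tool": "add", "args": {"a": 2, "b": 2}})
    return "The answer is 4."

def run(prompt: str) -> str:
    reply = fake_model(prompt)
    for _ in range(5):  # bound the number of tool-call rounds
        try:
            call = json.loads(reply)
        except json.JSONDecodeError:
            return reply  # plain text means the model is done
        result = TOOLS[call["tool"]](call["args"])
        prompt += f"\ntool result: {call['tool']} returned {result}"
        reply = fake_model(prompt)
    return reply
```

Each tool behaves a lot like an agent: the model decides when to call it, the app executes it, and the result is fed back into the conversation.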
There are also some cool new features worth exploring further.
One of the things that’s great about a conference like Ignite is that it’s a chance to spend a week talking to interesting people, learning new things, and thinking about the future of our business. Being in a different location with new stimuli always prompts new ideas, and I’ll come back next year with a good list of things that I want to try out.
Coming out of this year’s show, I’m both confident that ComputerTalk is in a great position with respect to our AI features, and excited about what we’re going to be able to do next.