As we gear up for 2025, I've been doing a lot of thinking about the advancements in generative AI over the past year. It feels like over the next twelve months we could see a shift in how we approach data, interact with AI, and ensure that we're doing it all in an ethical and responsible way. I've got some thoughts on why data quality is about to become a major focus, the ways agentic AI could change our daily interactions with technology as a whole, and why we need to make sure we're handling these systems with care. So, let's dive in and take a look at what I'm seeing on the horizon for 2025.
It’s All About the Data
The phrase “data is the new oil” has been commonplace over the past decade or so, but I predict 2025 is when we’ll actually start seeing it play out. As more organizations adopt and build generative AI systems, the need to ensure those systems are ingesting high-quality data becomes increasingly important.
The shift towards quality data isn't about having more data; it's about having better data. Generative AI models are only as good as the data they're trained on. In 2025, we’ll see a focus on data quality initiatives. This means organizations will be investing heavily in data cleaning, validation, and enrichment processes, moving away from simply hoarding vast quantities of information to curating high-value datasets. This commitment to quality will be a key differentiator for AI performance and the success of these systems.
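To make the cleaning-and-validation idea concrete, here is a minimal sketch of a curation step. The `Record` type, its field names, and the specific quality gates are all illustrative assumptions, not tied to any particular pipeline; the point is that records are validated and deduplicated before they ever reach a model.

```python
from dataclasses import dataclass

# Hypothetical record type; field names are illustrative only.
@dataclass
class Record:
    text: str
    source: str
    timestamp: str  # ISO 8601, e.g. "2025-01-15"

def is_valid(record: Record) -> bool:
    """Basic quality gates: non-empty, attributed, reasonably sized."""
    return (
        bool(record.text.strip())
        and bool(record.source)
        and len(record.text) >= 20  # drop fragments too short to be useful
    )

def curate(records: list[Record]) -> list[Record]:
    """Keep only records that pass validation, deduplicating by text."""
    seen: set[str] = set()
    kept: list[Record] = []
    for r in records:
        key = r.text.strip().lower()
        if is_valid(r) and key not in seen:
            seen.add(key)
            kept.append(r)
    return kept
```

Real curation pipelines add enrichment, schema checks, and provenance tracking on top of gates like these, but even this small filter shows the shift from hoarding data to deliberately deciding what gets in.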
Furthermore, the role of data stewards will become increasingly crucial. These individuals (or perhaps even AI agents; more on this in a moment) will be responsible for the lifecycle management of data – from its creation and collection, through its storage and use, to its eventual archiving or removal. Good data stewardship is about more than just governance; it’s about ensuring data is accurate, consistent, secure, and used ethically and responsibly within the context of AI projects. Without solid data stewardship, even the most sophisticated AI models will struggle, producing biased, inaccurate, or even harmful outcomes.
One conversation I've had recently is about using AI to help employees with health insurance, like asking “What are my vision benefits?” and “What in-network eye doctors are near me?” Ingesting your health insurance policy into such a system may not seem like an arduous task. But your policies will change over time. New policies will need to be added, and expired policies will need to be removed. We can’t simply feed AI a bunch of data and call it a day; we need processes for the removal of data as well. This illustrates the need for effective data lifecycle management programs.
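The add-and-remove cycle above can be sketched with a tiny in-memory store. This is an assumption-laden toy (a real system would back a vector store or document index, and `PolicyStore` and its methods are names I've made up), but the lifecycle logic is the same: every document carries an effective window, and a purge step retires anything whose window has closed.

```python
from datetime import date

class PolicyStore:
    """Toy knowledge base of policy documents with effective-date windows."""

    def __init__(self) -> None:
        # doc_id -> (text, effective_from, effective_to)
        self._docs: dict[str, tuple[str, date, date]] = {}

    def add(self, doc_id: str, text: str,
            effective_from: date, effective_to: date) -> None:
        self._docs[doc_id] = (text, effective_from, effective_to)

    def purge_expired(self, today: date) -> list[str]:
        """Remove policies whose coverage window has ended; return their ids."""
        expired = [d for d, (_, _, end) in self._docs.items() if end < today]
        for d in expired:
            del self._docs[d]
        return expired

    def active(self, today: date) -> list[str]:
        """Ids of policies currently in effect."""
        return [d for d, (_, start, end) in self._docs.items()
                if start <= today <= end]
```

Running `purge_expired` on a schedule is the sort of unglamorous process that keeps the “What are my vision benefits?” answer from quoting last year's policy.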
Agentic AI
The way we interact with AI systems is set to change as the popularity of agentic AI increases. But what is agentic AI?
Generative AI like ChatGPT, Gemini, or Claude is kind of like a chef in a kitchen. You give them an instruction, and they can whip up a delicious meal. That’s where the “generative” in generative AI comes from; they generate content. But an agentic AI system is more like the restaurant manager, who not only creates new recipes but also decides what ingredients to buy, how to set the menu, and how to coordinate the kitchen staff. Agentic AI systems are focused on goals and outcomes, and they can act autonomously to achieve those goals with minimal human interaction.
While agentic AI systems leverage the creativity of generative AI models such as ChatGPT, they differ in several ways. First, these systems prioritize decision-making over content generation. Second, they operate autonomously, driven by predefined goals – like boosting revenue, improving customer feedback, or streamlining logistics – rather than requiring constant human instruction. And finally, these systems possess a more sophisticated ability to handle complex, multi-step processes, navigating data sources and initiating workflows without manual intervention.
Because agentic AI systems are designed to carry out specific, granular tasks, they enable greater specialization of roles compared to general-purpose models like ChatGPT. Some agentic systems will have an “orchestrator,” which can be thought of as the main agent that interacts with and instructs all of its downstream agents. Imagine this: An AI system is monitoring a salesperson's calendar, and before each meeting, it automatically prepares a brief with information about that customer. And not just any information – relevant information, like past purchase history, previous interactions with their team, and even business trends in their industry.
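The orchestrator pattern in that meeting-brief scenario can be sketched as follows. Everything here is hypothetical: the agent names and their canned return strings are stand-ins for what would, in a real system, be LLM calls or CRM/API lookups. The structural point is that one main agent fans the goal out to specialized downstream agents and assembles their answers.

```python
from typing import Callable

# Stub "downstream agents": each handles one granular task.
# In a real system these would query a CRM, an inbox, or an LLM.
def purchase_history_agent(customer: str) -> str:
    return f"{customer}: 3 orders in the last quarter"

def interaction_agent(customer: str) -> str:
    return f"{customer}: last contacted 2 weeks ago"

def industry_agent(customer: str) -> str:
    return f"{customer}: sector trending upward"

class Orchestrator:
    """Main agent that delegates to downstream agents and assembles a brief."""

    def __init__(self, agents: dict[str, Callable[[str], str]]) -> None:
        self.agents = agents

    def prepare_brief(self, customer: str) -> dict[str, str]:
        # Fan the goal ("prepare a meeting brief") out to each specialist
        # and collect their contributions by name.
        return {name: agent(customer) for name, agent in self.agents.items()}
```

The design choice worth noticing is that the orchestrator knows nothing about how each specialist does its job; it only routes the goal and gathers results, which is what lets you add or swap agents without touching the rest of the system.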
Responsible AI
“With great power comes great responsibility.” In 2025, organizations will no longer be able to treat ethical AI as an afterthought; it will become a core component of any AI strategy. This means proactively establishing clear policies to guide AI development, encompassing data privacy, transparency, and bias mitigation. This will require not just technical solutions but also a broader ethical commitment across all levels of an organization.
Beyond ethical considerations, robust governance of AI systems will be critical. This includes establishing clear lines of responsibility for AI outputs, implementing testing and validation processes to ensure the reliability of AI models, and setting up clear mechanisms for recourse when things go wrong. We can’t just deploy AI and hope for the best; we have to be good stewards of it. We'll see a growing need for specialized roles, such as AI ethicists and AI risk managers, who will be tasked with ensuring that AI is used responsibly and for the benefit of all stakeholders. The path to realizing the full potential of AI is paved with careful planning, robust oversight, and a dedication to building systems that are not just powerful but also trustworthy and ethical.
I've shared my predictions, but now I'm curious – what are you seeing on the horizon for AI?
Jason Clishe
Jason is a Google Cloud Platform solutions architect at CDW with 30 years of IT experience spanning delivery consulting, partner enablement, and sales engineering. His expertise includes cloud technologies, generative AI, and on-premises solutions, and he leads the design of innovative generative AI solutions on GCP Vertex AI.