FinOps Foundation Insights

AI for FinOps: Agentic Use Cases in FinOps

March 27, 2026 | Article: 10-minute read

Key Insight: As organizations move beyond the initial excitement of generative AI models, proactive, autonomous agents are transforming how some FinOps practitioners manage technology value and operational efficiency. In this piece, Jonathan Morley of the FinOps Foundation shares direct insights from advanced FinOps practitioners and explores what these AI-focused conversations mean for their practices.


Part of my role at the FinOps Foundation is to stay close to the voice of the practitioner. Through our training delivery, our FinOps Certified Professional program, our advanced practitioner community calls, and the ongoing work shaping the FinOps Framework, I’m in regular conversation with people doing the work of FinOps every day. In the lead-up to FinOps X in San Diego, one theme has been a steady hum: the emergence of Agentic AI.

For years, FinOps has relied on reactive reporting and manual intervention to manage technology value. What I’m hearing now is different. Practitioners are beginning to move beyond just using AI to summarize data or answer questions: they’re building systems that autonomously iterate, investigate, and—in some cases—execute actions on their behalf.

Where Generative AI is primarily reactive, Agentic AI is proactive and iterative, capable of using tools across the technology ecosystem to create specific outcomes. According to the State of FinOps 2026, FinOps for AI is the top forward-looking priority for teams, with 98% of FinOps practices managing AI spend.

This evolution is being driven by the sheer complexity and scale of modern environments. As organizations ingest massive amounts of data to fuel AI initiatives, the cost of the underlying data infrastructure is growing fast. Advanced practitioners are now exploring how agents can bridge the gap between identification and action—to connect their business, engineering, and finance teams in ways that weren’t possible before.

What Advanced Practitioners Are Building—and Why It Matters

In my conversations, this transition is often described as a “quantum leap” in how practitioners interact with data. Rather than spending hours in spreadsheets or complex dashboards, teams are beginning to use “coding companions” and “FinOps agents” that provide expert-level perspective in real time.

Andrew Feig, Managing Director, FinOps Strategy and Practice Lead for Global Technology at JPMorgan Chase & Co., recently experimented with a FinOps coding companion. He shared his experience on LinkedIn:

Been having lots of discussions of what it means to shift FinOps left and have had some traction with shifting cost awareness and recommendations in that direction. But still doesn’t feel like it’s going to really have the impact it did when we started talking about it.

For me, that changed today. I finally had some time and spent a few hours playing with Claude Code and how it could help the world of FinOps. Claude Code as a FinOps pair programmer.

The use cases span a wide range of complexity, but they tend to cluster around a few recurring themes: eliminating manual data work, discovering waste autonomously, enforcing policy earlier in the development lifecycle, and driving action through personalized outreach. Here’s what I’m seeing and the architectural patterns behind it.

Natural Language Dashboards and Financial Reconciliation: A recurring theme across maturing practices is the elimination of manual data manipulation. Organizations are experimenting with agents that ingest raw financial data (Excel budgets, quarterly forecasts, etc.) and analyze them through natural language interfaces. These agents compare actual spend against approved budgets, identify deviations above specific thresholds, and generate specialized dashboards on demand without a human ever opening a spreadsheet. Some teams are already using this approach to read quarterly budget cycles and automatically produce delta reports across multiple technology providers.

A Director of Cloud at a Danish media company said:

We could easily compare budgets, the deviations from the approved budget, the latest estimate, and the forecast, just with natural language… The AI processed all the spreadsheets, made all the calculations, and then created another spreadsheet we could then pass on to finance.
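The reconciliation step those agents perform can be sketched in a few lines. This is a minimal, hypothetical illustration (provider names, amounts, and the 10% threshold are all assumptions, not drawn from any specific practice): compare actual spend against approved budgets and surface only the deviations worth escalating.

```python
def budget_deltas(budgets, actuals, threshold_pct=10.0):
    """Return per-provider deltas whose deviation exceeds threshold_pct."""
    report = []
    for provider, budget in budgets.items():
        actual = actuals.get(provider, 0.0)
        delta = actual - budget
        pct = (delta / budget) * 100 if budget else float("inf")
        if abs(pct) >= threshold_pct:
            report.append({
                "provider": provider,
                "budget": budget,
                "actual": actual,
                "delta": delta,
                "deviation_pct": round(pct, 1),
            })
    # Largest overruns first, so the hand-off to finance leads with them
    return sorted(report, key=lambda r: r["delta"], reverse=True)

# Illustrative quarterly figures across multiple technology providers
budgets = {"aws": 100_000, "gcp": 40_000, "snowflake": 25_000}
actuals = {"aws": 121_500, "gcp": 41_200, "snowflake": 19_000}
print(budget_deltas(budgets, actuals))
```

In the agentic version, the natural language interface sits in front of logic like this: the practitioner asks for "deviations above 10% this quarter" and the agent assembles the inputs from the raw spreadsheets itself.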

Autonomous Waste Discovery: Several practitioners shared stories about moving beyond static alerts toward what they call “agentic waste discovery.” In these scenarios, an agent doesn’t just flag an underutilized resource. It surgically investigates the resource, finds the associated tags or resource IDs, identifies the appropriate owner, and then automatically creates and assigns a Jira ticket for underutilized cache clusters or orphaned resources. One team described reducing initial investigation time from 15 minutes per ticket to essentially zero. Behind the scenes, the more sophisticated implementations use an “orchestrator agent” with semantic routing to delegate specific tasks to specialized “consultant agents” (think: a governance engine, a cost anomaly detector, a tagging policy expert), allowing the system to provide holistic reports or real-time advice depending on the context of the query.
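The orchestrator pattern described above can be sketched as follows. This is a simplified illustration under stated assumptions: real implementations use embedding-based semantic routing, for which keyword overlap stands in here, and the consultant names and keyword lists are invented for the example.

```python
class ConsultantAgent:
    """A specialized agent the orchestrator can delegate to."""

    def __init__(self, name, keywords):
        self.name = name
        self.keywords = set(keywords)

    def score(self, query):
        # Crude relevance score: keyword overlap with the query
        return len(set(query.lower().split()) & self.keywords)

    def handle(self, query):
        return f"[{self.name}] handling: {query}"


class Orchestrator:
    """Routes each query to the most relevant consultant agent."""

    def __init__(self, consultants):
        self.consultants = consultants

    def route(self, query):
        best = max(self.consultants, key=lambda c: c.score(query))
        return best.handle(query)


orchestrator = Orchestrator([
    ConsultantAgent("governance", ["policy", "guardrail", "compliance"]),
    ConsultantAgent("anomaly", ["spike", "anomaly", "unexpected", "cost"]),
    ConsultantAgent("tagging", ["tag", "label", "owner", "untagged"]),
])
print(orchestrator.route("who is the owner of this untagged bucket"))
```

The value of the pattern is that each consultant stays narrow and testable, while the orchestrator decides whether a query needs one specialist's real-time advice or a holistic report stitched together from several.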

Proactive Guardrails in the CI/CD Pipeline: Another pattern is integrating agents directly into the developer workflow, specifically within pull requests. Instead of waiting for a resource to be provisioned and then flagging it as a violation, background agents check the configuration against FinOps best practices and guardrails before the infrastructure is even deployed. This “shift-left” approach makes cost and policy implications visible at the point of decision, not after the fact.
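A pre-deployment guardrail check of this kind might look like the sketch below. The policies, tag names, and resource shapes are hypothetical; the point is that the check runs against the *planned* configuration in a pull request, before anything is provisioned.

```python
# Illustrative FinOps guardrails (assumptions, not a standard policy set)
REQUIRED_TAGS = {"owner", "cost-center", "environment"}
APPROVAL_REQUIRED_TYPES = {"p4d.24xlarge", "u-24tb1.metal"}

def check_resource(resource):
    """Return a list of policy violations for one planned resource."""
    violations = []
    missing = REQUIRED_TAGS - set(resource.get("tags", {}))
    if missing:
        violations.append(f"missing tags: {sorted(missing)}")
    if resource.get("instance_type") in APPROVAL_REQUIRED_TYPES:
        violations.append("instance type requires FinOps approval")
    return violations

# A hypothetical deployment plan parsed from a pull request
plan = [
    {"name": "web", "instance_type": "m5.large",
     "tags": {"owner": "team-a", "cost-center": "42", "environment": "prod"}},
    {"name": "train", "instance_type": "p4d.24xlarge", "tags": {"owner": "ml"}},
]
for resource in plan:
    for violation in check_resource(resource):
        print(f"{resource['name']}: {violation}")
```

In an agentic setup, a background agent would post these violations as review comments on the pull request, making the cost and policy implications visible at the point of decision.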

Personalized Outreach and Gamification: To improve action rates on optimization recommendations, some teams are using agents to send personalized messages through tools like Slack. In one case, agents pull usage data to identify the last person who interacted with a specific service and send them a tailored note about the cost implications of their resources.

A longtime FinOps practitioner at a North American technology company stated:

We pull CloudTrail data, figure out who has logged into that account last… and then we are able to send them a custom message… We saw 40%, almost 50% action rate on the messages that we send out via Slack.

To further drive engagement, these systems can automatically notify managers when a team member takes an action that saves the company money and create a positive feedback loop for FinOps maturity.
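The core of that workflow, stripped to its essentials, is a two-step: find the most recent actor per account, then draft a tailored note. The sketch below assumes activity events have already been exported (e.g. from CloudTrail) into simple records; the field names, figures, and the message wording are illustrative, and a real system would deliver the note via the Slack API rather than printing it.

```python
from datetime import datetime

def last_actor_per_account(events):
    """Map each account to the user who was most recently active in it."""
    latest = {}
    for event in events:
        ts = datetime.fromisoformat(event["time"])
        acct = event["account"]
        if acct not in latest or ts > latest[acct][0]:
            latest[acct] = (ts, event["user"])
    return {acct: user for acct, (ts, user) in latest.items()}

def draft_message(user, account, monthly_cost):
    """Compose the personalized outreach note (hypothetical wording)."""
    return (f"Hi {user}, you were the last person active in {account}. "
            f"It is currently costing ~${monthly_cost:,.0f}/month; "
            f"could you confirm it's still needed?")

# Illustrative exported activity events
events = [
    {"account": "sandbox-7", "user": "dana", "time": "2026-03-01T09:00:00"},
    {"account": "sandbox-7", "user": "lee", "time": "2026-03-10T14:30:00"},
]
owners = last_actor_per_account(events)
print(draft_message(owners["sandbox-7"], "sandbox-7", 1840))
```

The gamification layer sits on top: when the recipient acts on the note, the same system can notify their manager of the savings, closing the feedback loop.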

Security Score Improvement: Agents are also being used to create pull requests that automatically address cloud security findings. While the exact hours saved can be hard to quantify, the tangible improvement in visible security scores gives leadership a clear metric of progress. FinOps practitioners in media companies see these efficiencies in action:

We enable, through Gen AI, the ability to create PRs directly on the product team’s repository that address some of those findings… suddenly, the security scores are changing.

Contextual Resource Labeling: Practitioners are exploring agents that contextually understand unlabeled resources by analyzing when they run and what other services they connect to, enabling better showback and accountability.
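One simple heuristic behind that idea can be sketched directly: infer a team label for an unlabeled resource from the labels of the services it talks to. This is a hypothetical illustration, with invented service names and a majority-vote rule standing in for the richer contextual analysis (run schedules, traffic patterns) an agent would actually use.

```python
from collections import Counter

def infer_label(connections, known_labels):
    """Guess a team label for an unlabeled resource.

    connections: service names the resource communicates with.
    known_labels: service name -> team label for labeled services.
    """
    votes = Counter(known_labels[s] for s in connections if s in known_labels)
    if not votes:
        return None
    label, count = votes.most_common(1)[0]
    # Only assign a label when a clear majority of connections agree
    return label if count / len(connections) > 0.5 else None

known = {"orders-db": "commerce", "checkout-api": "commerce",
         "ml-featurestore": "data"}
print(infer_label(["orders-db", "checkout-api", "ml-featurestore"], known))
```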

Candidate and Management Fit: Perhaps the most unexpected use case came from a Senior Director of Engineering at a Brazilian financial services company, who watched a manager use an AI agent to assess internal transfer candidates by comparing their writing style and experience against the specific scope of responsibilities for her team.

A manager used Claude to assess candidates in the internal transfer portal as to which one would be a best fit for her scope of responsibilities… and then she also asked which ones would be most suitable to her style of management based upon how they write.

Implications for FinOps Teams

As practices continue to mature, Agentic AI will complement and, in some cases, reshape the fundamental nature of a FinOps practitioner’s role. The shift, at least in these early days, seems to be from “doing the work” to “orchestrating the workers.”

This is something I think about a lot in the context of education and career development: If the tools are changing this fast, how do we prepare practitioners for what’s next? What skills become more important, and which ones get absorbed by the agents themselves? These are questions we’ll be grappling with as we evolve the FinOps Framework.

The Trust Gap and Human-in-the-Loop

Despite the technological capabilities, a clear “trust gap” remains around autonomous action. Most organizations are not yet comfortable allowing agents to make production changes or delete resources without human approval. Practitioners emphasize that agents currently serve as co-designers or companions, not autonomous decision-makers. The inaccuracies inherent in current models (what some call “slop”) mean that verification is still a critical step in any agentic workflow.

The Innovation Value Paradox

A major challenge for FinOps leaders is managing the conflict between cost governance and AI creativity. Executives are increasingly demanding that AI projects demonstrate a positive Net Present Value (NPV), yet strict business case requirements can stifle the very experimentation needed to find high-value use cases. Practitioners have observed that when organizations become too focused on proving the value of every minor experiment, innovation often stops entirely.

But the volume and variety of AI opportunities still require some level of governance. Relaxing some of the constraints of typical business-case funding can let creativity thrive within those that remain. Funding initiatives for shorter periods with more frequent reviews, and easing business-case requirements for earlier-stage investments, allow for more flexible experimentation.

New Unit Economics: Cost per Thought

As agents perform more complex reasoning tasks, FinOps teams must begin to think about new unit economic metrics, such as the “cost per thought” or a per-project token budget. Understanding the API costs associated with running these agents is becoming as important as understanding the cost of the underlying compute resources.
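As a concrete sketch of what that metric looks like, the snippet below totals the API token spend across the model calls that make up one agent task. The per-1K-token prices are placeholders, not any provider's actual rates, and the call counts are invented for illustration.

```python
def cost_per_task(calls, price_in_per_1k, price_out_per_1k):
    """API cost of one agent task.

    calls: list of (input_tokens, output_tokens), one tuple per model call.
    Prices are dollars per 1,000 tokens (placeholder values below).
    """
    total = sum(tokens_in / 1000 * price_in_per_1k
                + tokens_out / 1000 * price_out_per_1k
                for tokens_in, tokens_out in calls)
    return round(total, 4)

# A hypothetical waste-discovery task that took three model calls
task_calls = [(4_000, 800), (12_000, 1_500), (2_500, 400)]
cost = cost_per_task(task_calls, price_in_per_1k=0.003, price_out_per_1k=0.015)
print(f"cost per task: ${cost}")
```

Divide that figure by the tickets resolved or reports produced, and it becomes a unit economic metric that can be tracked and budgeted per project like any other.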

Looking Ahead

Agentic AI is moving pieces of FinOps from observation to orchestration. By leveraging agents to handle the long tail of waste discovery, financial reporting, and policy enforcement, teams can finally address the cognitive burden that has historically limited FinOps scalability.

The path forward requires balance. FinOps leaders must foster a culture of experimentation while simultaneously building the change management processes needed to transition from “agent-assisted” to “agent-executed” operations.

As these systems grow more sophisticated, I believe the value of the FinOps practitioner will increasingly lie in their ability to define the guardrails and objectives within which agents operate. That’s not a diminished role; rather, it’s an elevated one. And it’s something we’ll be building into how we educate and support the FinOps community for the future.

Topics

  • FinOps Foundation Perspectives
Related assets

Unlocking AI Business Value with FinOps


Choosing an AI Approach and Infrastructure Strategy


Managing AI Value Using FinOps Practice Operations