FinOps Scopes defined for AI focus on addressing cost complexity, faster development cycles, spend unpredictability, and the need for a greater degree of policy and governance to support innovation through allocation, forecasting, and optimization decisions that align consumption, investment, and business value.
FinOps Scopes: Considerations for AI
The decision to differentiate the FinOps practice into a distinct Scope is driven by the need to address key differences in the expectations or outcomes desired from spending within the lens of that Scope. AI represents a massive new technology category of spend, portions of which will likely merit the creation of new Scopes to manage effectively.
AI cost and usage is not only new to many organizations and very granular, but also tends to transcend technology category boundaries – with investments in the data center, enterprise agreements with AI companies, SaaS products, startup AI model vendors, AI-based neo-clouds, and, of course, multiple hyperscale cloud providers.
Given the high visibility of AI adoption in many organizations, it’s likely that the FinOps team managing AI will have to answer critical questions from a broader perspective and to a more senior audience than for traditional cloud. Expectations may be different – faster insights, more emphasis on innovation than governance, etc. – especially during early experimentation with some AI use.
Many AI services are cloud-like, and include cloud costs as components of spending, but there are key differences that make it important in many organizations to differentiate the FinOps practice into a Scope that focuses attention on key AI spending. Not all AI spending must be included in a new Scope, but leading organizations have high expectations to match their high spending on AI and may want more rigorous management.
When considering the needs and expected outcomes that FinOps can support, AI spending is very similar to any new technology spending.
- There are few well-known technology architecture blueprints to follow yet, so spending will be more varied across technology stacks and services
- Experimentation is higher, so the duration of use for any service or resource may be shorter or more sporadic
- The likelihood of anomalous spending will increase as teams experiment
- There are a higher number of vendors, tools, and services available for use as new companies move to expand into the rapidly growing AI marketplace
- The mechanisms for purchasing services and products are not yet well-established, and the channels used will be more diverse – marketplace, direct purchases, online and GitHub acquisition, new “AI” SKUs appearing in existing enterprise agreements – all of which create work for Procurement to assess
- The volume of projects and requests for investment is high, requiring organizations to field requests quickly and compare projects against one another
One additional differentiator of AI spend from other technology spend is the diversity of teams or people building AI projects. The low barrier to entry to become an “AI Developer” means that people in non-technical roles will be operating as “Engineers” by creating AI applications, purchasing AI Services, etc. This creates a new challenge for the FinOps team to work with people who may not have as much IT experience, or who have not worked with FinOps before. This likely means that additional work in the Education & Enablement or Practice Management Capability will be required in an AI-targeted Scope.
Creating a Scope for AI spending, like creating any Scope, is not necessarily about including all of the AI spending going on, but rather about creating a lens on the AI spending that is important to handle differently from other technology cost and usage.
Driven by your organization’s goals and priorities for AI, a differentiated AI Scope might focus on some of the Capabilities listed. See the Framework Domains & Capabilities section for more information.
- Allocation of costs may be more complex, particularly if many projects at once are using the same types of services, and because fast moving teams may not be tagging or identifying their spend in all cases
- Forecasting may be much more challenging for new technology areas, leading to more forecast variance, requiring shorter forecasting windows. Funding for projects may need to be revisited more frequently until forecast accuracy is improved
- Unit Economics for experimental AI projects may be more challenging in the short term, and should be a key area for organizations to focus on to compare AI projects
- Rate Optimization may be challenging both for vendor rate negotiation and discount purchasing. Short-term, bursty usage patterns for early AI projects may point toward fewer commitment purchases, but resource scarcity may require commitments, so careful consideration of purchases is important here.
- FinOps Practice Operations and FinOps Education & Enablement will be critical to help those not aware of FinOps Principles and the practice in general, and to deal effectively with high volumes of AI projects through an AI Investment Council or similar review organization
- Policy & Governance will be challenging in managing AI projects because of perceived demand to move faster and innovate. However, this is not the time to abandon governance, but rather the time to mature that Capability to make it responsive to the needs of the organization.
Another consideration is the actual people serving in FinOps Persona roles may be broader than for other technology cost areas. The nature of AI solutions can enable non-technical people to serve as developers of AI systems, giving them responsibilities in the “Engineering” Persona. Likewise, procurement of AI solutions may be done differently than for traditional IT services, and the Product owner for an AI productivity tool may not be in the traditional IT group. Watch for the behaviors of the various people involved with AI solutions to determine what Persona roles they might be filling.
FinOps Personas

FinOps Practitioner
As a FinOps Practitioner Persona, I will…
- Participate in or lead coordinated AI investment planning discussions. AI use will be new to many organizations. Architectural and operational patterns will not be fully understood. Richer coordination between persona groups, platform teams, and others will be required, particularly in the short term
- Clearly define the differentiation in the FinOps practice tying specific practice changes to the outcomes desired for the AI development in the Scope.
- Ensure that other Personas are being considered in decision making, regardless of whether the people serving in these Personas are traditionally included in FinOps practice

Engineering
As a FinOps Engineering Persona, I will…
- Ensure transparency of my decision making and assumptions to all stakeholders
- Balance the need for innovation and speed with the governance, controls, and approvals required to ensure funding is appropriately allocated (with an AI Investment Council or the FinOps team)
- Ensure that services used for my projects are appropriately evaluated, selected, procured, utilized, and disposed of when no longer required.
- Engage with the FinOps team to appropriately operate within established guidelines for the Engineering persona, even if I am not a traditional IT person

Finance
As a FinOps Finance Persona, I will…
- Participate in the AI Investment Council or similar body to regularly approve, evaluate, and track the impact of AI projects in this Scope
- Evaluate, modify, or provide alternative methods to Forecast, Budget and Chargeback AI costs when required to meet business objectives for this Scope
- Provide appropriate flexibility in procuring, allocating, and funding AI services and vendors in relation to this Scope

Product
As a FinOps Product Persona, I will…
- Participate in the AI Investment Council or similar body to regularly approve, evaluate, and track the impact of AI projects in this Scope
- Create and consistently evaluate the business case for my AI products managed within this Scope
- Engage with Engineering to ensure decision making is cost accountable in addition to supporting innovation
- Clearly understand and communicate the investment status – e.g. Proof of Value, Proof of Concept, Scaling to Production – of AI projects in this Scope to align expectations of their outcomes

Procurement
As a FinOps Procurement Persona, I will…
- Participate in the AI Investment Council or similar body to regularly approve, evaluate, and track the impact of AI projects in this Scope
- Proactively identify ways to streamline or improve procurement channels, processes, vendor selection, or rates by analyzing usage across the organization and implementing improvements where appropriate

Leadership
As a FinOps Leadership Persona, I will…
- Lead or direct the AI Investment Council or similar body to regularly approve, evaluate, and track the impact of AI projects in this Scope
- Consistently provide guidance and overall direction to the organization related to AI usage
- Create clear expectations for AI project outcomes, delivery models, risk profiles, and strategic objectives to allow all personas to make consistent decisions at every level of the organization
- Demand and maintain information in specific domains or areas where you require differentiation in the data, practice, or outcomes from the FinOps team
Framework Domains & Capabilities
This section outlines practical considerations for applying the FinOps Framework within the context of FinOps for AI. Refer to the FinOps Framework for foundational guidance.
Understand Usage & Cost
- Higher level of uncertainty, so data source selection requires higher skill and more labor
- Evaluating the cost-effectiveness of data ingestion is much more difficult
- Different 3rd party providers
- More uncertainty about the quality of data received from vendors
- The challenges of identifying the consumer of the model output are more acute, especially when the consumers of the same model can be different interfaces/functional modules in the same user application (e.g., “tech support chatbot” or “new customer chatbot”)
- Overall architecture complexity typically involves additional tiers, which further complicates traceability
- Lack of generally accepted frameworks for cost allocation across multi-agent workloads
- With inconsistent/incomplete information from vendors about billing, the complexity of allocation increases further
- Specific reporting forms that include dedicated metrics for AI workloads
- Additional data structures required for tracking costs in all needed dimensions and linking them with business outputs
- A wider range of interested stakeholders
- Anomalies in general carry more risks and require more attention
- The frequency of anomaly management processes can be higher
- Higher volatility and more difficulty in establishing correct criteria for defining anomalies
Quantify Business Value
- Estimation of “successful” outputs of a model and their segregation from unsuccessful ones (e.g. irrelevant outputs, hallucinations) is a relatively new and large task
- Additional challenges to estimate quality and accuracy thresholds and the subsequent selection of the optimal AI model in terms of TCO
- Lack of benchmarks, especially applicable to the estimation of unsuccessful outputs of the AI model
- Lots of new tools that are rapidly evolving
- Predictability is generally lower, especially for the Crawl and Walk phases, and much more experience is required to make forecasts
- Complicated and inconsistent pricing and physical consumption volume assessment (for example, for token-based billing) across 3rd-party service providers
- Additional challenges to integrate cloud and non-cloud AI spend forecasts, as well as forecasts for different cost components
- Higher volatility of costs makes trend-based forecasting in general more challenging
- More frequent revision of forecasts is needed
- The accuracy of “top-down” budgeting based on total costs in $ is limited due to the heterogeneity and variability of vendor pricing approaches
- “Bottom-up” budgeting is especially labor-intensive, requiring individual cost estimates for each component while also taking into account the pricing features of a specific vendor
- Additional stakeholder management efforts are needed, especially when piloting new AI projects where consumption volumes are virtually impossible to predict and flexible budgeting is required
- More frequent revision of budgets is needed
- AI use may be spread across many stakeholders, making it harder to assign P&L responsibility for AI services
- A number of new specific metrics important for benchmarking, primarily per-token metrics
- Few “external” benchmarks, and those that exist have low consistency
- The complexity of building “internal” benchmarks, in particular due to the uniqueness of various AI projects within the company
- Additional token-based units and drivers
- In addition to common metrics, specific Workload and Value metrics can be added, for instance, for customer service AI, such as cost-per-call, time-to-close, customer satisfaction score divided by AI costs
Optimize Usage and Cost
- New engineering steps (such as model training)
- Many new AI-specific services offered by cloud providers, which can be incorporated into solutions in different combinations
- More complicated choice between AI-specific and general-purpose tools
- Higher demands for the adoption of microservice architecture
- Processes require more careful and frequent monitoring, which is more labor intensive
- A volatile market requires more up-front commitment to usage, including approaches such as pre-purchases and capacity reservations
- The number of factors which should be taken into account, and their impact, can vary greatly, including factors related to scarcity, commitments, and other concepts
- Approaches are much more dynamic; vendors can change their models (e.g. OpenAI Scale Tier)
- New vendors and their services can create more complicated collaboration processes for organizations that were previously limited to simple models of cloud service consumption
- Subscription management process is more dynamic
- More emerging concerns about per-request environmental impact.
- More attention to unused capacity, which encourages companies to minimize long-term commitments when there is no reliable forecast of high utilization (even when using a serverless structure in the early stages)
Manage the FinOps Practice
- New stakeholders the FinOps team is not familiar with (or who are not familiar with FinOps) may be added
- Higher requirements for the adoption of FinOps practices by all affected teams and their end-to-end penetration into the decentralized business processes of the organization
- New knowledge is needed, taking into account alternative deployment options and pricing models
- Higher demands on facilitation
- Much wider implementation of multiple limits such as quotas, use of reserved capacity, throttles, etc.
- It may be necessary to relax some of the regulations previously adopted for regular cloud projects, especially for pilot and experimental projects
- New specific tools for tracking at the level of individual architecture components
- More frequently evolving tools with more challenges in their integration with each other
- Closer collaboration will be required while service usage is new and learning is going on within the organization
- Once governance, automation, and established architectures are better understood, the need to collaborate may decrease
- The speed of decision making and granularity of the data used in AI will require closer collaboration with Procurement and Allied Persona teams that track licenses, assets, and entitlements over the long term
- Increased engagement with IT Security will likely be long term
- The increased number of AI vendors, many of them small or new, will likely increase the need to coordinate with Procurement
Measures of Success
In addition to traditional measures of success in technology use, AI projects will often be evaluated on measures more specific to the goals or characteristics of AI services.
Strategic Outcome Alignment
- Degree to which an AI project aligns with stated organizational AI objectives
- Presumes a clearly defined set of AI objectives (or highlights the need to create them)
Training Efficiency
- When organizations need to perform training or related model development activities, describes how training costs impact the project
- Total cost (or incremental cost) to train models vs. the resultant model’s performance metrics (e.g. accuracy, precision, specific outcome as defined)
Inference Efficiency
- Describes the efficiency of the normal operating costs of an AI model or agent
- Can be expressed in terms of the cost of a single inference event (prompt) or the cost to arrive at an anticipated outcome
- Tracks the efficiency of deployed or used models, particularly for high volume applications of AI
- Isolates the operating cost of the system, used in conjunction with total ROI metrics
Compliance Effectiveness
- AI projects’ use of and reliance upon data, and their more autonomous access to other computer systems, mean they require critical attention to compliance considerations, including:
- Data Privacy
- Intellectual Property
- Bias and Ethical Compliance
- Industry-specific Regulations (HIPAA, PCI, GDPR, Sovereignty, etc.)
- Data Retention
- Environmental Regulations
- AI-specific regulation
- Specific measures of success might be set up in any of these areas that are critical to the success of the organization
Token Consumption Efficiency
- One of the primary cost meters for AI usage is tokens. Though there are other important cost drivers to include, token use spans models and can be a normalizing metric of usage
- Cost per Token calculated as the Total Cost of the system use over the number of tokens used
- When total cost includes other non-token costs, this can also take into account the variation in cost of different model purchasing or usage models (e.g. model serving platforms vs. direct SaaS model)
- API tools or token-based reporting (and the ingestion of token usage reporting by vendors) is often required to calculate token consumption effectively
Return on Investment (versus Expectations)
- Measures the financial value return generated by AI initiatives relative to their cost
- Should be defined as a template for AI projects by an AI investment council or similar body to achieve consistency between projects
- Deciding what costs to include in Financial benefits and Cost is the critical first step
Time to First Prompt
- Measures the time it takes to move from inception to working prototype/production as an organization becomes more experienced with AI projects and proofs of concept
- First Prompt assumes the first use by a target audience
- Can be used to compare the performance of different AI teams developing features or systems
- May be one of several gate outcomes tracked by an AI investment council to incrementally fund or evaluate projects
Productivity Gain
- Describes the impact of an AI project on an existing or understood process or workflow
- May be expressed in terms of Developer Productivity (lines of code, commits, etc.) if used to improve development, or Incident Productivity (cases closed, tickets managed, etc.) if used in a service management domain, etc.
KPIs
Cost per Inference
Measures the cost incurred for a single inference (i.e., when an AI model processes an input and generates an output). Useful for applications like chatbots, recommendation engines, or image recognition systems. Used for tracking the operational efficiency of deployed AI models, especially for high-volume applications. Helps optimize resource allocation and identify cost spikes due to inefficient code or infrastructure.
Reporting & Analytics
Data Ingestion
Workload Optimization
Unit Economics
Formula
Cost Per Inference = Total Inference Costs / Number of Inference Requests
Candidate Data Sources:
- Cloud billing data
- Logs from AI platforms (e.g., OpenAI, Vertex AI).
Example:
- If the total inference cost is $5,000 and the system processes 100,000 inference requests, the cost per inference is: $5,000 / 100,000 = $0.05 per request.
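A minimal sketch of this calculation in Python (the function name and the empty-period guard are illustrative, not from any provider's tooling):

```python
def cost_per_inference(total_inference_cost: float, inference_requests: int) -> float:
    """Cost Per Inference = Total Inference Costs / Number of Inference Requests."""
    if inference_requests == 0:
        raise ValueError("no inference requests in the period")
    return total_inference_cost / inference_requests

# Example from above: $5,000 over 100,000 requests -> $0.05 per request
print(cost_per_inference(5_000, 100_000))  # 0.05
```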
Training Cost Efficiency
Measures the total cost to train a machine learning (ML) model divided by the model’s performance metrics (e.g., accuracy, precision). Training costs for large AI models like GPT can be significant. Measuring efficiency ensures cost-effective resource usage while maintaining acceptable performance.
Reporting & Analytics
Data Ingestion
Workload Optimization
Unit Economics
Formula
Training Cost Efficiency = Training Costs / Performance Metric
Candidate Data Sources:
- API usage reports
- Dashboards from AI platforms
- Logs from AI platforms
- Cloud billing data
Example:
- A 95% accurate model trained at $10,000 yields an efficiency of approximately $105 per percentage point of accuracy ($10,000 / 95 ≈ $105).
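A hedged sketch of the same arithmetic; the choice of performance metric (accuracy points here) is an assumption your organization should standardize:

```python
def training_cost_efficiency(training_cost: float, performance_metric: float) -> float:
    """Training Cost Efficiency = Training Costs / Performance Metric.

    performance_metric is whatever the team agrees to track (e.g., accuracy
    expressed as a percentage, precision, or a custom outcome score).
    """
    return training_cost / performance_metric

# Example from above: $10,000 for a 95%-accurate model -> ~$105 per accuracy point
print(round(training_cost_efficiency(10_000, 95), 2))  # 105.26
```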
Token Consumption Metrics
Measures the cost of token-based models (e.g., OpenAI GPT) based on input/output token usage. This KPI helps predict and control costs for LLMs, which charge per token. Facilitates prompt engineering to reduce token consumption without degrading output quality.
Reporting & Analytics
Data Ingestion
Workload Optimization
Unit Economics
Formula
Cost Per Token = Total Cost / Number of Tokens Used
Candidate Data Sources:
- API usage reports
- Dashboards from AI platforms
- Logs from AI platforms
- Cloud billing data
Example:
- If the total cost for inference is $2,500 and the number of tokens processed is 1,000,000, the cost per token is: $2,500/1,000,000 = $0.0025 per token
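A minimal sketch, with an optional blended-rate refinement for providers that price input and output tokens differently (the per-token rates shown are hypothetical, not any vendor's actual prices):

```python
def cost_per_token(total_cost: float, tokens_used: int) -> float:
    """Cost Per Token = Total Cost / Number of Tokens Used."""
    return total_cost / tokens_used

# Example from above: $2,500 over 1,000,000 tokens -> $0.0025 per token
print(cost_per_token(2_500, 1_000_000))

# Blended rate across an input/output mix (all numbers hypothetical):
input_rate, output_rate = 0.000003, 0.000015      # $ per token
input_tokens, output_tokens = 800_000, 200_000
blended = (input_rate * input_tokens + output_rate * output_tokens) / (
    input_tokens + output_tokens
)
print(blended)  # blended $ per token for this traffic mix
```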
Resource Utilization Efficiency
Measures the efficiency of hardware resources like GPUs and TPUs during AI training and inference. This KPI identifies underutilized or over-provisioned resources, ensuring cost savings, and tracks the performance of autoscaling mechanisms.
Reporting & Analytics
Data Ingestion
Workload Optimization
Unit Economics
Formula
Resource Utilization Efficiency = Actual Resource Utilization / Provisioned Capacity
Candidate Data Sources:
- API usage reports
- Dashboards from AI platforms
- Logs from AI platforms
- Cloud billing data
Example:
- If the actual resource utilization is 800 GPU hours and the provisioned capacity is 1,000 GPU hours, the resource utilization efficiency is: 800/1,000 = 0.8 or 80%
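A minimal sketch of the ratio, assuming utilization and capacity are both expressed in the same unit (GPU-hours here):

```python
def resource_utilization_efficiency(actual_hours: float, provisioned_hours: float) -> float:
    """Resource Utilization Efficiency = Actual Resource Utilization / Provisioned Capacity."""
    if provisioned_hours <= 0:
        raise ValueError("provisioned capacity must be positive")
    return actual_hours / provisioned_hours

# Example from above: 800 GPU-hours used of 1,000 provisioned -> 80%
print(f"{resource_utilization_efficiency(800, 1_000):.0%}")
```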
Anomaly Detection Rate
Measures the frequency and cost impact of anomalies in AI spending, such as sudden cost spikes or unexpected usage patterns. This KPI enables proactive identification and mitigation of runaway costs.
Anomaly Management
Reporting & Analytics
Data Ingestion
Formula
Anomaly Cost % = Total Cost of Anomaly Spikes / Total AI Spend
where (adjust for your needs):
- Green (< 2%): Healthy. Normal fluctuations.
- Yellow (2-7%): Warning. Minor anomaly trend.
- Red (> 7%): Critical. Runaway costs.
Candidate Data Sources:
- API usage reports
- Dashboards from AI platforms
- Logs from AI platforms
- Cloud billing data
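A minimal sketch that computes the anomaly cost percentage and maps it onto the color bands above (the default thresholds mirror the list; adjust for your needs):

```python
def anomaly_cost_band(anomaly_spend: float, total_ai_spend: float,
                      yellow: float = 0.02, red: float = 0.07) -> tuple[float, str]:
    """Anomaly Cost % = Total Cost of Anomaly Spikes / Total AI Spend."""
    pct = anomaly_spend / total_ai_spend
    if pct < yellow:
        band = "Green: healthy, normal fluctuations"
    elif pct <= red:
        band = "Yellow: warning, minor anomaly trend"
    else:
        band = "Red: critical, runaway costs"
    return pct, band

# 4.2% of AI spend tied to anomaly spikes lands in the yellow band
print(anomaly_cost_band(4_200, 100_000))
```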
Cost per API Call
Measures the average cost for each API call made to AI services. This KPI helps track the efficiency of managed AI services like AWS SageMaker or Google Vertex AI.
Unit Economics
Reporting & Analytics
Formula
Cost Per API Call = Total API Costs / Number of API Calls
Candidate Data Sources:
- API usage reports
- Dashboards from AI platforms
- Logs from AI platforms
- Cloud billing data
Example:
- If the total API costs are $1,200 and the number of API calls made is 240,000, the cost per API call is: $1,200/240,000 = $0.005 per API call
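The same pattern as cost per inference applies here; a minimal sketch:

```python
def cost_per_api_call(total_api_cost: float, api_calls: int) -> float:
    """Cost Per API Call = Total API Costs / Number of API Calls."""
    return total_api_cost / api_calls

# Example from above: $1,200 over 240,000 calls -> $0.005 per call
print(cost_per_api_call(1_200, 240_000))
```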
Time to Achieve Business Value
Measures the time it takes to achieve measurable business value from AI initiatives. This KPI uses a “breakeven point” of doing a function with AI versus the cost of performing it some other way (like with labor). It provides awareness of the forecasted days to achieve the full business benefit vs. the actual business results achieved, and an understanding of the opportunity costs and value per month.
Forecasting
Unit Economics
Reporting & Analytics
Planning & Estimating
Formula
Time to Value (days) = Total Value associated with AI Service / daily Cost of Alternative solution
Candidate Data Sources:
- API usage reports
- Dashboards from AI platforms
- Logs from AI platforms
- Cloud billing data
Example:
- If an AI initiative starts on January 1, 2024, and the model is successfully deployed on April 1, 2024, the Time to Value is: April 1, 2024 − January 1, 2024 = 3 months.
- If the forecast was to reach $100k/mo of business benefit within 1 month, but it actually took 5 months and achieved only $50k/mo, then 5 months is the time-to-business-value metric to track and seek to improve.
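A hedged sketch of the breakeven framing, assuming the value delivered and the alternative's daily cost are already known (all figures illustrative):

```python
def time_to_value_days(total_ai_value: float, alternative_daily_cost: float) -> float:
    """Time to Value (days) = Total Value associated with AI Service
    / daily Cost of Alternative solution."""
    return total_ai_value / alternative_daily_cost

# Hypothetical: $90,000 of value delivered vs. a $1,500/day manual alternative
print(time_to_value_days(90_000, 1_500))  # 60.0 days to match the alternative's cost
```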
Time to First Prompt
Measures the elapsed engineering calendar time it takes to ready a service for first use, or the time to get from POC/experiment into production use. Mature AI patterns and tooling automation help engineers deliver more features faster; this KPI provides awareness of how fast your engineers can take ideas and user stories and turn them into production deliverables. It highlights the tradeoffs of developing the service with different methods, balancing accuracy (quality) against expense (cost).
Reporting & Analytics
Planning & Estimating
Formula
Time to First Prompt = Deployment Date – Start Date of Initiative Development
Example:
- If an AI initiative starts on January 1, 2024, and the model is successfully deployed on April 1, 2024, the Time to First Prompt is: April 1, 2024 − January 1, 2024 = 3 months
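A minimal sketch of the date arithmetic:

```python
from datetime import date

def time_to_first_prompt(start: date, deployed: date) -> int:
    """Time to First Prompt = Deployment Date - Start Date of Initiative Development."""
    return (deployed - start).days

# Example from above: January 1, 2024 to April 1, 2024 -> 91 days (~3 months)
print(time_to_first_prompt(date(2024, 1, 1), date(2024, 4, 1)))
```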
Value for AI Initiatives
Measures the financial or value return generated by AI initiatives relative to their cost. This KPI helps to justify the investment in AI services and aligns them with business outcomes.
Benchmarking
Reporting & Analytics
Unit Economics
Formula
Return On Investment = (Financial Benefits – Costs) / Costs * 100
Candidate Data Sources:
- API usage reports
- Dashboards from AI platforms
- Logs from AI platforms
- Cloud billing data
Example:
- If the financial benefits from an AI project are $50,000 and the total costs incurred are $20,000, the ROI is: ($50,000 − $20,000) / $20,000 × 100 = 150%
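A minimal sketch; deciding which financial benefits and costs to include (per the AI investment council's template) happens before this arithmetic:

```python
def roi_percent(financial_benefits: float, costs: float) -> float:
    """Return On Investment = (Financial Benefits - Costs) / Costs * 100."""
    return (financial_benefits - costs) / costs * 100

# Example from above: $50,000 benefit on $20,000 cost -> 150%
print(roi_percent(50_000, 20_000))
```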
See the FinOps KPI Library for a comprehensive list of KPIs that could be considered for this Scope.
FOCUS-to-Scope Alignment
The FinOps Open Cost and Usage Specification (FOCUS) is an open specification that defines clear requirements for data providers to produce consistent cost and usage datasets. FOCUS makes it easier to understand all technology spending so you can make data-driven decisions that drive better business value.
AI usage includes many types of resource and services usage, so many of the component data of AI usage will be typical resource usage from public cloud cost and usage, SaaS billing data, and the like. However, AI token usage and service feature usage also includes abstracted meters not directly tied to hardware, such as tokens, API calls, resulting outcomes, etc. These elements rely upon data generators to produce usage data detailing the tokens used, calls made, outcomes achieved, and each of these also require appropriate reconciliation mechanisms and often bespoke ways of capturing usage metrics internally.
As a result, the data required to report upon, allocate, and perform other FinOps functions will generally be additive to existing public cloud or data center cost and usage data, and will likely be more granular and higher volume.
Several of the public cloud data generators already include service usage in tokens and by SKU for AI services; data clouds such as Snowflake generate FOCUS data that also details usage of AI SKUs; and the project is seeing adoption from AI-specific cloud providers such as Nebius, which provide FOCUS-formatted usage data for AI services.
In these cases, there are no AI-specific columns, but rather SKU IDs indicating token charges, Consumed Units specifying tokens, and a Consumed Quantity of tokens used, for example. Over time, there may be a need to develop AI-specific columns for the FOCUS Specification, in addition to gaining adoption from AI data generators to provide consistent usage data.
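As a hedged illustration of working with such data, the sketch below isolates token-denominated charges from a FOCUS export and derives cost per token by SKU. The column IDs (ConsumedUnit, ConsumedQuantity, BilledCost, SkuId) are defined by FOCUS, but the literal unit string matched here and the file name are assumptions; check your provider's actual values:

```python
import pandas as pd

# Load a FOCUS-formatted export (file name is an assumption for this example)
focus = pd.read_csv("focus_export.csv")

# Keep rows whose consumed unit looks token-denominated; providers vary in
# the exact unit strings they emit, so verify against your own data
tokens = focus[focus["ConsumedUnit"].str.contains("token", case=False, na=False)]

# Aggregate cost and token volume per SKU, then derive cost per token
summary = tokens.groupby("SkuId").agg(
    total_cost=("BilledCost", "sum"),
    total_tokens=("ConsumedQuantity", "sum"),
)
summary["cost_per_token"] = summary["total_cost"] / summary["total_tokens"]
print(summary.sort_values("total_cost", ascending=False))
```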
FOCUS Columns
Cost allocated. Additional costs could be added in here for AI allocation. Generally zero for usage lines, can be the purchase price for purchase lines.
A Billing Account ID is a provider-assigned identifier for a billing account. Billing accounts are commonly used for scenarios like grouping based on organizational constructs, invoice reconciliation and cost allocation strategies.
A Billing Account Name is a display name assigned to a billing account. Billing accounts are commonly used for scenarios like grouping based on organizational constructs, invoice reconciliation and cost allocation strategies.
Billing Account Type is a provider-assigned name to identify the type of billing account. Billing Account Type is a readable display name and not a code. Billing Account Type is commonly used for scenarios like mapping FOCUS and provider constructs, summarizing costs across providers, or invoicing and chargeback.
Represents the charge currency of internal pricing (“USD”, “CAD”, “EUR”, etc.).
Billing Period Start represents the inclusive start bound of a billing period. For example, a time period where Billing Period Start is ‘2024-01-01T00:00:00Z’ and Billing Period End is ‘2024-02-01T00:00:00Z’ includes charges for January since Billing Period Start represents the inclusive start bound, but does not include charges for February since BillingPeriodEnd represents the exclusive end bound.
Charge Category represents the highest-level classification of a charge based on the nature of how it is billed. Charge Category is commonly used to identify and distinguish between types of charges that may require different handling – for example, the classification of charges like “Usage”, “Purchase”, and “Tax”, or corrections.
Charge Class indicates whether the row represents a correction to a previously invoiced billing period. Charge Class is commonly used to differentiate corrections from regularly incurred charges.
A Charge Description provides a high-level context of a row without requiring additional discovery. This column is a self-contained summary of the charge’s purpose and price. It typically covers a select group of corresponding details across a billing dataset or provides information not otherwise available.
Indicates how often a charge will occur. The Charge Frequency is commonly used to understand recurrence periods (e.g., monthly, yearly), and differentiate between one-time and recurring fees for purchases.
The start/end time period for when the usage occurs (hourly or daily). Charge Period Start represents the inclusive start bound of a charge period. For example, a time period where Charge Period Start is ‘2024-01-01T00:00:00Z’ and Charge Period End is ‘2024-01-02T00:00:00Z’ includes charges for January 1 since Charge Period Start represents the inclusive start bound, but does not include charges for January 2 since Charge Period End represents the exclusive end bound.
Commitment Discount Category indicates whether the commitment discount identified in the CommitmentDiscountId column is based on usage quantity or cost (aka “spend”). The CommitmentDiscountCategory column is only applicable to commitment discounts and not negotiated discounts.
A Commitment Discount ID is the identifier assigned to a commitment discount by the provider. Commitment Discount ID is commonly used for scenarios like chargeback for commitments and savings per commitment discount. The CommitmentDiscountId column is only applicable to commitment discounts and not negotiated discounts.
A Commitment Discount Name is the display name assigned to a commitment discount. The CommitmentDiscountName column is only applicable to commitment discounts and not negotiated discounts.
Commitment Discount Quantity is the amount of a commitment discount purchased or accounted for in commitment discount related rows that is denominated in Commitment Discount Units. The aggregated Commitment Discount Quantity across purchase records, pertaining to a particular Commitment Discount ID during its term, represents the total Commitment Discount Units acquired with that commitment discount. For committed usage, the Commitment Discount Quantity is either the number of Commitment Discount Units consumed by a row that is covered by a commitment discount or is the unused portion of a commitment discount over a charge period. Commitment Discount Quantity is commonly used in commitment discount analysis and optimization use cases and only applies to commitment discounts, not negotiated discounts.
When CommitmentDiscountCategory is “Usage” (usage-based commitment discounts), the Commitment Discount Quantity reflects the predefined amount of usage purchased or consumed. If commitment discount flexibility is applicable, this value may be further transformed based on additional, provider-specific requirements. When CommitmentDiscountCategory is “Spend” (spend-based commitment discounts), the Commitment Discount Quantity reflects the predefined amount of spend purchased or consumed.
Commitment Discount Status indicates whether the charge corresponds with the consumption of a commitment discount identified in the CommitmentDiscountId column or the unused portion of the committed amount. The CommitmentDiscountStatus column is only applicable to commitment discounts and not negotiated discounts.
Commitment Discount Type is a provider-assigned name to identify the type of commitment discount applied to the row. The CommitmentDiscountType column is only applicable to commitment discounts and not negotiated discounts.
Commitment Discount Unit represents the provider-specified measurement unit indicating how a provider measures the Commitment Discount Quantity of a commitment discount. The CommitmentDiscountUnit column is only applicable to commitment discounts and not negotiated discounts.
The volume of a SKU associated with a resource or service used in vCPU-hours, GB-months, GB transferred, etc.
The Consumed Quantity represents the volume of a metered SKU associated with a resource or service used, based on the Consumed Unit. Consumed Quantity is often derived at a finer granularity or over a different time interval when compared to the Pricing Quantity (complementary to Pricing Unit) and focuses on resource and service consumption, not pricing and cost.
Represents the measurement unit of usage (like “GB”) for a SKU associated with a resource or service.
The Consumed Unit represents a provider-specified measurement unit indicating how a provider measures usage of a metered SKU associated with a resource or service. Consumed Unit complements the Consumed Quantity metric. It is often listed at a finer granularity or over a different time interval when compared to Pricing Unit (complementary to Pricing Quantity), and focuses on resource and service consumption, not pricing and cost.
Contracted Cost represents the cost calculated by multiplying contracted unit price and the corresponding Pricing Quantity. Contracted Cost is denominated in the Billing Currency and is commonly used for calculating savings based on negotiation activities, by comparing it with List Cost. If negotiated discounts are not applicable, the Contracted Cost defaults to the List Cost.
The Contracted Unit Price represents the agreed-upon unit price for a single Pricing Unit of the associated SKU, inclusive of negotiated discounts, if present, while excluding negotiated commitment discounts or any other discounts. This price is denominated in the Billing Currency. The Contracted Unit Price is commonly used for calculating savings based on negotiation activities. If negotiated discounts are not applicable, the Contracted Unit Price defaults to the List Unit Price.
Effective Cost represents the amortized cost of the charge after applying all reduced rates, discounts, and the applicable portion of relevant, prepaid purchases (one-time or recurring) that covered this charge. The amortized portion included should be proportional to the Pricing Quantity and the time granularity of the data. Since amortization breaks down and spreads the cost of a prepaid purchase, to subsequent eligible charges, the Effective Cost of the original prepaid charge is set to 0. Effective Cost does not mix or “blend” costs across multiple charges of the same service. This cost is denominated in the Billing Currency. The Effective Cost is commonly utilized to track and analyze spending trends.
An Invoice Issuer is an entity responsible for issuing payable invoices for the resources or services consumed. It is commonly used for cost analysis and reporting scenarios.
The cost calculated by multiplying ListUnitPrice and the corresponding PricingQuantity. Generally zero for usage lines, can be retail price for purchase lines.
A SKU-specific suggested unit price for a single PricingUnit, at retail price, for the FinOps Practitioner that wants to track retail pricing.
Pricing Category describes the pricing model used for a charge at the time of use or purchase. It can be useful for distinguishing between charges incurred at the list unit price or a reduced price and exposing optimization opportunities, like increasing commitment discount coverage.
Pricing Currency is the national or virtual currency denomination that a resource or service was priced in. Pricing Currency is commonly used in scenarios where different currencies are used for pricing and billing.
The Pricing Currency Contracted Unit Price represents the agreed-upon unit price for a single Pricing Unit of the associated SKU, inclusive of negotiated discounts, if present, while excluding negotiated commitment discounts or any other discounts. This price is denominated in the Pricing Currency. When negotiated discounts do not apply to unit prices and instead are applied to exchange rates, the Pricing Currency Contracted Unit Price defaults to the Pricing Currency List Unit Price. The Pricing Currency Contracted Unit Price is commonly used to calculate savings based on negotiation activities.
The Pricing Currency Effective Cost represents the cost of the charge after applying all reduced rates, discounts, and the applicable portion of relevant, prepaid purchases (one-time or recurring) that covered this charge, as denominated in Pricing Currency. This allows the practitioner to perform a conversion from either 1) a national currency to a virtual currency (e.g., tokens to USD), or 2) one national currency to another (e.g., EUR to USD).
The Pricing Currency List Unit Price represents the suggested provider-published unit price for a single Pricing Unit of the associated SKU, exclusive of any discounts. This price is denominated in the Pricing Currency. The Pricing Currency List Unit Price is commonly used for calculating savings based on various rate optimization activities.
The volume of a given SKU associated with a resource or service used or purchased (vCPU-hours, GB-months, GB transfer), based on the PricingUnit.
SKU level specified measurement unit (GB) for determining unit prices, indicating how the provider rates measured usage and purchase quantities.
A Provider is an entity that makes the resources or services available for purchase. It is commonly used for cost analysis and reporting scenarios.
A Publisher is an entity that produces the resources or services that were purchased. It is commonly used for cost analysis and reporting scenarios.
Note: PublisherName has been deprecated in FOCUS v1.3 and will be removed in v1.4.
A Region ID is a provider-assigned identifier for an isolated geographic area where a resource is provisioned or a service is provided. The region is commonly used for scenarios like analyzing cost and unit prices based on where resources are deployed.
Region Name is a provider-assigned display name for an isolated geographic area where a resource is provisioned or a service is provided. Region Name is commonly used for scenarios like analyzing cost and unit prices based on where resources are deployed.
A Resource ID is an identifier assigned to a resource by the provider. The Resource ID is commonly used for cost reporting, analysis, and allocation scenarios.
The Resource Name is a display name assigned to a resource. It is commonly used for cost analysis, reporting, and allocation scenarios.
Resource Type describes the kind of resource the charge applies to. A Resource Type is commonly used for scenarios like identifying cost changes in groups of similar resources and may include values like Virtual Machine, Data Warehouse, and Load Balancer.
Linked to layer (“Compute”, “Databases”, “Networking”). Broad classification, derived from the rate card.
The Service Category is the highest-level classification of a service based on the core function of the service. Each service should have one and only one category that best aligns with its primary purpose. The Service Category is commonly used for scenarios like analyzing costs across providers and tracking the migration of workloads across fundamentally different architectures.
Internal name of the offering or tier. Linked to layer (“Compute”, “Databases”, “Networking”).
A service represents an offering that can be purchased from a provider (e.g., cloud virtual machine, SaaS database, professional services from a systems integrator). A service offering can include various types of usage or other charges. For example, a cloud database service may include compute, storage, and networking charges.
The Service Name is a display name for the offering that was purchased. The Service Name is commonly used for scenarios like analyzing aggregate cost trends over time and filtering data to investigate anomalies.
A SKU ID is a provider-specified unique identifier that represents a specific SKU. SKUs are quantifiable goods or service offerings in a FOCUS dataset that represent specific functionality and technical specifications.
Each SKU ID represents a unique set of features that can be sold at different price points or SKU Prices. SKU ID is consistent across all pricing variations, which may differ based on multiple factors beyond the common functionality and technical specifications.
SKU ID should be consistent across pricing variations of a good or service to facilitate price comparisons for the same functionality, like where the functionality is provided or how it’s paid for. SKU ID can be referenced on a catalog or price list published by a provider to look up detailed information about the SKU. The composition of the properties associated with the SKU ID may differ across providers. SKU ID is commonly used for analyzing and comparing costs for the same SKU across different price details (e.g., term, tier, location).
SKU Meter describes the functionality being metered or measured by a particular SKU in a charge.
Providers often have billing models in which multiple SKUs exist for a given service to describe and bill for different functionalities for that service. For example, an object storage service may have separate SKUs for functionalities such as object storage, API requests, data transfer, encryption, and object management. This field helps practitioners understand which functionalities are being metered by the different SKUs that appear in a FOCUS dataset.
SKU Price Details represent a list of SKU Price properties (key-value pairs) associated with a specific SKU Price ID. These properties include qualitative and quantitative properties of a SKUs (e.g., functionality and technical specifications), along with core stable pricing properties (e.g., pricing terms, tiers, etc.), excluding dynamic or negotiable pricing elements such as unit price amounts, currency (and related exchange rates), temporal validity (e.g., effective dates), and contract- or negotiation-specific factors (e.g., contract or account identifiers, and negotiable discounts).
The composition of properties associated with a specific SKU Price may differ across providers and across SKUs within the same provider. However, the exclusion of dynamic or negotiable pricing properties should ensure that all charges with the same SKU Price ID share the same SKU Price Details, i.e., that SKU Price Details remains consistent across different billing periods and billing accounts within a provider.
SKU Price Details helps practitioners understand and distinguish SKU Prices, each identified by a SKU Price ID and associated with a used or purchased resource or service. It can also help determine the quantity of units for a property when it holds a numeric value (e.g., CoreCount), even when its unit differs from the one in which the SKU is priced and charged, thus supporting FinOps capabilities like unit economics. Additionally, the SKU Price Details may be used to analyze costs based on pricing properties such as terms and tiers.
SKU Price ID is a provider-specified unique identifier that represents a specific SKU Price associated with a resource or service used or purchased. It serves as a key reference for a SKU Price in a price list published by a provider, allowing practitioners to look up detailed information about the SKU Price.
The composition of properties associated with the SKU Price ID may differ across providers and across SKUs within the same provider. However, the exclusion of dynamic or negotiable pricing properties, such as unit price amount, currency (and related exchange rates), temporal validity (e.g., effective dates), and contract- or negotiation-specific elements (e.g., contract or account identifiers, and negotiable discounts), ensures that the SKU Price ID remains consistent across different billing periods and billing accounts within a provider. This consistency enables efficient filtering of charges to track price fluctuations (e.g., changes in unit price amounts) over time and across billing accounts, for both list and contracted unit prices. Additionally, the SKU Price ID is commonly used to analyze costs based on pricing properties such as terms and tiers.