GKE Metering

GCP
by Scott Lapish

Consideration needs to be given to the capacity requirements of the containers running in a pod. If requests are not sized to the actual workload, a pod can request more capacity than it needs, which leads to waste as the autoscaler scales the cluster to the requested load rather than actual usage. Alternatively, a pod can be under-provisioned, resulting in poor performance.
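
As a minimal sketch of where these requests are declared, the snippet below builds a container spec with the Kubernetes Python client. The container name, image, and resource values are illustrative assumptions; in practice they would be tuned to the workload's observed consumption.

```python
from kubernetes import client

# Minimal sketch of right-sizing a container spec (values are illustrative).
# Requests drive scheduling and autoscaler decisions; limits cap bursts.
api_container = client.V1Container(
    name="api",                              # hypothetical container name
    image="gcr.io/example-project/api:1.0",  # hypothetical image
    resources=client.V1ResourceRequirements(
        # Set requests close to typical observed consumption; over-requesting
        # here is what drives unnecessary cluster scale-out and waste.
        requests={"cpu": "250m", "memory": "256Mi"},
        # Leave headroom above the request so bursts are not throttled.
        limits={"cpu": "500m", "memory": "512Mi"},
    ),
)

pod_spec = client.V1PodSpec(containers=[api_container])
print(pod_spec.containers[0].resources.requests)
```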

Configuring GKE metering and developing request-versus-consumption dashboards can help teams adjust their configurations to match actual usage. Workloads whose requested resources are consistently higher than what is actually consumed inflate both the overall shared cluster costs and individual team costs when the metered costs are redistributed through the chargeback process.
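
One way to feed such a dashboard is the BigQuery export that GKE usage metering produces, which records resource requests and consumption separately. The sketch below compares the two per namespace using the BigQuery Python client; the project ID, dataset name, table names, and join keys are assumptions to adapt to your own export.

```python
from google.cloud import bigquery

# Assumptions: project ID, the dataset chosen when enabling GKE usage
# metering, and the exported table/column names may differ in your setup.
PROJECT = "example-project"
DATASET = "gke_usage_metering"

QUERY = f"""
SELECT
  req.namespace,
  req.resource_name,
  SUM(req.usage.amount) AS requested,  -- cpu seconds / memory byte-seconds
  SUM(con.usage.amount) AS consumed
FROM `{PROJECT}.{DATASET}.gke_cluster_resource_usage` AS req
LEFT JOIN `{PROJECT}.{DATASET}.gke_cluster_resource_consumption` AS con
  ON  req.cluster_name  = con.cluster_name
  AND req.namespace     = con.namespace
  AND req.resource_name = con.resource_name
  AND req.start_time    = con.start_time
GROUP BY req.namespace, req.resource_name
ORDER BY requested DESC
"""

client = bigquery.Client(project=PROJECT)
for row in client.query(QUERY).result():
    # A large gap between requested and consumed flags a namespace for
    # right-sizing before costs are redistributed through chargeback.
    print(row.namespace, row.resource_name, row.requested, row.consumed)
```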
