This work is licensed under CC BY 4.0.
FinOps X 2023, San Diego, June 27-30

Rate Optimization Options from Google Cloud Platform

by Pathik Sharma, Google Cloud Platform

“The better optics you have, the more equity you can pinpoint.”

Many rate optimization exercises are quick wins, while others are transformative, long-term initiatives. CUDs, SUDs, and BigQuery reservations fall into the first category: understanding and making better use of them is a quick win available to any GCP user.

Talking CUDs and SUDs

Any FinOps practitioner interested in rate optimization needs to learn the basics of CUDs, SUDs, and PVMs. There's a practical side as well: GCP users need to monitor CUDs and analyze their performance in the GCP self-serve console. For example, you can create cost breakdowns between CUDs and SUDs from this view.
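To make the SUD mechanics concrete, here is a small sketch of how a sliding-scale sustained use discount accrues over a month. The tier fractions below mirror the published N1-series schedule (each successive quarter of the month is billed at a lower fraction of the on-demand rate); the hourly price is hypothetical, and you should confirm the current schedule against Google Cloud's pricing pages.

```python
# Sliding-scale sustained use discount (SUD) sketch.
# Tier fractions follow the N1-style schedule: each quarter of the
# month of usage is billed at a decreasing fraction of the on-demand rate.
TIERS = [1.00, 0.80, 0.60, 0.40]  # billed fraction per 25% of the month


def sud_cost(on_demand_hourly, hours_run, hours_in_month=730):
    """Cost of running a VM for `hours_run` hours under SUD tiers."""
    cost = 0.0
    remaining = hours_run
    tier_size = hours_in_month / 4  # hours covered by each tier
    for fraction in TIERS:
        hours = min(remaining, tier_size)
        cost += hours * on_demand_hourly * fraction
        remaining -= hours
        if remaining <= 0:
            break
    return cost


# A VM running the entire month at a hypothetical $0.10/hour:
full_month = sud_cost(0.10, 730)
print(round(full_month, 2))                       # blended monthly cost
print(round(1 - full_month / (0.10 * 730), 2))    # net discount vs on-demand
```

Running the full month works out to a net 30% discount under this schedule, which is why SUDs are often described as an automatic reward for always-on workloads.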

FinOps practitioners should also consider Preemptible VMs (PVMs), similar to Spot Instances on AWS. They're very affordable (up to an 80% discount) and ideal for batch jobs and fault-tolerant workloads, though the average preemption rate can vary between 5% and 15% per day, per project.
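A quick back-of-envelope calculation shows why PVMs remain attractive even after accounting for preemptions. The prices, the preemption rate, and the rework fraction below are all assumptions for the sketch, not published figures:

```python
# Back-of-envelope comparison of on-demand vs preemptible cost for a
# fault-tolerant batch job. All numbers below are hypothetical.
ON_DEMAND_HOURLY = 0.10   # $/hour on demand (hypothetical)
PVM_DISCOUNT = 0.80       # the article cites discounts as high as 80%
PREEMPTION_RATE = 0.10    # 10%/day, midpoint of the 5-15% range
REWORK_FRACTION = 0.05    # assume each preemption wastes 5% of a day's work

pvm_hourly = ON_DEMAND_HOURLY * (1 - PVM_DISCOUNT)
# Inflate the PVM rate to account for re-running preempted work.
effective_pvm_hourly = pvm_hourly * (1 + PREEMPTION_RATE * REWORK_FRACTION)

print(f"on-demand:                ${ON_DEMAND_HOURLY:.4f}/h")
print(f"preemptible (w/ rework):  ${effective_pvm_hourly:.4f}/h")
```

Even with the rework overhead, the effective preemptible rate stays far below on-demand, which is why checkpointed batch workloads are the canonical PVM use case.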

Cost optimization considerations for GCP BigQuery

GCP’s BigQuery is a serverless, highly scalable, and cost-effective multi-cloud data warehouse designed for business agility. Its pricing and performance are different from running databases on typical VMs. You can use BigQuery on demand, but you pay a much higher rate; ideally, users want flat-rate pricing for the best cost efficiency.
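The on-demand vs flat-rate trade-off comes down to a break-even point on monthly scan volume. A minimal sketch, using hypothetical list prices (always check BigQuery's current pricing page before committing):

```python
# Illustrative break-even between BigQuery on-demand and flat-rate pricing.
# Both prices are assumptions for this sketch, not current list prices.
ON_DEMAND_PER_TB = 5.00       # $/TB scanned (hypothetical)
FLAT_RATE_MONTHLY = 2000.00   # $/month for a slot reservation (hypothetical)


def cheaper_plan(tb_scanned_per_month):
    """Return whichever pricing model is cheaper for a given scan volume."""
    on_demand_cost = tb_scanned_per_month * ON_DEMAND_PER_TB
    return "flat-rate" if FLAT_RATE_MONTHLY < on_demand_cost else "on-demand"


print(cheaper_plan(300))   # light scanning favors on-demand
print(cheaper_plan(500))   # heavy, steady scanning favors flat-rate
```

In practice the break-even is simply the flat-rate fee divided by the per-TB rate, so tracking scanned bytes per month tells you when a reservation starts paying for itself.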

Many expert GCP users and FinOps practitioners at scale recommend solving for how your cluster load behaves; the answer can be a combination of different reservations and commitments. If you need help, you can use Active Assist for rate optimization recommendations: it can suggest committed use discounts and help identify BigQuery slot reservations. Note that the tool is in its alpha stage, so be aware of that before acting on its recommendations in GCP.
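The "combination of reservations and commitments" idea can be sketched numerically: cover the steady baseline of your load with committed (cheaper) capacity and let spikes burst to on-demand. All slot rates below are hypothetical, chosen only so the committed rate is lower than the on-demand rate:

```python
# Sketch: split hourly slot demand into a committed baseline plus
# on-demand burst, then cost each part. Rates are hypothetical.
COMMITTED_HOURLY = 0.04   # $/slot-hour under commitment (hypothetical)
ON_DEMAND_HOURLY = 0.06   # $/slot-hour on demand (hypothetical)


def blended_cost(hourly_demand, baseline_slots):
    """Cost of covering `hourly_demand` with a fixed committed baseline."""
    committed = baseline_slots * COMMITTED_HOURLY * len(hourly_demand)
    burst = sum(max(d - baseline_slots, 0) for d in hourly_demand)
    return committed + burst * ON_DEMAND_HOURLY


demand = [100, 100, 400, 100]        # a spiky four-hour window
print(blended_cost(demand, 100))     # commit to the baseline, burst the spike
print(blended_cost(demand, 400))     # over-committing wastes committed spend
```

Committing to the baseline (100 slots) and bursting the spike is much cheaper here than committing to the peak, which is the intuition behind analyzing how your cluster load behaves before buying commitments.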