7 Tips for Cutting Down Your AWS Kubernetes Bill
Running your Kubernetes workloads in Amazon Web Services (AWS) isn’t a walk in the park, and neither is controlling costs. Did you know that AWS has over 150 EC2 instance types and sizes available?
If looking at your AWS Kubernetes bill makes you squirm each month, you’re not alone. According to Flexera, companies go over their cloud budgets by 23% on average.
To control your budget better and drive down those cloud costs, here are seven AWS tips that work whether or not you run your clusters on EKS.
1. Watch Out for These Pricing Traps
On-Demand Instances
Despite the pay-as-you-go model, on-demand instances are the most expensive option AWS offers. They also make controlling your budget harder. Use them only for unpredictable workloads with fluctuating traffic spikes.
Reserved Instances
Buying capacity up front with a huge discount sounds great — but you have to commit to a given instance or family without the ability to change later on. Will these resources make sense for your company in one or three years? There’s no way to tell. You get no flexibility in scaling or handling traffic seasonality. You’re also running the risk of locking yourself in with the cloud vendor.
Savings Plans
In this model, you commit to a consistent amount of compute usage, measured as an hourly spend, for a one- or three-year term. When your requirements change, you’ll either be forced to commit to even more or end up with wasted capacity.
2. Decipher Your Cloud Bill
Let’s be frank: your cloud bill is bound to be long and hard to understand.
Every service uses a specific billing metric. In Amazon S3, for example, some operations are billed per request while storage is billed per GB. Truthfully, the bill alone doesn’t give you enough information to fully understand your usage and analyze your costs. Instead, look into the AWS console. The AWS Billing and Cost Management Dashboard is helpful, but for a more granular view of your costs, go to Cost Explorer: this is your best AWS resource on costs.
Tip: To make billing more transparent, group and report on costs by specific attributes (such as region or service).
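If you’d rather pull those groupings programmatically than click through the console, the Cost Explorer API exposes the same data. Below is a minimal sketch in Python with boto3, assuming Cost Explorer is enabled for the account and AWS credentials are configured; swap the SERVICE dimension for REGION, LINKED_ACCOUNT, or whichever attribute you want to group by.

```python
# A minimal sketch: pull last month's costs grouped by service via the
# Cost Explorer API (assumes boto3 is installed, credentials are configured,
# and Cost Explorer is enabled for the account).
import boto3
from datetime import date, timedelta

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer is served from us-east-1

end = date.today().replace(day=1)                  # first day of the current month
start = (end - timedelta(days=1)).replace(day=1)   # first day of the previous month

response = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print one line per service with its unblended cost for the month
for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${amount:,.2f}")
```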
3. Budget for the Cloud
AWS offers budgeting tools that help control what a project or a team spends. If you don’t use these controls, you risk overrunning your budget or becoming another startup that nearly went broke after a single day of testing in the cloud.
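To see what that looks like in practice, here’s a minimal sketch of creating a monthly cost budget with an email alert at 80% of the limit through the AWS Budgets API, again via boto3. The account ID, budget amount, and email address are placeholders.

```python
# A minimal sketch: create a monthly cost budget with an email alert at 80%
# of the limit via the AWS Budgets API. The account ID, amount, and email
# address are placeholders -- replace them with your own values.
import boto3

budgets = boto3.client("budgets", region_name="us-east-1")

budgets.create_budget(
    AccountId="123456789012",  # placeholder account ID
    Budget={
        "BudgetName": "k8s-monthly-budget",
        "BudgetLimit": {"Amount": "5000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,           # alert at 80% of the budget
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "platform-team@example.com"}
            ],
        }
    ],
)
```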
When budgeting for the cloud, be aware of these common budget overrun causes:
- Lack of knowledge about system requirements upfront
- Wrong assumptions about how the system features are going to work and scale
- Discovering expensive requirements after formal discovery is over
- Lack of Kubernetes autoscaling design in your applications
- Poor provisioning logic in your IaC that lets infrastructure spin out of control
- Using serverless functions without considering parallel scaling
- Badly configured notifications and alerts
- No attention paid to the cloud budget (aka nobody watching)
4. Handle Multiple Teams in One AWS Account
It’s common for multiple teams or departments to contribute to a single AWS bill. Use the mechanisms AWS provides for categorizing expenses by accounts, organizations, or projects. This is the best method to keep team spending under control.
- AWS Organizations: this service lets you centrally manage and govern your environment as you scale. Start by creating a new AWS account, then organize your accounts into groups that match your teams and workflows. Finally, set budgets and policies for these account groups.
- Tagging resources: use resource tags in Cost Explorer. Create tags for every team, environment, application, and service, all the way down to individual features. Activate them as cost allocation tags in the Billing console and generate reports grouped by those tags. (Remember that some AWS services, components, and resources can’t be tagged.) A sketch of querying costs by tag follows this list.
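Once the tags are activated, per-team reporting can be scripted as well. Here’s a minimal sketch that groups last month’s spend by a hypothetical team cost allocation tag, assuming Cost Explorer is enabled and boto3 credentials are configured.

```python
# A minimal sketch: report last month's costs per team using a cost
# allocation tag (here a hypothetical "team" tag that has been activated
# in the Billing console).
import boto3
from datetime import date, timedelta

ce = boto3.client("ce", region_name="us-east-1")

end = date.today().replace(day=1)
start = (end - timedelta(days=1)).replace(day=1)

response = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    tag_value = group["Keys"][0]   # e.g. "team$payments"; an empty value means untagged spend
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{tag_value}: ${amount:,.2f}")
```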
5. Forecast Cloud Costs
Your AWS bill will fluctuate based on usage, so forecasting your expenses is extra painful. But understanding your future requirements is critical for keeping costs in check.
Here are three useful forecasting methods:
- Analyzing usage reports: you need clear visibility, so start by monitoring your resource usage reports regularly. Set up relevant alerts so you hear about deviations early.
- Modeling your cloud costs: calculate the total cost of ownership of your cloud resources, analyze AWS pricing models, and plan future capacity requirements. Measure application- and workload-specific costs to build a cost plan at that level. Aggregate all of this data in one place so it’s easier to understand.
- Identifying peak resource usage scenarios: analyze your usage data periodically and generate reports to detect these scenarios. Use other data sources, too, such as seasonal customer demand patterns. If they correlate with your peak resource usage, you’ll be able to prepare for them in advance. (A sketch of pulling a cost forecast from Cost Explorer follows this list.)
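Cost Explorer can also produce a forecast for you once it has enough usage history. Here’s a minimal sketch that requests a 30-day cost forecast; the prediction interval level is an illustrative choice.

```python
# A minimal sketch: ask Cost Explorer for a cost forecast over the next
# 30 days (assumes Cost Explorer is enabled and there is enough usage
# history for AWS to produce a forecast).
import boto3
from datetime import date, timedelta

ce = boto3.client("ce", region_name="us-east-1")

start = date.today()
end = start + timedelta(days=30)

forecast = ce.get_cost_forecast(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Metric="UNBLENDED_COST",
    Granularity="MONTHLY",
    PredictionIntervalLevel=80,   # 80% prediction interval
)

total = forecast["Total"]
print(f"Forecast for the next 30 days: {total['Amount']} {total['Unit']}")
for bucket in forecast["ForecastResultsByTime"]:
    print(bucket["TimePeriod"]["Start"], bucket["MeanValue"])
```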
6. Choose the Right VM Instance Type
Compute costs are the biggest item on your cloud bill, so it’s smart to take some time when picking the EC2 instances where you’ll be running your clusters. Here are some tips to guide your choices:
- Start by defining your requirements and ranking what your workload needs across these dimensions:
- CPU count and architecture
- Memory
- Storage
- Network
- See an affordable instance? Think twice: if your workload is memory-intensive, an underpowered instance may deliver poor performance, hurting your customers and your brand reputation.
- Consider the difference between CPU- and GPU-dense instances. GPU instances will give you better results if you’re running workloads like machine learning training.
- AWS provides 150+ instance types with various combinations of CPU, memory, storage, and networking capacity. It’s hard to tell which one is right if you can’t compare them. The best way to compare is benchmarking: run the same workload on different machine types and measure its performance. (A sketch for shortlisting candidate instance types follows this list.)
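To turn those 150+ options into a short benchmarking list, you can filter the catalog through the EC2 API first. Below is a minimal sketch that shortlists current-generation x86_64 types with 4 vCPUs and at least 16 GiB of memory; those requirement values are illustrative placeholders.

```python
# A minimal sketch: shortlist current-generation x86_64 instance types with
# exactly 4 vCPUs and at least 16 GiB of memory as benchmarking candidates.
# The vCPU and memory requirements here are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

paginator = ec2.get_paginator("describe_instance_types")
pages = paginator.paginate(
    Filters=[
        {"Name": "current-generation", "Values": ["true"]},
        {"Name": "processor-info.supported-architecture", "Values": ["x86_64"]},
        {"Name": "vcpu-info.default-vcpus", "Values": ["4"]},
    ]
)

candidates = []
for page in pages:
    for itype in page["InstanceTypes"]:
        mem_gib = itype["MemoryInfo"]["SizeInMiB"] / 1024
        if mem_gib >= 16:
            candidates.append((itype["InstanceType"], mem_gib))

for name, mem_gib in sorted(candidates):
    print(f"{name}: 4 vCPUs, {mem_gib:.0f} GiB")
```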
7. Take Advantage of Spot Instances
Spot instances can cut your compute costs by up to 90% compared to on-demand pricing. But before you start using them, you need to know whether spot instances match your workload.
Here’s how to tell if your Kubernetes workload is spot-ready. Ask yourself:
- How much time does it need to finish the job?
- Is your workload mission- and time-critical?
- Can it handle interruptions gracefully?
- Is it tightly coupled across instance nodes?
- Are you prepared to move your workload quickly when AWS pulls the plug?
Consider applying these spot instance tips:
- When choosing between spot instances, go for the slightly less popular ones, since they’re less likely to get interrupted. You can check an instance’s frequency of interruption in the AWS Spot Instance Advisor.
- Set the maximum price for the spot instance you pick at the on-demand price level. Otherwise, your workload might get interrupted when the spot price rises above the maximum you set. (A sketch for checking recent spot prices follows this list.)
- Set up groups of spot instances (AWS Spot Fleet) to increase your chances of securing capacity, since a fleet lets you request multiple instance types simultaneously.
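Before settling on a maximum price, it helps to look at what the spot market has actually been charging. Here’s a minimal sketch that pulls recent spot price history for a few candidate instance types; the instance types and region are illustrative placeholders.

```python
# A minimal sketch: look at recent spot prices for a few candidate instance
# types to sanity-check the maximum price you plan to set. The instance
# types and region below are illustrative placeholders.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_spot_price_history(
    InstanceTypes=["m5.xlarge", "m5a.xlarge", "m6i.xlarge"],
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=6),
    MaxResults=50,
)

# Each entry shows the spot price for one instance type in one availability zone
for entry in response["SpotPriceHistory"]:
    print(
        entry["AvailabilityZone"],
        entry["InstanceType"],
        entry["SpotPrice"],
        entry["Timestamp"].isoformat(),
    )
```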
Wrapping Up: Drive Down Your Cloud Costs
All seven tips apply no matter your setup — whether you handle Kubernetes clusters on your own or use AWS EKS.
But to seriously drive down cloud costs, you need an intelligent platform capable of selecting the right instance size and type, autoscaling your setup as needed, and managing infrastructure dependencies for you.
This is what we’re working on at CAST AI. We’re launching our EKS optimization tool soon, so stay tuned.
This article was updated on December 14, 2022.