AWS provides a plethora of services to meet almost any technical requirement. Despite the pay-as-you-go model, some of these services will leave a sizeable hole in your pocket each month if they are not managed properly. Having implemented AWS services in various projects over the last few years, I wanted to share a few tricks of the trade for some of the more expensive services, and show how a few adjustments can drastically lower your costs.
Recently we worked with a client to deploy and manage a performance testing suite capable of generating load for up to 7 million users. Using the methods described below, we were able to reduce costs by up to 90% compared to the previous month.
The two AWS services I will be focusing on in this article are:
- Elastic Compute Cloud (EC2)
- File storage services: EFS and FSx for Windows File Server.
EC2 doesn’t have to be expensive
Two major aspects drive the cost of EC2: the instance type and the usage time.
It goes without saying that you should only buy what you need if you wish to save any money. The same applies to EC2 instances: choosing an instance that meets your needs, and no more, is imperative. If you already know the exact instance which suits your purpose you can skip this section, but if you don’t, avoid guesswork and let the data tell you what you need.
AWS provides various tools for monitoring the internals of your instance in detail. You can use the steps below to choose the right instance for you.
- Choose a reasonably large instance and set it up.
- Install the CloudWatch Agent on your instance using this guide: CloudWatch agent.
- Configure the CloudWatch Agent to collect metrics. Make sure to include CPU and memory usage, as these will be the main metrics used to select the instance; you can also monitor other resources, such as network performance, if they matter to you.
- Hit your instance with the full load a few times.
- Monitor the metrics in CloudWatch. These will tell you whether your machine had resources left over even on full load or whether it was struggling.
- Change the size of the machine accordingly: ample idle resources mean you can drop to a smaller instance; not enough resources mean you should move up to a larger one.
- You may have to repeat steps 4 to 6 a few times to find the right instance. You do not need to set up a new instance each time: just stop your instance (make sure not to terminate it!) and change the instance type in its settings, as in the sketch below.
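As a rough illustration of steps 5 to 7, here is a minimal boto3 sketch that pulls the CPU and memory figures from CloudWatch and then resizes a stopped instance. The instance ID and target instance type are placeholders, and the CWAgent namespace and dimensions for the memory metric are assumptions that depend on how you configured the agent.

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")
ec2 = boto3.client("ec2")

instance_id = "i-0123456789abcdef0"   # placeholder instance ID


def average_and_peak(namespace, metric_name, dimensions, hours=3):
    """Return the average and maximum of a metric over the last few hours."""
    stats = cloudwatch.get_metric_statistics(
        Namespace=namespace,
        MetricName=metric_name,
        Dimensions=dimensions,
        StartTime=datetime.utcnow() - timedelta(hours=hours),
        EndTime=datetime.utcnow(),
        Period=300,                        # 5-minute buckets
        Statistics=["Average", "Maximum"],
    )
    points = stats["Datapoints"]
    avg = sum(p["Average"] for p in points) / len(points) if points else 0.0
    peak = max((p["Maximum"] for p in points), default=0.0)
    return avg, peak


# CPU comes from the built-in AWS/EC2 namespace; memory is published by the
# CloudWatch Agent (CWAgent namespace and InstanceId dimension assumed here).
cpu_avg, cpu_peak = average_and_peak(
    "AWS/EC2", "CPUUtilization", [{"Name": "InstanceId", "Value": instance_id}]
)
mem_avg, mem_peak = average_and_peak(
    "CWAgent", "mem_used_percent", [{"Name": "InstanceId", "Value": instance_id}]
)
print(f"CPU avg {cpu_avg:.1f}%, peak {cpu_peak:.1f}%")
print(f"Mem avg {mem_avg:.1f}%, peak {mem_peak:.1f}%")

# If there is plenty of headroom, stop (not terminate!) the instance,
# change its type, and start it again.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m5.large"},    # example target size
)
ec2.start_instances(InstanceIds=[instance_id])
```

Run the load test again after each resize and compare the new numbers before settling on a size.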
Now, on to managing the usage of the machines. EC2 has various pricing models, and the approach below helps with all of them, not only saving money but also providing infrastructure for automatically turning resources on and off.
The tool we use for this is developed by AWS: the AWS Instance Scheduler. With it, schedules can be created for your EC2 and RDS instances, so instances not in use can be stopped and started again when capacity is needed, for example outside of business hours.
The entire guide on how to set up this solution is available here.
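Once the scheduler solution is deployed, instances are attached to a schedule by tagging them. A minimal sketch, assuming the solution's default "Schedule" tag key, a schedule named "office-hours" defined in its configuration, and placeholder instance IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs for the instances that should follow the schedule.
instance_ids = ["i-0123456789abcdef0", "i-0fedcba9876543210"]

# The Instance Scheduler picks up instances by tag; "Schedule" is the
# solution's default tag key, and "office-hours" is assumed to be a
# schedule defined when the solution was deployed.
ec2.create_tags(
    Resources=instance_ids,
    Tags=[{"Key": "Schedule", "Value": "office-hours"}],
)
```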
Storage is cheap, but access comes at a price
AWS provides a few file storage solutions, including EFS and FSx. The real cost of these is dictated not by how much you wish to store but by the speed at which you wish to access your data. The difference is clear in the pricing tables for both EFS and FSx for Windows File Server: storing a gigabyte of data for a month costs a fraction of a single megabyte per second of provisioned throughput for the same month.
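To make that imbalance concrete, here is a back-of-the-envelope comparison. The prices and quantities are purely illustrative placeholders, not current AWS list prices; check the pricing pages for your region.

```python
# Illustrative placeholder prices only (not current AWS list prices).
PRICE_PER_GB_MONTH = 0.30         # storage, $ per GB-month
PRICE_PER_MBPS_MONTH = 6.00       # provisioned throughput, $ per (MB/s)-month

storage_gb = 500                  # hypothetical data volume
throughput_mbps = 100             # hypothetical provisioned throughput

print(f"Storage:    ${storage_gb * PRICE_PER_GB_MONTH:.2f}/month")        # $150.00
print(f"Throughput: ${throughput_mbps * PRICE_PER_MBPS_MONTH:.2f}/month")  # $600.00
```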
To choose the correct throughput capacity, the same principles used when choosing an instance type apply:
- Choose a reasonable throughput capacity
- Hit your file storage with full load
- Monitor the metrics in CloudWatch
- Change the throughput capacity accordingly (sketches for the last two steps follow below)
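For the monitoring step, the byte-count metrics that EFS and FSx publish to CloudWatch can be turned into an approximate throughput figure. A minimal sketch, assuming the AWS/EFS TotalIOBytes and AWS/FSx DataReadBytes metrics and placeholder file system IDs:

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")


def avg_throughput_mibps(namespace, metric_name, file_system_id, hours=3):
    """Approximate average throughput (MiB/s) from a byte-count metric."""
    period = 300  # 5-minute buckets
    stats = cloudwatch.get_metric_statistics(
        Namespace=namespace,
        MetricName=metric_name,
        Dimensions=[{"Name": "FileSystemId", "Value": file_system_id}],
        StartTime=datetime.utcnow() - timedelta(hours=hours),
        EndTime=datetime.utcnow(),
        Period=period,
        Statistics=["Sum"],
    )
    sums = [p["Sum"] for p in stats["Datapoints"]]
    if not sums:
        return 0.0
    # Convert total bytes per bucket into MiB per second.
    return (sum(sums) / len(sums)) / period / (1024 * 1024)


# EFS publishes TotalIOBytes under AWS/EFS; FSx for Windows publishes
# DataReadBytes / DataWriteBytes under AWS/FSx (IDs below are placeholders).
print(avg_throughput_mibps("AWS/EFS", "TotalIOBytes", "fs-0123456789abcdef0"))
print(avg_throughput_mibps("AWS/FSx", "DataReadBytes", "fs-0abcdef1234567890"))
```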
However, one important point: do not provision the same throughput capacity as the peak rate you observe in CloudWatch. Both storage solutions allow throughput to burst above the provisioned capacity for short periods, so the provisioned throughput should sit close to the rate at which the file storage is used most of the time, with bursting covering the peaks.
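Once you settle on a figure, the provisioned throughput can be changed in place. A sketch using the EFS and FSx update_file_system calls, with placeholder file system IDs and example values:

```python
import boto3

efs = boto3.client("efs")
fsx = boto3.client("fsx")

# Pin EFS provisioned throughput just above the sustained rate measured
# earlier, relying on bursting to absorb short peaks (example values).
efs.update_file_system(
    FileSystemId="fs-0123456789abcdef0",
    ThroughputMode="provisioned",
    ProvisionedThroughputInMibps=64,
)

# FSx for Windows File Server: throughput capacity must be one of the
# supported sizes (e.g. 8, 16, 32, 64 ... MB/s).
fsx.update_file_system(
    FileSystemId="fs-0abcdef1234567890",
    WindowsConfiguration={"ThroughputCapacity": 32},
)
```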
Details of the burst capacity can be found in the documentation for each service.
So those are two quick-fire ways to drastically reduce your costs when working with AWS!