Sergey Pronin, Product Owner at Percona, explains what issues arise with enterprise storage management and which solutions can address them.
An increasing number of companies are adopting containers to deploy their implementations. Containers simplify the software delivery process as they make it easier to move those application images around. You can use Kubernetes to control those containers and automate processes around them. This makes it simpler to run containers and create the resources they need. However, automation doesn’t mean you can just ‘set and forget’ those containers. Without close management, costs can spiral.
When you implement containers, costs are incurred in two main areas: compute and storage. As you add more containers to cope with demand, compute and storage are automatically provisioned to each one. Kubernetes does this automatically to handle failure events or spikes in demand. Public cloud services then bill for all the storage that gets provisioned, rather than for how much you actually use.
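As an illustration of why provisioned storage matters, consider a minimal PersistentVolumeClaim (the name and size below are hypothetical): the cloud provider bills for the full amount requested, whether or not the application ever fills it.

```yaml
# Hypothetical PersistentVolumeClaim: the provider bills for the
# full 50Gi provisioned here, regardless of how much data is written.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data        # assumed claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi     # billed in full once provisioned
```

If many pods each claim a volume like this via a StatefulSet template, the billed storage multiplies with every replica, even when utilisation stays low.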
Many developers see an increase in cloud costs as they expand their use of containers. It’s therefore important to track the resources you use to avoid unexpected costs. This involves getting the right data on your containers, as well as understanding how much those containers use over time.
To get this data you need to implement observability tools that can gather information from each container and deliver it back to a central repository. Without this data, it is difficult to make the right decisions to avoid waste. For example, Prometheus is an open source project that can deliver that information centrally, and you can use a tool like Percona Monitoring and Management to create a dashboard to analyse the data.
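As a sketch of the kind of information such a pipeline collects, the PromQL query below reports per-pod memory usage, assuming the standard cAdvisor metrics that the kubelet exposes are being scraped:

```promql
# Working-set memory per pod, using the cAdvisor metric exposed by
# the kubelet; the empty-container filter drops cgroup aggregates.
sum(container_memory_working_set_bytes{container!=""}) by (namespace, pod)
```

A query like this, plotted on a dashboard, is the raw material for the utilisation analysis described below.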
Start by auditing your current position: create a dashboard using the data from your containers and measure how your clusters perform and what takes place over time. This is useful for two reasons. Firstly, it shows you how healthy your applications are in general. Secondly, you can compare how many resources your clusters are allocated at the start with how much they actually use. Depending on the utilisation you see in practice, you could reduce the amount of storage provisioned per container and cut your costs.
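One way to sketch the provisioned-versus-used comparison for storage is with the volume statistics the kubelet publishes for persistent volume claims, assuming those metrics are scraped into Prometheus:

```promql
# Fraction of provisioned PVC capacity actually in use, per claim.
# Both series are exposed by the kubelet for mounted PVCs.
kubelet_volume_stats_used_bytes
  / kubelet_volume_stats_capacity_bytes
```

A ratio well below 1 for a claim over a sustained period suggests the volume was overprovisioned and could be resized down.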
Overprovisioning and wastage can occur in many places throughout your containers. Create a similar visualisation for CPU and memory to find out where spending can be reduced. If utilisation rates are low, look at other factors in your applications and adjust the setup accordingly, such as running your nodes with more memory and less CPU.
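The CPU version of the same comparison can be sketched as the ratio of actual usage to requested capacity per namespace. This assumes kube-state-metrics is deployed to publish the resource-request series; the metric and label names below follow its current naming scheme:

```promql
# Actual CPU used vs CPU requested, per namespace.
# container_cpu_usage_seconds_total comes from cAdvisor;
# kube_pod_container_resource_requests requires kube-state-metrics.
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (namespace)
  / sum(kube_pod_container_resource_requests{resource="cpu"}) by (namespace)
```

Swapping `cpu` for `memory` (and the usage series for `container_memory_working_set_bytes`) gives the equivalent memory view.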
After this high-level analysis, you can also look at your namespaces and tune each request. This should ensure each container gets the appropriate resources for its workload, reducing waste and costs.
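Per-namespace tuning can be sketched with a LimitRange, which applies default requests and limits to any container in the namespace that does not declare its own (the namespace name and the figures below are assumptions to be replaced with values from your own utilisation data):

```yaml
# Hypothetical LimitRange: containers in this namespace that omit
# their own requests/limits inherit these defaults.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-resources
  namespace: my-app          # assumed namespace name
spec:
  limits:
    - type: Container
      defaultRequest:        # applied when a container sets no request
        cpu: 100m
        memory: 128Mi
      default:               # applied when a container sets no limit
        cpu: 500m
        memory: 512Mi
```

Setting defaults like this keeps forgotten or hastily written deployments from silently claiming far more than they need.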