Circle Opinion

Using cloud tooling to measure efficiency & embed into the lifecycle process

Lee Griffiths


This piece rounds off our three-part blog series exploring the topic of Green IT. Part 1 explored what Green IT is, its importance and why businesses should adopt it. Part 2 explored another aspect of Green IT by looking into green cloud computing. To round off the series, I will now discuss cloud tooling.

What is cloud tooling & what are examples of cloud tools?

Cloud tooling allows you to manage hybrid and multi-cloud services and resources. A variety of analytical tools are available to help you determine the carbon footprint of your cloud environment. Notable examples include the AWS Customer Carbon Footprint Tool, aws-fpga and turbostat.

AWS Customer Carbon Footprint Tool

The AWS Customer Carbon Footprint Tool is one of the tools Amazon supplies to its users. While it is welcome that AWS supplies a carbon footprint tool at all, it unfortunately does not provide sufficient granularity to measure carbon emissions on a per-service or per-hour basis, which makes it harder to identify areas for potential optimisation. Its data is also affected by the percentage of renewable energy in the grid at the time of recording, and its emission figures vary with the region you are hosting in and any changes in the energy supply on Amazon's end.

Aws-fpga & turbostat

While these tools allow for near real-time collection of data, they only run on certain systems: turbostat requires a Linux OS with access to the processor's power counters, and aws-fpga targets FPGA-equipped EC2 instances. They can measure power draw, which can be converted to energy in kWh and compared against the AWS Customer Carbon Footprint Tool for verification. With the help of some pipelines, you can also use them to build a real-time analysis tool that measures the efficiency of your designed solutions.
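As a minimal sketch of that conversion step, the snippet below turns a series of power samples (in watts, such as the package-power readings turbostat reports) into energy in kWh and an indicative CO2e figure. The sample interval and grid carbon intensity are assumed illustrative values, not official AWS numbers:

```python
# Convert power samples (watts) from a tool such as turbostat into
# energy (kWh) and an indicative CO2e estimate.
# The interval and grid-intensity figures below are assumptions.

SAMPLE_INTERVAL_S = 5            # assumed seconds between samples
GRID_INTENSITY_KG_PER_KWH = 0.4  # assumed grid carbon intensity

def energy_kwh(watt_samples, interval_s=SAMPLE_INTERVAL_S):
    """Sum power samples into energy: watts * seconds = joules."""
    joules = sum(watt_samples) * interval_s
    return joules / 3_600_000    # 3.6 million joules per kWh

def co2_kg(kwh, intensity=GRID_INTENSITY_KG_PER_KWH):
    """Scale energy by the carbon intensity of the supply."""
    return kwh * intensity

samples = [42.0, 45.5, 44.1, 43.2]  # example power readings in watts
kwh = energy_kwh(samples)
print(f"{kwh:.6f} kWh, ~{co2_kg(kwh):.6f} kg CO2e")
```

A longer-running version of this, fed by a pipeline from turbostat's output, is the basis of the real-time analysis approach described above.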

What is efficiency & what metrics can be used to track energy efficiency?

Energy Use/Useful work done

One metric for tracking energy efficiency is energy use per unit of useful work done. Energy use is one of the clearest metrics for showing the cost of running a system, and it can be converted to greenhouse gas emissions if the nature of the power supply is known. This metric can also capture the other effects that energy sources have on the environment, such as hydropower changing how rivers flow, and the cost of the additional infrastructure needed to buffer energy delivery.

Useful work done in an hour

The old saying "time is money" applies to this rating system, where only speed counts. Intensive algorithms push processors harder, which increases the energy requirements of the chip and the heat it releases. The hotter the chip gets, the harder the cooling systems must work, and cooling itself requires energy. In data-centre settings, cooling can account for a significant percentage of the energy budget, yet it will not always show up in device-usage statistics, as separate systems are often responsible for cooling thousands of server racks.

Hardware utilisation

Another measure of efficiency is hardware utilisation. Given the growth of the cloud and our increasing reliance on the internet, it is worth considering how these systems operate: typically large racks of thin computers with high processing-power density, where cooling is managed at both the machine and device level.

The consensus seems to be that running your services on scalable VMs yields the greatest power reduction, because the unused computing power of the physical servers can be distributed to other services. The energy consumed by a device is then split between the VMs operating on it. There have been great leaps in improving the scalability of VMs across physical systems and in enhancing their time and energy efficiency, reducing the overhead of this approach. Over the last decade, cloud providers have put vast effort into these processes to reduce the hardware they need.

Cost per hour

Efficiency defined as cost per hour can be read from AWS user panels and may be convertible to an estimate of power usage. However, it is greatly affected by market prices, so it should not be relied upon. For green cloud computing, this is the least effective way to gauge efficiency, as costs are decoupled from environmental impact.

How does CACI define and measure efficiency?

We measure efficiency by focusing on the following:

kW power draw (at the VM level)

time per action (the speed of the algorithm compared with its theoretical best-case complexity in big O notation, where possible).
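Combining the two metrics above gives an energy-per-action figure: average power draw multiplied by the time one action takes. A minimal sketch, with assumed illustrative values:

```python
# Energy attributable to one action = average power (W) * time (s) = joules.
# Both input figures below are assumed for illustration.

def energy_per_action_j(avg_power_w: float, seconds_per_action: float) -> float:
    """Energy in joules consumed by a single action."""
    return avg_power_w * seconds_per_action

vm_power_w = 35.0         # assumed average VM-level power draw
time_per_action_s = 0.02  # measured time for one action
print(f"{energy_per_action_j(vm_power_w, time_per_action_s):.2f} J per action")
```

Tracking this figure per release makes regressions visible: if a change makes an action slower or more power-hungry, the joules-per-action number rises.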

Embedding cloud tools into the software development lifecycle (SDLC) process

During the software development lifecycle (SDLC), we often re-evaluate efficiencies in performance and development. However, going green is not always a prominent consideration within this lifecycle. There are, in fact, several ways to embed green methodologies and practices at crucial stages of the SDLC to enhance sustainability and environmental friendliness.

Step 1: Planning

The first step in the embedding process should be setting clear goals and determining the smallest carbon footprint your project could achieve. This must remain a driving force in all decision-making throughout the software development lifecycle, not just at the initial planning stage.

A critical step in any project is choosing the technical tooling and infrastructure, from the cloud provider to the hardware and programming language. When choosing a language, you should consider existing team knowledge, the speed of the language and its suitability. You should also assess how energy efficient the language is: research into the most popular languages has demonstrated a relatively close link between speed and energy efficiency.

Step 2: Development and testing

At CACI, we conduct several application tests, from profile testing for efficiency to bottlenecking. Energy efficiency should also be identified as an area to test and optimise. We recommend the following tooling, which can be used in various environments to help you gather and process energy usage data:

Intel PowerLog

Using these tools, certain components can be tested to see which has the highest energy consumption. When linked with an algorithmic analysis, this can create a cohesive picture of where the priority and optimisation focus should be.

Calculating an algorithm's efficiency:

When testing an algorithm's efficiency, you must rely on a comparative analysis under the same external factors and hardware. For that, you should use useful work done per unit of energy as the metric. Energy efficiency can be defined as:

EnergyEfficiency = n / UsedEnergy

Where ‘n’ is useful work done.

As an example, with a sorting algorithm as the subject, we would define useful work done as the number of items sorted:

Energy Efficiency = SortedItems / UsedEnergy
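The formula above can be sketched as a comparative check between two runs on the same hardware. The energy figures here are assumed illustrative values, e.g. taken from power-monitoring logs:

```python
# EnergyEfficiency = SortedItems / UsedEnergy, applied comparatively:
# same workload, same hardware, two implementations. Energy figures
# below are assumed illustrative readings in joules.

def energy_efficiency(sorted_items: int, used_energy_j: float) -> float:
    """Useful work (items sorted) per joule of energy consumed."""
    return sorted_items / used_energy_j

run_a = energy_efficiency(1_000_000, 250.0)  # implementation A
run_b = energy_efficiency(1_000_000, 400.0)  # implementation B
print(run_a, run_b)   # higher = more items sorted per joule
assert run_a > run_b  # A is the more energy-efficient run here
```

Because the metric is a ratio, it only supports relative conclusions: it tells you which run was more efficient, not whether either is efficient in absolute terms.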

Step 3: Deployment

Most modern applications are deployed through cloud tooling. Ensuring the adoption of a green-focused server architecture is an integral step in lowering your energy consumption and carbon footprint.

Virtualising your application will lower the number of physical servers needed and, as a result, lower emissions. Implementing good scaling principles, particularly scaling services down when demand is lower, will not only save you money but also lower emissions. Finally, serverless computing, such as AWS Lambda or Azure Functions, efficiently shares infrastructure by running functions only on demand. As billing is determined by execution time, an optimised application will cut not only costs but also energy.
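That link between execution time, cost and energy can be sketched using the GB-second billing model that AWS Lambda uses. The price per GB-second below is an assumed illustrative figure (it varies by region):

```python
# Serverless billing is tied to execution time, so shaving duration cuts
# both cost and energy. Sketch of the GB-second model used by AWS Lambda;
# the price constant is an assumed illustrative figure.

PRICE_PER_GB_SECOND = 0.0000166667  # assumed; varies by region

def invocation_cost(memory_mb: int, duration_ms: float,
                    price: float = PRICE_PER_GB_SECOND) -> float:
    """Cost of one invocation: memory (GB) * duration (s) * unit price."""
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * price

before = invocation_cost(512, 800)  # unoptimised handler, 800 ms
after = invocation_cost(512, 200)   # same handler after optimisation
print(f"cost per invocation: {before:.10f} -> {after:.10f}")
```

Cutting duration from 800 ms to 200 ms divides the per-invocation cost by four, and, since the infrastructure only runs while the function does, the energy attributable to the invocation falls in step.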


Learn more about how you can leverage cloud tooling to strategically plan for your own SDLC process by contacting us today.
