Sustainability in the cloud era
When we think about cloud computing, we immediately think about technology.
But have we ever stopped to consider how much energy an average cloud data center requires to operate, and what the environmental impact of running such huge data centers around the world actually is?
Data centers account for around 1% of the electricity consumed worldwide.
Data centers consume a lot of resources: electricity (to run the servers) and water (to cool them).
The more energy a data center consumes, the bigger its carbon footprint (the total amount of greenhouse gases generated by running it).
In the past couple of years, a new concept called cloud sustainability has emerged among professionals who work with cloud services and have high environmental awareness.
The idea behind it (from a cloud provider’s point of view) is to reach 100% renewable energy within a few years, replacing fuel-based electricity with wind and solar power.
All major cloud providers (AWS, Azure, and GCP) put a lot of effort into building new data centers powered by green energy, and into retrofitting existing data centers to lower their emissions as much as possible and use green energy as well.
To remain transparent to their customers, the major cloud providers have created carbon footprint tools:
· AWS customer carbon footprint tool
· Microsoft Sustainability Calculator
· GCP Carbon Footprint
· Cloud Carbon Footprint (Open source) tool
Indeed, most of the responsibility for keeping cloud data centers green lies with the cloud providers, since they build and maintain their data centers. But what is our responsibility as consumers?
As an example, AWS frames this through its shared responsibility model: the provider is responsible for the sustainability "of" the cloud, while customers are responsible for sustainability "in" the cloud.
How can we act as responsible cloud consumers?
Review your business requirements (compliance, latency, cost, services, and features), and give preference to regions with a low carbon footprint.
· AWS — What to Consider when Selecting a Region for your Workloads
· Carbon-free energy for Google Cloud regions
· Measuring greenhouse gas emissions in data centers: the environmental impact of cloud computing
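To make the region-selection advice above concrete, here is a minimal sketch of how carbon intensity can serve as a tie-breaker after compliance and latency have narrowed the candidate list. The intensity figures below are hypothetical placeholders; in practice you would take them from your provider's carbon footprint tool or published grid data.

```python
# Illustrative sketch: among regions that already satisfy compliance and
# latency requirements, prefer the one with the lowest grid carbon intensity.
# The gCO2e/kWh values below are made-up placeholders, NOT real figures.
CARBON_INTENSITY = {
    "eu-north-1": 30,       # hypothetical: a largely hydro-powered grid
    "us-east-1": 400,       # hypothetical
    "ap-southeast-1": 500,  # hypothetical
}

def pick_region(candidates, intensity=CARBON_INTENSITY):
    """Return the candidate region with the lowest carbon intensity.

    Regions missing from the table are treated as worst-case (infinity),
    so they are never preferred over a region with known data.
    """
    return min(candidates, key=lambda r: intensity.get(r, float("inf")))

# Compliance and latency narrow the list first; carbon breaks the tie.
print(pick_region(["us-east-1", "eu-north-1"]))  # eu-north-1
```

The point is not the tiny function itself but the ordering of concerns: sustainability enters the decision after the hard business constraints, not instead of them.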
Architecture design considerations
Use cloud-native design patterns:
· Microservices — use containers (and Kubernetes) to deploy your applications and leverage the scaling capabilities of the cloud
· Serverless — use serverless (or function as a service) whenever you can decouple your applications into small functions
· Use message queues as much as possible, to decouple your applications and lower the number of requests between the various services/components
· Use caching mechanisms to lower the number of queries to backend systems
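As a small illustration of the caching point above, here is a sketch using Python's standard-library memoization to keep repeated requests from hitting a backend system. `get_product` is a hypothetical stand-in for a real backend query; the counter exists only to show how many queries actually reach the backend.

```python
# Minimal sketch of the caching pattern: memoize backend lookups so that
# repeated identical requests are served from memory instead of re-querying
# the backend (fewer requests, less compute, less energy).
from functools import lru_cache

CALLS = {"backend": 0}  # counts how many queries actually hit the backend

@lru_cache(maxsize=1024)
def get_product(product_id: int) -> dict:
    """Hypothetical backend lookup; the first call per ID is cached."""
    CALLS["backend"] += 1
    return {"id": product_id, "name": f"product-{product_id}"}

for _ in range(100):
    get_product(42)        # 100 identical requests...

print(CALLS["backend"])    # ...but only 1 reaches the backend
```

In a distributed system the same idea is usually realized with a shared cache (such as a managed Redis or Memcached service) rather than in-process memoization, but the effect on backend load is the same.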
Embed the following as part of your infrastructure considerations:
· Right-sizing — when using VMs, always match the instance size to your application’s actual demands
· Use up-to-date hardware — when using VMs, always use the latest VM family types and the latest block storage type, to suit your application demands
· ARM-based processors — consider using ARM processors (such as AWS Graviton Processor, Azure Ampere Altra Arm-based processors, GCP Ampere Altra Arm processors, and more), whenever your application supports the ARM technology (for better performance and lower cost)
· Idle hardware — monitor and shut down (or even delete) unused or idle hardware (VMs, databases, etc.)
· GPU — use GPUs only for tasks that are considered more efficient than CPUs (such as machine learning, rendering, transcoding, etc.)
· Spot instances — use spot instances, whenever your application supports sudden interruptions
· Schedule automatic start and stop of VMs — use scheduling capabilities (such as AWS Instance scheduler, Azure Start/Stop VMs, GCP start and stop virtual machine (VM) instances, etc.) to control the behavior of your workload VMs
· Managed services — prefer PaaS or managed services (for databases, storage, load balancers, and more)
· Data lifecycle management — use object storage (or file storage) lifecycle policies to archive or remove unused or unnecessary data
· Auto-scaling — use the cloud built-in capabilities to scale horizontally according to your application load
· Content Delivery Network — use a CDN (such as Amazon CloudFront, Azure Content Delivery Network, Google Cloud CDN, etc.) to reduce the amount of customer traffic that reaches your publicly exposed origin workloads
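The "idle hardware" recommendation above can be sketched as a simple check: flag VMs whose average CPU utilization stays below a threshold, so they can be stopped, deleted, or right-sized. The sample data and threshold here are made up for illustration; in practice you would pull utilization metrics from your provider's monitoring service (CloudWatch, Azure Monitor, or Cloud Monitoring).

```python
# Illustrative idle-VM check over CPU utilization samples. All data below is
# hypothetical; real metrics would come from your cloud monitoring service.

IDLE_CPU_THRESHOLD = 5.0  # percent, averaged over the observation window

def find_idle_vms(cpu_samples: dict) -> list:
    """Return the IDs of VMs whose mean CPU utilization is below threshold.

    cpu_samples maps a VM identifier to a list of utilization samples (%).
    VMs with no samples are skipped rather than flagged.
    """
    return [
        vm_id
        for vm_id, samples in cpu_samples.items()
        if samples and sum(samples) / len(samples) < IDLE_CPU_THRESHOLD
    ]

# Hypothetical last-24h samples per VM
metrics = {
    "web-1": [35.0, 42.5, 51.0],   # busy, leave alone
    "batch-old": [1.2, 0.8, 1.5],  # idle candidate: stop or delete
}
print(find_idle_vms(metrics))      # ['batch-old']
```

A scheduled job running a check like this, feeding into the start/stop automation mentioned above, turns the idle-hardware advice from a one-off cleanup into a continuous practice.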
Sustainability and green computing are here to stay.
Although the large demand for cloud services has a huge environmental impact, I strongly believe that using cloud services is much more environmentally friendly than running workloads in a legacy data center, for the following reasons:
· Efficient hardware utilization (shared infrastructure drives utilization far higher than typical on-premises servers achieve)
· Fast hardware replacement (due to high utilization)
· Better energy use (high use of renewable energy sources to support the electricity requirements)
I advise all cloud customers to place sustainability higher among their design considerations.
Additional reading materials
· AWS Well-Architected Framework — Sustainability Pillar
· Microsoft Azure Well-Architected Framework — Sustainability
· Google Cloud — Design for environmental sustainability
About the Author
Eyal Estrin is a cloud and information security architect, the owner of the blog Security & Cloud 24/7 and the author of the book Cloud Security Handbook, with more than 20 years in the IT industry.
You can connect with him on Twitter and LinkedIn.