31 May 2024

Six ways to modernise apps in AWS to make them more sustainable

Migrating and modernising your applications with AWS can help you hit your sustainability targets, as well as unlocking a range of business benefits.

In this article we’re talking about how you can modernise your applications post-migration to maximise the benefits you get from the cloud, including sustainability. 

Ideally, in any cloud migration where modernisation is a key objective, you will do as much of that modernisation during the migration as possible. But, of course, that's not always feasible. Some applications are too complex to fully modernise in one go, and others aren't important enough to tackle in the timescales of your project. All of which means that, once the migration proper is completed, you will have further opportunities to modernise your applications and make even more headway on your sustainability targets.

In this article, we’ll identify and explain some of the key technologies that you can leverage once you’re in the cloud. We’ll also be concluding the story of our mostly-fictional-but-terribly-realistic company who has partnered with Claranet to migrate its estate into AWS, to show you examples of how what we’re talking about works in real life.

What does modernising apps mean?

In short, running an app in AWS means you can run it in different ways compared to how you would run that app on an on-premises server, or even how you'd run it on a private cloud. You have access to new services and pre-built technologies that you can incorporate into your apps, which can achieve the following things:

  • They can operate faster, giving users a better experience (be those your employees, customers, or partners).
  • They can be more resilient to outages or other problems, keeping your business running when previously things might have ground to a halt.
  • They can operate more efficiently, using computing resources only when needed.

That last point especially is what makes modernising apps so important to a sustainability plan. Just by being in AWS you will already have reduced your carbon footprint; by modernising your apps, you can ensure you’re not using resources (and therefore energy) that you don’t need to, making your IT even more sustainable. And, of course, if you’re lowering energy consumption by creating modern apps that are faster and use less resources, you’re likely to see a reduction in your costs, too.

So, what options do you have when modernising applications – and what do they do for you?

Autoscaling

This is possibly the most fundamental cloud technology, and, depending on how your apps were migrated, it may well already be activated. It’s the technology that allows AWS to automatically increase or decrease the amount of computing resource your applications use based on what you need. Autoscaling ensures that you’re never using or paying for resources you don’t need, making it great for cost control (a major benefit of moving to the cloud) and sustainability.

Autoscaling is appropriate for any application you run on the cloud, whether it’s running on EC2, Fargate, Aurora, or EKS, as long as that application has been built to allow autoscaling (so if you are migrating a very old or custom application, make sure you check this beforehand). It’s also free to implement, so there’s no reason not to build this into as many of your applications as possible – though, of course, you will pay for the additional computing resource you use when you scale up.
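As a rough sketch of what this looks like in practice, here is how a target-tracking scaling policy might be attached to an existing EC2 Auto Scaling group using boto3. The group name, policy name, and the 50% CPU target are placeholder assumptions, not values from this article:

```python
def target_tracking_policy(cpu_target: float) -> dict:
    """Build a target-tracking configuration that keeps the group's
    average CPU utilisation around cpu_target percent."""
    return {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": cpu_target,
    }

def attach_policy(group_name: str, cpu_target: float = 50.0):
    # Requires AWS credentials to run. AWS then adds or removes
    # instances automatically so average CPU stays near the target,
    # meaning you only run (and pay for) capacity you actually need.
    import boto3
    client = boto3.client("autoscaling")
    return client.put_scaling_policy(
        AutoScalingGroupName=group_name,
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration=target_tracking_policy(cpu_target),
    )
```

With a policy like this in place, scale-up and scale-down both happen without any manual intervention, which is exactly the behaviour that keeps energy use proportional to demand.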

SNS and SQS

Simple Notification Service (SNS) and Simple Queue Service (SQS) are another pair of fundamental technologies that drive a lot of the benefits that cloud applications can bring. Simple Notification Service allows applications to message each other, or to message people such as users or customers via email, SMS, or push notifications. Simple Queue Service is a fully managed message queuing service for those messages.

SNS and SQS work together to allow applications to talk to each other – be those your own applications, your applications and the AWS platform, or even different microservices that make up an application (see monoliths to microservices, below). Without them, the ability to easily and loosely couple applications – which drives much of the resilience, automation, and intelligence in the cloud – is lost.

Refactoring an application to use SNS and SQS makes many of these innovations – and the sustainability benefits they bring – possible. Like most AWS managed services, both scale automatically, meaning you never use resources or energy that you don’t need, while making sure your apps can do their jobs 24/7, and talk to each other securely.
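A minimal sketch of this publish/consume pattern with boto3 might look like the following. The topic ARN, queue URL, and event names are illustrative assumptions:

```python
import json

def build_event(event_type: str, payload: dict) -> str:
    """Serialise an application event as a JSON message body."""
    return json.dumps({"type": event_type, "payload": payload})

def publish_event(topic_arn: str, event_type: str, payload: dict):
    # Requires AWS credentials. SNS fans the message out to every
    # subscriber, including any SQS queues other services poll.
    import boto3
    sns = boto3.client("sns")
    return sns.publish(TopicArn=topic_arn, Message=build_event(event_type, payload))

def drain_queue(queue_url: str, handle):
    # A consumer reads from its own SQS queue at its own pace, so a
    # slow consumer never blocks the publisher (loose coupling).
    import boto3
    sqs = boto3.client("sqs")
    resp = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        handle(json.loads(msg["Body"]))
        # Delete only after successful handling, so failures are retried.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```

Because the publisher never calls the consumer directly, either side can be scaled, updated, or briefly offline without breaking the other.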

Spot Instances

Spot Instances are essentially spare computing capacity that AWS sells at a steep discount. They’re specific to Amazon EC2, so they’re ideal for any applications or services that run on EC2, and using Spot Instances can save you up to 90% on the cost of your instances. And thinking about our goal of sustainability, Spot Instances decrease your carbon footprint by using capacity that has already been provisioned – a bit like making your dishwasher run more efficiently by making sure it’s completely full before you start a cycle.

You do need to be careful with Spot Instances, though. Because you are using spare capacity, there is a risk that it gets reclaimed if AWS needs it elsewhere. So Spot Instances are not suitable for workloads that need resource to be available 24/7, such as storing data. Batch processing data that is held on another instance is a good example of a workload that suits Spot Instances, though, as the process can be paused without losing the data and resumed when another Spot Instance becomes available.
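To make a batch workload interruption-tolerant in the way described above, you can ask EC2 to stop (rather than terminate) a Spot instance when capacity is reclaimed, so it resumes later. A hedged boto3 sketch, with a placeholder AMI ID and instance type:

```python
def spot_run_params(ami_id: str, instance_type: str = "m5.large") -> dict:
    """Launch parameters for a persistent Spot request whose instance
    is stopped, not terminated, when AWS reclaims the capacity."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        "InstanceMarketOptions": {
            "MarketType": "spot",
            "SpotOptions": {
                # 'persistent' + 'stop' means the workload pauses on
                # interruption and resumes when capacity returns.
                "SpotInstanceType": "persistent",
                "InstanceInterruptionBehavior": "stop",
            },
        },
    }

def launch_spot_batch_worker(ami_id: str):
    import boto3  # requires AWS credentials to run
    return boto3.client("ec2").run_instances(**spot_run_params(ami_id))
```

The batch job itself should checkpoint its progress to durable storage (S3, for example), so an interruption costs you only the time since the last checkpoint.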

Monoliths to microservices

In the past, applications tended to be built as monoliths – that is, everything the app needed to run existed as a single unit. By contrast, in the cloud you have the opportunity to build apps that are collections of microservices (essentially smaller apps that handle specific functions of the larger app) which work together to deliver the application’s function to the end user. In fact, SQS, SNS, and Lambda (see below) are all AWS building blocks commonly used to connect and run microservices.

Microservices apps have the potential to be much more sustainable than monolithic apps, and are much more modern:

  • If you want to scale up a monolithic application, you have to scale up all of it – even if it’s just one section of the app that’s getting busy, such as a payment gateway. That’s both costly and uses a lot more resources than you need. If your app uses microservices, each microservice can scale up or down as needed, letting you use resources much more efficiently.
  • If you want to update one part of a monolithic application, you must take the whole thing offline to update it. With microservices, you can update individual components while the rest of the app continues to function.
  • If one part of a monolithic application breaks, the whole app goes down. If a microservice fails, the rest of the app keeps going, giving users a better service and letting you make best use of your resources (as you’re not hosting an app that’s doing nothing).

Rearchitecting an app from a monolith to microservices is no small undertaking – so you should carefully evaluate your in-house skillset and look at getting a trusted partner to help you with this. In terms of which apps are best suited to microservices, look for applications that perform multiple functions. An HR application would be a good example of this, as it handles functions including holiday management, payslips, timesheets and more – all of which could be decoupled from each other and driven by microservices that come together to make the application work. Apps that are good candidates for a microservices architecture were likely simply rehosted in the initial migration, meaning that they are running in AWS but function exactly as they did before – or they weren’t migrated at all, due to their size and complexity.

Serverless computing (AWS Lambda)

Serverless computing is the next layer of abstraction above traditional cloud computing. In the cloud, you don’t have to operate the infrastructure your computing resource runs on, but you still provision that resource yourself, with a greater or lesser degree of automation. With serverless computing, you don’t even provision resource. You specify code that you want to run, and AWS automatically provisions resource to run that code as and when necessary using Lambda, its serverless computing service.

Using Lambda takes you another rung up the sustainability ladder by giving you another level of scalability. You only pay for the resource you use, billed down to the millisecond, which means the resources that contribute to your carbon footprint are used even more efficiently. And because you give AWS control over what resources run your code and how, the way your code uses resource can also become more efficient over time.

Lambda is ideal for event-driven activities, where you aren’t running code 24/7, and therefore only really need resource ready for whenever that event happens – but don’t know when it will happen. A user uploading a photo to an application you run is the example that AWS uses, where the upload triggers an event in Lambda which automatically resizes the photo for the user.
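A pared-down sketch of that photo-upload handler shows the event-driven shape. This version only extracts the uploaded object's details from the S3 event; the actual resize step is left as a comment, since the point here is that no server runs until the event arrives:

```python
def handler(event, context=None):
    """Minimal Lambda handler for S3 'object created' events.
    Invoked only when an upload happens -- no idle compute."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # ... download the image, resize it, write the thumbnail back ...
        results.append({"bucket": bucket, "key": key, "status": "processed"})
    return {"processed": len(results), "objects": results}
```

Because the handler is plain Python, it can be exercised locally with a sample S3 event before it is ever deployed, which makes this style of function easy to test as well as efficient to run.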

Well-Architected reviews

AWS is invested in making sure everyone using its cloud is getting the most out of it. To that end, it created the Well-Architected framework, which enables you to review how your cloud operations are set up and what you can do to improve them. Unsurprisingly, sustainability is one of the pillars of this framework, and so reviewing your operations using the framework is a great way to uncover more opportunities to modernise and boost sustainability.

A Well-Architected review can be performed for a single application or an entire estate, but crucially it’s best delivered by an accredited partner such as Claranet. That ensures that the framework and its associated tool are being used to their full effect, and therefore that you’re getting the most out of them.

Working through a real-world example

There are other services and products that AWS offers to help you modernise your applications, but these are the key technologies to consider that will give you the greatest benefits fastest. Now that we’ve covered them in a little detail, it’s time to see how they work in real life. To that end, it’s time to conclude the story of Charlie Loud Migrations, our incredibly lifelike example of a company that’s partnering with Claranet to migrate its IT estate to AWS. CLM is in the business of tracking animal migrations around the world using the latest technology, and we join them now as they move into the final phase of their migration.

C. Loud Migrations has completed its initial migration and is already enjoying a considerably smaller carbon footprint now that it has been able to turn off most of its on-premises infrastructure. However, Charlie Loud, CTO and founder of the business, knows that modernising their applications will not only give them the best sustainability benefits now, but also help them grow more sustainably in the future.

The first port of call is the business’ core application, which manages all the data feeds it receives about the animals it tracks, and turns those into migration insights. During the initial migration, the application was rehosted into EC2, but with no changes to how it ran. The application is a monolith, and so the Cloud Centre of Excellence that CLM created at the start of the migration recommends that the team now takes the time to refactor the application to use a microservices architecture. Claranet does the bulk of the work on this project as the ones with the most technical knowledge, but a small team of CLM’s developers are involved too, with the goal of bringing those skills in-house for the future.

Next, the team looks at the application which handles photo and NFC uploads. One of the ways C. Loud Migrations tracks the movement of animals is by providing NFC-enabled tags to conservation groups. Once the animals have been tagged, whenever another conservation group encounters the animal, they can take a picture of it and scan its NFC tag; this data is sent to CLM, along with the user’s device location, processed by the app, and then passed to the custom application for analysis. During the migration, the app was replatformed to run using Amazon DynamoDB as a data repository; the team now decides to rebuild this app to include AWS Lambda. Instead of permanently running on an EC2 instance, the application is now event-driven, only running code and using resource when a new dataset is uploaded. Lambda handles the processing of the image and NFC data, sends it to DynamoDB, and calls the central application to begin analysis of the new data.

CLM also looks at its disaster recovery (DR) environment. Pre-migration, DR consisted of magnetic tape backups, and an on-premises instance of Zerto for instant recovery. During the migration, the magnetic tapes were copied over to Amazon S3 Glacier using AWS Snowball with Tape Gateway; it was decided to switch from Zerto to AWS Elastic Disaster Recovery (EDR). CLM’s DR configuration was initially set up to mirror the old Zerto arrangement, maintaining a full multi-site environment that was costly and used a lot of energy. Now, the CCOE recommends that AWS EDR is reconfigured so that multi-site failover is only available for the company’s core application; everything else uses a pilot light system instead, meaning that the standby region is kept running at a very small scale, ready to rapidly scale up if the primary region goes down. As a result, CLM’s DR uses far less energy while still giving it the level of service it needs to keep operational.

Charlie and the team also decide to host their payroll system and similar applications on spot instances. Because these applications only run infrequently and tend to process batches of data, they are well-suited to spot instances and allow CLM to save an average of 70% on the cost of their EC2 instances.

There is, of course, a variety of other apps that CLM modernises, using the expertise of Claranet and its CCOE. At the end of the process, the team looks back at its initial objectives and compares those with the results it has achieved. The goal was to reduce power consumption in its IT estate by 85%. The initial migration reduced power consumption by 40%; with the modernisation of the apps completed, the team now calculates that the company’s power usage has in fact shrunk by a total of 87%. On top of this, CLM’s applications are now running faster than before and require less maintenance from the internal IT team, and with the new expertise they have brought in-house, Charlie and the IT team are planning a number of exciting new updates to their applications. The project – and the business – is a success.

What’s next?

This is the final article in our series. I hope you’ve enjoyed reading it, and that the exploits of C. Loud Migrations have helped you picture how an AWS Cloud migration might play out in real life.

Of course, even when they go well, migration projects like this are complex and require specialist knowledge to get the most out of them. Claranet has helped (real) organisations just like CLM to move their applications and databases into AWS, gaining both a modern set of applications and tools and progress on their sustainability agenda. In fact, earlier this year we announced a strategic collaboration with AWS that involved founding a global Cloud Centre of Excellence to make sure everyone we support gets the very best out of AWS. If you’d like to explore whether we could help you meet your sustainability targets and get the most out of your applications by migrating to AWS, I’d love to chat with you – just click here for more details or use the form below to get in touch.