Farewell EC2-Classic, it’s been swell

EC2-Classic in a museum gallery

Retiring services isn’t something we do often at AWS. It’s quite rare. Companies depend on our offerings – their businesses literally live on these services – and it’s something that we take seriously. For example, SimpleDB is still around, even though DynamoDB is the “NoSQL” DB of choice for our customers.

So, two years ago, when Jeff Barr announced that we’d be shutting down EC2-Classic, I’m sure that there were at least a few of you that didn’t believe we’d actually flip the switch, that we’d let it run forever. Well, that day has come. On August 15, 2023, we shut down the last instance of Classic. And with all the history here, I think it’s worth celebrating the original version of one of the services that started what we now know as cloud computing.

EC2 has been around for quite a while, almost 17 years. Only SQS and S3 are older. So, I wouldn’t blame you if you were wondering what makes an EC2 instance “Classic”. Put simply, it’s the network architecture. When we launched EC2 in 2006, it was one giant network of 10.0.0.0/8. All instances ran on a single, flat network shared with other customers. It exposed a handful of features, like security groups and Public IP addresses that were assigned when an instance was spun up. Classic made the process of acquiring compute dead simple, even though the stack running behind the scenes was incredibly complex. “Invent and Simplify” is one of the Amazon Leadership Principles after all…
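To give a feel for how simple that acquisition model was, here is a minimal sketch using boto3 of the two primitives mentioned above: a security group and a single instance launch with a public IP assigned for you. The AMI ID and instance type are placeholders I’ve made up for illustration, and on a modern account these calls land in your default VPC rather than the old shared flat network, since Classic itself is gone.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a security group and open port 80 to the world.
# On Classic this lived directly in the flat, shared network;
# today it is created in the account's default VPC.
sg = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow inbound HTTP",
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Launch one instance; a public IP is assigned automatically,
# much like it was when an instance was spun up on Classic.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",          # placeholder instance type
    MinCount=1,
    MaxCount=1,
    SecurityGroupIds=[sg["GroupId"]],
)
```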

If you had launched an instance in 2006, an m1.small, you would have gotten a virtual CPU the equivalent of a 1.7 GHz Xeon processor with 1.75 GB of RAM, 160 GB of local disk, and 250 Mb/second of network bandwidth. And it would have cost just $0.10 per clock hour. It’s pretty incredible where cloud computing has gone since then, with a P3dn.24xlarge providing 100 Gbps of network throughput, 96 vCPUs, 8 NVIDIA V100 Tensor Core GPUs with 32 GiB of memory each, 768 GiB of total system memory, and 1.8 TB of local SSD storage, not to mention an EFA to accelerate ML workloads.

But 2006 was a different time, and that flat network and small collection of instances, like the m1.small, was “Classic”. And at the time it was truly revolutionary. Hardware had become a programmable resource that you could scale up or down at a moment’s notice. Every developer, entrepreneur, startup and enterprise now had access to as much compute as they wanted, whenever they wanted it. The complexities of managing infrastructure (buying new hardware, upgrading software, replacing failed disks) were abstracted away. And it changed the way we all designed and built applications.

Of course the first thing I did when EC2 was launched was to move this blog to an m1.small. It was running Movable Type, and this instance was good enough to run the server and the local (no RDS yet) database. Eventually I turned it into a highly available service with RDS failover, etc., and it ran there for 5+ years until the Amazon S3 Website feature was launched in 2011. The blog has now been “serverless” for the past 12 years.

Like we do with all of our services, we listened to what our customers needed next. This led us to adding features like Elastic IP addresses, Auto Scaling, Load Balancing, CloudWatch, and various new instance types that would better suit different workloads. By 2013 we had enabled VPC, which allowed each AWS customer to manage their own slice of the cloud: secure, isolated, and defined for their business. And it became the new standard, as sketched below. It simply gave customers a new level of control that enabled them to build even more comprehensive systems in the cloud.
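To make the contrast with the flat Classic network concrete, here is a hedged sketch of what that “slice of the cloud” looks like in code: a VPC with its own address range, a subnet carved out of it, and an instance launched into that subnet rather than onto a network shared with other customers. The CIDR blocks and AMI ID are illustrative assumptions, not values from the original post.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an isolated address space that belongs only to this account.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Carve out a subnet inside that VPC.
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")

# Launch into the subnet; the instance is private and isolated by default,
# with no public reachability until you attach an internet gateway and
# routing yourself.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",          # placeholder instance type
    MinCount=1,
    MaxCount=1,
    SubnetId=subnet["Subnet"]["SubnetId"],
)
```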

We continued to support Classic for the next decade, even as EC2 evolved and we implemented an entirely new virtualization platform, Nitro, because our customers were using it.

Ten years ago, during my 2013 keynote at re:Invent, I told you that we wanted to “support today’s workloads as well as tomorrow’s,” and our commitment to Classic is the best evidence of that. It’s not lost on me, the amount of work that goes into an effort like this, but it’s exactly the kind of work that builds trust, and I’m proud of the way it has been handled. To me, this embodies what it means to be customer obsessed. The EC2 team kept Classic running (and running well) until every instance was shut down or migrated, providing documentation, tools, and support from engineering and account management teams throughout the process.

It’s bittersweet to say goodbye to one of our original offerings. But we’ve come a long way since 2006 and we’re not done innovating for our customers. It’s a reminder that building evolvable systems is a strategy, and revisiting your architectures with an open mind is a must. So, farewell Classic, it’s been swell. Long live EC2.

Certificate of achievement

Now, go build!
