Dealing with Cloud Lock-In: 4 Strategies to Set Yourself Free


In this article:

  • What cloud vendor lock-in is, and why companies struggle with it
  • Four strategies to avoid lock-in, and the situations appropriate for each
  • Pros and cons of each strategy
  • Data considerations
  • Weighing cost and speed

Many engineering leaders are wrestling with the risks of running their applications on a single cloud provider. They worry that something could change with the provider and put their app, and their company, in jeopardy. As a result, many would prefer the insurance policy of being able to run anywhere. At least, that is, until they see the premium for that plan. Then they ask: is there a middle ground here? One that, while it may cost a little more, will provide the freedom and flexibility to move my workloads and apps anywhere?

I see mid-size to larger companies preferring a multi-cloud solution. And while this involves using multiple public clouds across the organization, it is typically not for the purpose of running a single application across different providers.

This article focuses on strategies for running an application on different providers, thereby avoiding lock-in to any single one. Different situations call for different strategies. I’ve outlined four possible strategies below.

Strategy #1 – All In with Cloud Native

Although the term can mean different things to different people, when I use the term “Cloud Native” I’m referring to the products and services native to a cloud provider – the full ecosystem of components that we as software developers stitch together to create applications in record time. Public cloud providers understand this and try to create “stickiness” by providing more and more features in the form of services designed to make you more likely to commit to that environment and less likely to depart. And some of them are doing a great job at just that.

Strategy #1 consists of embracing the Cloud Native approach, risks and all. It is a viable strategy to commit to a single cloud provider, using the components in its ecosystem to build new applications in record time. After all, time to market is a key success factor for any new product or service, so erring on the side of speed is rarely a bad choice.

Pros: 

  • Speed – teams can develop apps super fast 
  • Single platform to know – having team members specializing in a single cloud platform is always going to be easier than having to staff for multiple 

Cons: 

  • Greater risk – Not able to easily move to another platform if needed 

Strategy #2 – Write Once Run Anywhere (WORA) 

For companies that want the insurance policy of being able to easily move workloads from one vendor to another, there are a few different options. But before we explore these, is Write Once Run Anywhere even possible?

Writing code in a way that is cloud-portable became a lot easier with containers. If you are generating container images from your builds, you can decide which container orchestration solution makes the most sense; there’s no shortage of them in the market today. But what if your app needs a database, as most do? A message queue? A distributed cache? While your code is portable by running in containers, it’s still not completely portable if you rely on services from a particular cloud provider to fill these gaps.

Instead of relying on cloud native services, open source components give you the ability to be more portable across clouds. However, this comes at the expense of complexity in development, administration, and observability. For instance, to avoid a dependency on AWS Simple Queue Service, you might decide to deploy and manage a RabbitMQ cluster instead. While this is certainly an option, it introduces other challenges: installing, managing, upgrading, and securing the cluster, an effort that should not be underestimated. In most cases, you would need to hire additional staff with skills in the particular open source technologies you plan to use.

In my experience, serverless solutions are quite portable as well. Even though cloud providers trigger functions with different events, the actual code inside the function can usually be moved to another cloud provider fairly trivially. (Shortly we will publish another article demonstrating a serverless function working in four different cloud providers, unchanged.)
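The portability comes from keeping the business logic free of cloud SDK imports and wrapping it in thin, provider-specific adapters. Here is a minimal sketch of that idea (the function names and the 7% fee are hypothetical, invented for illustration):

```python
def process_order(order_id, amount):
    """Pure business logic -- no cloud imports, portable to any provider."""
    return {"order_id": order_id, "status": "processed",
            "total": round(amount * 1.07, 2)}  # hypothetical 7% fee

def aws_lambda_handler(event, context):
    # AWS Lambda delivers the payload as the `event` argument
    return process_order(event["order_id"], event["amount"])

def gcp_http_handler(request):
    # Google Cloud Functions (HTTP) delivers a Flask-style request object
    body = request.get_json()
    return process_order(body["order_id"], body["amount"])
```

Moving to another provider means writing one more small adapter; `process_order` itself never changes.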

Pros: 

  • Containerized and serverless workloads are more portable across clouds

Cons:

  • Extra time to develop, test, and maintain  
  • Team skill set crosses many areas 
  • Extra complexity required to wire many different open source products together 

Strategy #3 – Able to Move Between Providers, But Not That Quickly

There’s a difference between running anywhere and being able to run anywhere with some effort. One approach that helps here is the Facade pattern from software design. In this method, you create interfaces that front the implemented technology. In the case of a message queue, for example, you may have interfaces for put_item and get_item, and the implementations of those are technology-specific, with calls translated for that provider. When you need to change providers, you create a new implementation for that service.
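A minimal sketch of that facade for the message-queue case (the class names are hypothetical; an in-memory implementation stands in for a real provider-backed one):

```python
from abc import ABC, abstractmethod
from collections import deque

class MessageQueue(ABC):
    """The facade: application code depends only on these two calls."""

    @abstractmethod
    def put_item(self, item): ...

    @abstractmethod
    def get_item(self): ...

class InMemoryQueue(MessageQueue):
    """Local/test implementation. A hypothetical SqsQueue or RabbitMqQueue
    class would implement the same two methods using boto3 or pika, so
    switching providers means swapping which class you instantiate."""

    def __init__(self):
        self._items = deque()

    def put_item(self, item):
        self._items.append(item)

    def get_item(self):
        # FIFO semantics; returns None when the queue is empty
        return self._items.popleft() if self._items else None
```

The application only ever holds a `MessageQueue` reference, which is what keeps the eventual migration localized to one new class.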

Pros:

  • Can achieve some portability with not too much extra effort 

Cons:

  • Extra development and testing time  
  • Can get complicated the more extensive a service is (dozens or more interfaces required) 

Strategy #4 – Third-Party Products/Solutions

There are vendors with products on the market that run containers and serverless functions on Kubernetes clusters, and that support multiple clouds, including on-prem environments. They claim that if you use their product, you will have the WORA you wanted and can avoid cloud vendor lock-in. But wouldn’t you then be locked into that solution? You were trying to avoid lock-in altogether, and now you are locked into a third-party solution. Is that an improvement over being locked into a cloud provider?

These vendors will emphasize avoiding lock-in as the main driver for using their product. It reminds me of the Kenny Chesney song: “Everybody wants to go to heaven, but nobody wants to go now.” In this case, everybody wants to talk about lock-in, but nobody wants to talk about theirs.

At some point, you just need to choose where you are willing to be locked in.  

Pros: 

  • Can run containerized workloads in any cloud supported by the third-party solution

Cons:

  • Cost – can be very expensive 
  • Can be hard to maintain (some vendors will offer to maintain it for you, adding to the expense)
  • Tied to the third-party solution provider instead

But wait… There’s an elephant in the room, and it’s the data.

Here’s the big question: where is your database? You are probably using one of the SQL or NoSQL DBaaS offerings from your cloud provider, and there’s nothing wrong with that until you decide it’s time to move. While you can move applications around fairly easily (unless they leverage the cloud native ecosystem), moving the data brings new challenges. Of course, if you are not using a Database as a Service (DBaaS) and your database software involves contracts and licensing fees, then moving costs even more unless you are at the end of the contract period.

The data is certainly movable, but the cloud provider’s egress charges can make it costly to do so, depending on how much data you have. Cloud providers bill ingress (incoming) and egress (outgoing) traffic separately. There’s no charge for inbound data, and why do you think that is? That’s the lure to get you to load all of your data into their platform, with the hope that it will make you stay awhile.

Here’s a recent example of a data egress charge breakdown.

Example: The cost of transferring 25 TB, aggregated across all AWS services, to the Internet in a month in the Asia Pacific (Tokyo) AWS region is 9.999 TB at $0.114 per GB + (25 TB − 9.999 TB − 1 GB) at $0.089 per GB = 10,238.98 GB at $0.114 per GB + 15,360.02 GB at $0.089 per GB = $2,534.29.
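To make that arithmetic concrete, here is a small sketch that reproduces the example above (tier boundaries and per-GB prices are taken from the quoted Tokyo-region figures, using 1 TB = 1024 GB; it only applies to transfers larger than 10 TB):

```python
def egress_cost_usd(total_tb):
    """Monthly internet egress cost per the quoted example:
    first 1 GB free, next 9.999 TB at $0.114/GB, remainder at $0.089/GB.
    Only valid for totals above 10 TB."""
    total_gb = total_tb * 1024
    tier1_gb = 9.999 * 1024                 # 10,238.976 GB at $0.114
    remainder_gb = total_gb - tier1_gb - 1  # minus the free first GB
    return tier1_gb * 0.114 + remainder_gb * 0.089
```

For 25 TB this works out to roughly $2,534.29, matching the quoted breakdown.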

Given improvements in network speeds and reduced latency, it’s natural to wonder whether hosting your database on-prem would avoid egress charges. But you really don’t avoid them unless the database is read-only, since any data the app inserts into the DB still counts as data leaving the network of your cloud environment. That leads to other considerations around breaking the application into multiple pieces: part stays on-prem and part runs in the cloud, each with a different purpose. Before you take something like that on, however, realize it’s going to significantly raise the complexity you will be dealing with for some time.

An alternative approach is to be very selective in your database decision and commit to it. Let the cloud provider manage the hard parts for you (backup, recovery, resiliency) while you and your team continue to focus on the application.   

Conclusion 

I’ve outlined four different strategies for dealing with vendor lock-in that cover the spectrum of speed vs. risk. Any option could work to meet your requirements, as long as you understand the pros and cons of each.

At the end of the day, you don’t really get extra points for making the same thing work in five different places. You DO get extra points for delivering a game-changing application or product in record time. If speed is the critical factor in competing in the marketplace, then you might have to take on a little more risk, because I don’t see most apps being able to “run anywhere” without a serious amount of engineering effort.

A common theme here is to choose your cloud provider wisely. Which cloud is best for your workloads? Which cloud is best given your team’s skill sets? What features in one cloud could provide a strategic advantage against your competition? Put a large emphasis on selecting your cloud provider, and then commit to it. Make it work the way you need it to work. If you choose the right one, the need to move is greatly diminished, and you can sleep peacefully at night.

If you have questions or comments about this cloud lock-in article, or would like to speak with one of our cloud experts directly about cloud solutions, please fill out the form and we will respond.

About Dave Moore

Dave Moore, CIO

Dave Moore is GAP’s Chief Innovation Officer. He is a seasoned technology executive with more than 25 years of experience in conceptualization and crafting innovative solutions that provide scalability, widespread end-user adoption, and substantially increased revenue. Dave’s experience has given him unique insight into building diverse teams, and expert knowledge of microservices, Serverless, cloud optimization, CI/CD, security, big data and open-source technologies. You can connect with Dave on LinkedIn, or send him an email.