
Since 2016, M2A Media has been in the business of delivering video workflows predominantly in the public cloud. In that time, our work has been deployed almost exclusively on Amazon Web Services (AWS), and yet our strategy is to be cloud agnostic.

In practice, our day-to-day tactics for our products are dominated by the balance between what our customers need and the existing solutions we can adapt to meet those needs. For example:

  • Customers frequently already use AWS
  • Extending our existing AWS-based services is the path of least resistance

The major motivation for us to walk the ‘cloud agnostic walk’ is that we wouldn’t want to cut ourselves off from new business opportunities where Google Cloud Platform or Azure Cloud are a prerequisite (we aren’t particularly worried about AWS going away any time soon, and we’re unlikely to remove services from AWS).

The current state of play

The reality for us right now is that the opportunity cost of delivering every product feature across multiple cloud platforms would be too great – we need to spend our engineering resources on new product features, rather than on migrating existing features to a cloud platform where we don’t currently have customers.

But when we get a credible request from a prospective customer, what will our engineering team actually do?

First, it’s worth taking a step back and considering what might motivate a customer to request deployment to a non-AWS cloud provider. For example:

  • That they are simply unable to work with AWS. Maybe Amazon is a competitor, or they are otherwise averse to using AWS.
  • They already have existing usage of another cloud platform, and want to bundle the majority of the cloud costs that M2A incur into their existing bill.
  • They want to integrate into systems already deployed onto another cloud platform, and want to optimise data transfer by putting ‘compute’ and ‘data’ relatively close to each other.

The first of those three would be challenging to deal with; we would need to transpose every aspect of the product in question to another platform, so that the customer’s data and metadata never touch AWS systems. That would be a lot of work, and is probably unrealistic for our first major use of a particular alternate cloud platform.

On the other hand, in light of the second and third cases, it’s worth categorising the kinds of implementation component we generally deploy:

  • Workflow orchestration
    • Deals mostly in ‘metadata’
    • Lightweight, for compute / network / storage
    • Typically stateless
    • Frequently coupled closely to AWS value-added services like SNS/SQS/DynamoDB
    • Often implemented using AWS Lambda functions
  • Media manipulation
    • Mostly deals with processing of the media
    • Heavyweight use of compute / network / storage
    • Often stateful
    • Fewer fundamental reliances on AWS services (*)
    • Implemented using EC2 / containers etc.
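To make the distinction concrete, here is a minimal Python sketch of how this categorisation might drive a migration decision. The component names and catalogue are invented for illustration; the rule it encodes is the one above – media components with no hard dependency on AWS value-added services are the cheapest to bring up on an alternate platform.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    category: str                       # "orchestration" or "media"
    aws_services: list = field(default_factory=list)  # value-added AWS dependencies

def migrate_first(components):
    """Media components with no hard AWS service dependency are the
    cheapest candidates to bring online on a new cloud platform."""
    return [c.name for c in components
            if c.category == "media" and not c.aws_services]

# Hypothetical catalogue, purely for illustration:
catalogue = [
    Component("job-state-machine", "orchestration", ["SQS", "DynamoDB"]),
    Component("notifier", "orchestration", ["SNS", "Lambda"]),
    Component("ffmpeg-transcoder", "media"),
    Component("packager", "media"),
]

print(migrate_first(catalogue))  # ['ffmpeg-transcoder', 'packager']
```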

With these attributes in mind, our plan for moving one of our services quickly to a new cloud platform would be to bring the service online with the media manipulation components migrated across, while much of the workflow orchestration remains on AWS. This would give the customer the opportunity to use our services in their cloud of choice, while allowing us to consider if and how to move the remaining pieces.

We provide integration between our different products, and our approach is to implement that integration through the same externally-accessible APIs that our customers might use – we hope never to be in a position where we must migrate all our products to a new platform just so a customer can use one of them!
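A sketch of what that looks like in practice, with an entirely hypothetical endpoint, payload, and host: product-to-product calls go over plain HTTPS to the public API, so the caller never needs to know which cloud platform hosts the other product.

```python
import json
from urllib.request import Request

def build_capture_request(api_base, stream_id, token):
    """Build the HTTP request one product would send to another.
    Nothing here is AWS-specific: any platform that can serve HTTPS
    can sit behind api_base. Path and fields are illustrative only."""
    body = json.dumps({"stream_id": stream_id}).encode("utf-8")
    return Request(
        f"{api_base}/v1/captures",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_capture_request("https://api.example.com", "match-123", "TOKEN")
print(req.full_url, req.get_method())  # https://api.example.com/v1/captures POST
```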

In a multi-cloud world

Imagining a world where one of our products was already deployed to more than one cloud platform, the next obvious question is: how do we avoid maintaining two totally separate codebases for the same high-level product feature set?

My sense today is that we would strongly avoid any attempt to ‘abstract away’ the differences between platforms, as this would leave us with lowest-common-denominator options – a treatment likely worse than the disease.

I predict that the orchestration ‘glue’ holding the systems together will also, by necessity, be the area that works differently on different platforms.  Our opportunities to avoid duplicated effort will be:

  • Externally-accessible product APIs, which hide many specific implementation details, like the specific cloud platform
  • Management user interfaces / consoles, which are already built on top of those same APIs
  • Those systems whose main dependency is already satisfied by ‘being able to run on a Linux system’
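One way to picture the shape this might take, with all names hypothetical: the externally-accessible API behaviour is the shared surface, while the platform-specific glue behind it is written separately for each cloud rather than hidden behind a lowest-common-denominator abstraction.

```python
class AwsGlue:
    """AWS-specific orchestration glue (would call boto3 SQS in reality)."""
    def enqueue(self, job):
        return {"platform": "aws", "queue": "jobs", "job": job}

class GcpGlue:
    """GCP-specific orchestration glue (would call Pub/Sub in reality)."""
    def enqueue(self, job):
        return {"platform": "gcp", "topic": "jobs", "job": job}

def handle_create_job(glue, job):
    """The externally-visible API behaviour is identical on every
    platform; only the glue object differs per deployment, and each
    glue implementation is free to use its platform idiomatically."""
    receipt = glue.enqueue(job)
    return {"status": "accepted", "platform": receipt["platform"]}

print(handle_create_job(AwsGlue(), {"id": 1}))
print(handle_create_job(GcpGlue(), {"id": 1}))
```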

The only way to become cloud agnostic is to start on the journey and tackle each use case as it arises. Complete parity across clouds may be difficult to achieve, but offering alternatives to our customers is certainly on our roadmap.

* It’s worth acknowledging the elephant in the room: some of the media manipulation we do is indeed done using AWS services too, e.g. AWS Elemental MediaLive and AWS Elemental Live. However, we aren’t exclusively wedded to AWS media products even in our existing AWS deployments, making use of other systems all the way from the ubiquitous ffmpeg through to the Unified Streaming Platform.

About the author

David Holroyd is Technical Architect at M2A Media.  A BBC alumnus, David shapes and delivers effective, technical solutions to meet the needs of our broadcaster clients.
