UNDERSTANDING CACHING FOR LIVE STREAMING Posted on February 19th, 2019 | Posted by Sean Greep

CACHING YOUR CAT PICTURES…

Caching is one of the most important components of any large-scale web service; it is the only way to make many systems scalable yet still affordable. In the world of live video streaming, that is even more true.

I’m sure most of you reading this can probably skip over this part, but for those to whom caching is relatively new, it will give a basic idea of why we’re caching at all.

Let’s take a relatively simple example: you have a picture of your cat stored on a server, and to keep the math simple the file is 10MB in size. We can then figure out how fast the file can be read from disk.

About 0.066 seconds from a hard disk. Doesn’t sound very long, does it?

Now if we upgrade to an SSD, that can be even less.

About 0.02 seconds!
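
As a sanity check, here is the arithmetic behind those two numbers. The throughput figures are assumptions on my part (roughly a spinning disk and a SATA SSD reading sequentially), chosen because they reproduce the times above:

```python
# Back-of-the-envelope read time for a single 10MB file.
# The throughput figures below are assumed values, not measurements.
FILE_SIZE_MB = 10

HDD_SEQ_MB_S = 150   # assumed HDD sequential read speed
SSD_SEQ_MB_S = 500   # assumed SATA SSD sequential read speed

print(f"HDD: {FILE_SIZE_MB / HDD_SEQ_MB_S:.3f} s")  # roughly the 0.066 s above
print(f"SSD: {FILE_SIZE_MB / SSD_SEQ_MB_S:.3f} s")  # 0.020 s
```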

So these read times are really very low; why bother with caching at all? Well, you probably don’t have just one picture of your cat. Say you had 10,000 of them: how long would it take to read all of them from disk?

  • HDD: about 11 minutes
  • SSD: about 3 minutes

OK, so this is where the numbers start to make a difference. Storage (HDD or SSD) is almost always going to be the slowest component of any system; you will always be waiting for storage. So we want to offload it as much as possible, and this is where caching comes into play.

Caching is all done in memory (RAM), which is significantly faster than any storage. Taking our example above, how long would it take to load all 10,000 of our lovely cat pictures from memory?

6.25 seconds
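
The same back-of-the-envelope arithmetic, scaled up to 10,000 pictures. Again the throughput numbers are assumptions; the RAM figure (around 16 GB/s) is simply one that matches the 6.25 seconds above:

```python
# Time to read 10,000 x 10MB cat pictures sequentially from each medium.
# All throughput figures are assumed values chosen to match the numbers above.
N_FILES, FILE_SIZE_MB = 10_000, 10
TOTAL_MB = N_FILES * FILE_SIZE_MB  # 100,000 MB

for medium, mb_per_s in [("HDD", 150), ("SSD", 500), ("RAM", 16_000)]:
    seconds = TOTAL_MB / mb_per_s
    print(f"{medium}: {seconds:7.2f} s (~{seconds / 60:.1f} minutes)")

# HDD:  666.67 s (~11.1 minutes)
# SSD:  200.00 s (~3.3 minutes)
# RAM:    6.25 s (~0.1 minutes)
```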

Obviously, the example above is not that realistic; I don’t care how great your cat looks, nobody wants to see 10,000 pictures of it on a single page. But in the real world, you will have all sorts of different users, all requesting different content randomly.

The key thing here is randomness; all the numbers above are calculated based on sequential reading, meaning File1 is read, then File2, and so on. In the real world that is not the case: it will be more like File9, then File8,311, then File245. This is where storage falls down. Depending on the technology, an SSD will be a third of its sequential speed, or less, for random reads. For HDDs the story is even worse: some can offer as little as 1% of their sequential performance under a random load!

So I’ve managed to convince you that caching is a good idea.  But what should you be caching?  The simple answer is anything idempotent plus some other bits.

OK, so now you’re going to ask what idempotent data is. Simply put, it is anything that is not going to change, no matter how or when it is requested. So, in our previous example, each picture of your cat is idempotent: it is a fixed picture and will not change. A more widely used term would be a static asset: pictures, HTML files, PDFs, and CSS, to name a few.

The great thing about idempotent (static) assets is that they can be cached forever. Once read from a disk they are never going to change, so you could keep them in memory forever and never read them again.
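
In HTTP terms, “cache forever” usually translates into a very long max-age. As a minimal sketch (the one-year lifetime and the immutable directive are my own illustrative choices, not something prescribed here), a static file server might add the header like this:

```python
# Minimal static file server that marks every response as cacheable
# "forever". The one-year max-age and `immutable` are illustrative choices.
from http.server import HTTPServer, SimpleHTTPRequestHandler

class StaticAssetHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Idempotent (static) assets are safe to cache for a very long time.
        self.send_header("Cache-Control", "public, max-age=31536000, immutable")
        super().end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), StaticAssetHandler).serve_forever()
```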

This is where it gets more complicated. Let’s take a simple example: you have a webpage that counts down the number of days to New Year. Say it’s 122 days to New Year; for the whole of today it will be 122 days, no more, no less, but tomorrow it will be 121. You can obviously cache this too, but only for up to 24 hours. Caching can also be used on much smaller time frames.
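
A small sketch of that idea: since the answer only changes at midnight, one way to pick the cache lifetime is “the number of seconds left in the current day” (at most 24 hours). The header printed at the end is just an illustration:

```python
# The days-to-New-Year answer only changes at midnight, so a sensible
# cache lifetime is the number of seconds left in the current day.
from datetime import datetime, timedelta

def seconds_until_midnight(now=None):
    now = now or datetime.now()
    midnight = (now + timedelta(days=1)).replace(hour=0, minute=0,
                                                 second=0, microsecond=0)
    return int((midnight - now).total_seconds())

print(f"Cache-Control: public, max-age={seconds_until_midnight()}")
```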

Any request that will result in the same response for a period of time can be cached. So even if you change a picture on a site every 10 seconds, with enough users you will want to cache that picture for those 10 seconds to ease the burden on the storage.

So far we’ve only talked about static files, but we can cache dynamic content just as easily, be it an API, a web page, or anything else. To give you an example, say you run a web shop and your products change frequently to keep up with the latest items. Your shop probably has a “latest items” section or something similar. These items will change over time but will be fixed for a length of time too, meaning that for that length of time the content can be cached as well.
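
To make that concrete, here is a minimal in-memory TTL cache for something like a “latest items” query. The fetch_latest_items function and the 10-second lifetime are hypothetical; a real shop would be reading from a database here:

```python
# A tiny in-memory TTL cache for dynamic content such as a "latest items"
# query. fetch_latest_items() and the 10-second TTL are illustrative only.
import time

_cache = {}  # key -> (stored_at, value)

def cached(key, ttl_seconds, compute):
    """Return the cached value for `key`, recomputing at most once per TTL."""
    now = time.monotonic()
    entry = _cache.get(key)
    if entry and now - entry[0] < ttl_seconds:
        return entry[1]                  # cache hit: skip the expensive work
    value = compute()                    # cache miss: do the expensive work once
    _cache[key] = (now, value)
    return value

def fetch_latest_items():
    # Stand-in for the slow database or storage read.
    return ["item-1", "item-2", "item-3"]

print(cached("latest-items", ttl_seconds=10, compute=fetch_latest_items))
```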

We have used our knowledge of caching to deliver live and catch-up services for many clients. Consider a live stream coming from an origin such as the Unified Streaming Platform. Every request for content puts a large demand on storage, which we know is our slowest component.

A live stream is made up of two components: the manifest and the fragments. The manifest is constantly being updated with new content, meaning it cannot be cached for very long. But it *can* be cached.

The fragments are unique and idempotent: once a fragment is requested it will never change, so you can happily cache it forever. The same is also true for subtitles; thanks to their timestamps they will also be static.

The manifest is where it gets much more complicated. In the world of live streaming with the Unified Streaming Platform, you have several types of manifest, each of which is different:

  • Live manifest
  • Watch from Start manifest
  • Instant VOD manifest

Thankfully, we can treat the live manifest the same as a watch from start manifest, the latter being one that has a single t= value or just a vbegin query.

These manifests will continually update with new content; however, there is a safe amount of time you can cache them for, despite the content being live. Because the content is fragmented, the manifest is only updated each time a complete fragment is submitted to the Unified Streaming Platform server, so you can safely cache it for the duration of a single fragment. While this may only be one second in some cases, that single second of offload can be many thousands of requests that the Unified Streaming Platform server doesn’t have to process.
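
Putting those rules into a sketch: fragments and subtitles get an effectively unlimited lifetime, while a live or watch from start manifest is only held for one fragment duration. The request_type labels and the two-second fragment duration are assumptions for illustration, not part of any real API:

```python
# Sketch of the caching rules described above. The request_type labels and
# the 2-second fragment duration are assumptions made for illustration.
FRAGMENT_DURATION_S = 2                  # assumed duration of one fragment
EFFECTIVELY_FOREVER_S = 365 * 24 * 3600  # "cache forever" in practice

def cache_ttl(request_type):
    if request_type in ("fragment", "subtitle"):
        return EFFECTIVELY_FOREVER_S     # idempotent: never changes once produced
    if request_type in ("live_manifest", "watch_from_start_manifest"):
        return FRAGMENT_DURATION_S       # safe for one fragment's worth of time
    raise ValueError(f"unknown request type: {request_type}")
```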

This leads us on to the much more complicated one: Instant VOD.
This is a manifest request with a query that has both a start and an end time, either vbegin and vend or t= with two timestamps. Now, if the Instant VOD request is entirely in the past, meaning that we have already passed the end timestamp, it can be considered idempotent.

However, if the end timestamp is still in the future, the manifest will behave just like a watch from start manifest until the end timestamp passes. This makes caching Instant VOD manifests much more complicated.
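
As a sketch of that decision (treating the end time as a plain Unix timestamp purely for illustration), if the end has already passed the manifest can be cached indefinitely; if not, it is held for one fragment duration, just like a watch from start manifest:

```python
# Sketch of the Instant VOD rule above. end_timestamp is a Unix time here
# purely for illustration; the durations reuse the earlier assumptions.
import time

FRAGMENT_DURATION_S = 2
EFFECTIVELY_FOREVER_S = 365 * 24 * 3600

def instant_vod_ttl(end_timestamp, now=None):
    now = time.time() if now is None else now
    if end_timestamp <= now:
        return EFFECTIVELY_FOREVER_S  # event has finished: manifest is now static
    return FRAGMENT_DURATION_S        # still live: behaves like watch from start
```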

Caching is important, especially for systems where computing a response is demanding on some resource: CPU, hard disk, or another system which may be much slower.

If you respond to the same request identically more than a few times, caching will benefit you.