This technology is an improvement on caching algorithms that helps ensure fast content delivery over networks. Content providers such as YouTube, Netflix, and iTunes need to keep popular content readily available to users. One way to do this is to cache popular content (or content that is expected to become popular). However, the volume of content is enormous, demand is constantly growing, and popularity can shift suddenly, so deciding which content to cache must be at least partially automated. Algorithms for allocating content to a cache therefore need continual improvement, and this technology is a step in that direction.
Combining the best methods
This algorithm builds on existing algorithms, adaptively selecting the best caching methodology for the current request pattern. While theory supports the algorithm's improved performance, it was also tested on large traces of YouTube request data and was found to outperform existing policies such as the Least-Frequently-Used (LFU) and K-Least-Frequently-Used algorithms. This is relevant to any content provider, including providers of video content, software updates, text content, and many other kinds of data.
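To make the comparison concrete, here is a minimal sketch of the Least-Frequently-Used baseline mentioned above: on overflow, the item with the fewest recorded requests is evicted. This illustrates the baseline policy only, not the technology described here; the class and method names are our own.

```python
from collections import Counter

class LFUCache:
    """Minimal Least-Frequently-Used cache sketch: when full, evict
    the item with the fewest recorded requests (ties broken arbitrarily)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}        # key -> cached content
        self.freq = Counter()  # key -> request count

    def get(self, key):
        if key in self.store:
            self.freq[key] += 1
            return self.store[key]
        return None  # cache miss

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            # Evict the least-frequently-requested item.
            victim = min(self.store, key=lambda k: self.freq[k])
            del self.store[victim]
            del self.freq[victim]
        self.store[key] = value
        self.freq[key] += 1
```

The weakness the adaptive approach addresses is visible even here: frequency counts accumulate forever, so LFU reacts slowly when popularity shifts suddenly.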
- Caching in content distribution networks, such as news media sites and software providers (e.g. Steam)
- Caching for video content, such as YouTube and Netflix videos
- Caching on computer systems, including cloud systems, such as Google Cloud
- Flexible layered architecture to partition memory into caches and meta-caches
- Simple parameter setting to help with the trade-off between accuracy and speed of caching popular content
- Memory partitioning and layering can be adapted in real-time
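One plausible reading of the cache/meta-cache layering and the tuning parameter above can be sketched as follows. This is an illustrative interpretation, not the patented algorithm: the `promote_after` knob, the class, and the eviction choice are all our own assumptions, used only to show how a key-only meta layer can gate admission into the content layer.

```python
from collections import Counter, OrderedDict

class MetaCache:
    """Illustrative two-layer design: a content cache plus a 'meta-cache'
    holding only keys and request counts.  A key is admitted to the content
    layer once its meta-count reaches `promote_after` (a hypothetical
    parameter): raising it makes admission more selective (accuracy),
    lowering it caches new popular items sooner (speed)."""

    def __init__(self, capacity, promote_after=2):
        self.capacity = capacity
        self.promote_after = promote_after  # assumed accuracy/speed knob
        self.cache = OrderedDict()  # key -> content, in LRU order
        self.meta = Counter()       # key -> misses seen so far

    def request(self, key, fetch):
        if key in self.cache:
            self.cache.move_to_end(key)  # refresh recency on a hit
            return self.cache[key]
        self.meta[key] += 1
        value = fetch(key)  # miss: retrieve from the origin server
        if self.meta[key] >= self.promote_after:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict least-recent item
            self.cache[key] = value
            del self.meta[key]
        return value
```

Because the meta layer stores only keys and counters, it is far cheaper than caching the content itself, which is what makes real-time repartitioning of memory between the layers practical.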
You can read more about this exciting technology here: https://arxiv.org/pdf/1701.02214v1.pdf