Office of Technology Transfer – University of Michigan

Mitigating cache contention in data centers via dynamic compilation

Technology #6384

Researchers
Jason Mars
Managed By
Jessica Soulliere
Digital Technologies Licensing Specialist 734.647.9926
Patent Protection
US Patent Pending 2016-0170727

The recent boom in online and cloud services has greatly raised the requirements for efficient data center operation. Efficiency efforts focus on maximizing server utilization, which often means co-locating multiple applications on a single server. Co-location, however, can create contention for shared resources that degrades the Quality of Service (QoS) of latency-sensitive, user-facing jobs such as search or media streaming. Protean Code is a new dynamic compilation method that detects changes in server utilization and adjusts the resource demands of batch applications in order to maintain QoS for user-facing applications.

Specialized compiler and runtime allow dynamic adjustment to machine workloads

Protean Code is built on the LLVM compiler infrastructure and converts selected program branches and calls into indirect operations. Compiled programs are paired with a runtime that monitors the QoS of critical applications and dynamically recompiles alternate versions of program functions with reduced processor cache occupancy. The runtime can then divert the indirect operations to these variants as the cache requirements of the monitored applications rise or fall, and it can additionally throttle batch programs to guarantee QoS. This approach mitigates cache contention and, in benchmark simulations, has delivered utilization improvements averaging 1.5 times the previous state of the art while meeting the 98% QoS target for high-priority co-running applications.
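
To make the mechanism concrete, the sketch below is a minimal C++ illustration of the indirection idea only; it is not the actual Protean Code compiler pass or runtime. The variant functions, the fake latency signal, and names such as sum_full_stride, sum_small_blocks, and runtime_monitor are hypothetical stand-ins. The batch job calls its kernel through an atomically updated function pointer, and a monitor thread watching a QoS signal for the high-priority job redirects that pointer to a cache-lighter, throttled variant when the signal degrades.

// Minimal sketch of the call-indirection idea (hypothetical names throughout;
// the real system performs the rewriting as an LLVM pass plus a separate runtime).
#include <algorithm>
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// Two hand-written variants of the same batch kernel. In Protean Code an
// alternate variant would be produced by dynamic recompilation instead.
static void sum_full_stride(const std::vector<int>& data, long long& out) {
    long long s = 0;
    for (int v : data) s += v;          // touches the whole working set
    out = s;
}

static void sum_small_blocks(const std::vector<int>& data, long long& out) {
    long long s = 0;
    const std::size_t block = 4096;     // smaller blocks -> smaller cache footprint
    for (std::size_t i = 0; i < data.size(); i += block) {
        std::size_t end = std::min(data.size(), i + block);
        for (std::size_t j = i; j < end; ++j) s += data[j];
        std::this_thread::yield();      // crude stand-in for throttling
    }
    out = s;
}

// The "indirect call site": a direct call is replaced by a load of this
// pointer followed by an indirect call, so the target can change at run time.
using KernelFn = void (*)(const std::vector<int>&, long long&);
static std::atomic<KernelFn> active_kernel{sum_full_stride};

// Toy runtime monitor: if the measured latency of the high-priority job
// exceeds its target, divert the batch job to the cache-light variant.
static void runtime_monitor(std::atomic<bool>& stop, double latency_target_us,
                            std::atomic<double>& observed_latency_us) {
    while (!stop.load()) {
        if (observed_latency_us.load() > latency_target_us)
            active_kernel.store(sum_small_blocks);   // contention detected
        else
            active_kernel.store(sum_full_stride);    // contention subsided
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

int main() {
    std::vector<int> data(1 << 22, 1);
    std::atomic<bool> stop{false};
    std::atomic<double> observed_latency_us{0.0};

    std::thread monitor(runtime_monitor, std::ref(stop), 500.0,
                        std::ref(observed_latency_us));

    // Batch job: each iteration calls the kernel through the pointer, so the
    // monitor can swap variants between iterations without pausing the job.
    long long result = 0;
    for (int iter = 0; iter < 100; ++iter) {
        observed_latency_us.store(iter < 50 ? 100.0 : 900.0); // fake QoS signal
        active_kernel.load()(data, result);
    }

    stop.store(true);
    monitor.join();
    std::printf("result = %lld\n", result);
    return 0;
}

In the real system the alternate code versions, the embedded compiler metadata, and the QoS monitoring are generated and managed automatically; the sketch only shows why call indirection lets variants be swapped in with negligible disruption to the running batch job.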

Applications

  • Optimize user-facing and batch application efficiency in warehouse-scale data centers
  • Maintain Quality of Service requirements

Advantages

  • Easily deployed without additional hardware and with little additional programmer effort
  • Embedded metadata from the specialized compiler allows flexible runtime optimization
  • Less than 1% overhead with seamless variant introduction
  • Continuous monitoring and adjustment of programs to match server loads
  • Average batch application utilization 1.5 times that of the previous state-of-the-art while meeting the 98% QoS target for high-priority applications

For an exciting video, please visit: https://www.youtube.com/watch?v=5vaU_nkKNWs&feature=youtu.be