Office of Technology Transfer – University of Michigan

Avoiding Latency in Public Clouds

Technology #5740

Researchers at the University of Michigan have developed an efficient resource analysis and allocation tool that reduces public cloud latency, yielding faster response times for latency-sensitive tasks. The ability of businesses, academic institutions, and individuals to access computational resources without owning in-house infrastructure has led to the proliferation of cloud-based computing. Public clouds, however, can incur significant latency penalties compared to dedicated data centers. The response time of a public cloud at the 99.9th percentile, for instance, can often be 200-400% longer than that of a dedicated data center. This means 1 in 1,000 customers experiences significant lag, which degrades customer experience and is therefore detrimental to businesses in a competitive landscape. The research team's analysis and allocation method reduces public cloud response time, increasing computational efficiency and enhancing user experience.
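To make the 99.9th-percentile claim concrete, the short sketch below computes a tail latency from a set of request timings. The latency values and the nearest-rank percentile helper are illustrative assumptions, not measurements or code from the tool itself.

```python
# Illustrative only: computing a 99.9th-percentile ("tail") response time.
# The sample values are made up for the example, not taken from the brief.
def percentile(samples, p):
    """Return the value at the p-th percentile (nearest-rank method)."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))  # 1-based nearest-rank index
    return ordered[rank - 1]

# 1,000 hypothetical request latencies in ms: most fast, a few slow stragglers
latencies = [10] * 997 + [50, 80, 120]
p50 = percentile(latencies, 50)
p999 = percentile(latencies, 99.9)
print(p50, p999)  # → 10 80
```

The median looks healthy, yet 1 request in 1,000 lands in the slow tail, which is exactly the behavior the 99.9th-percentile metric exposes.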

Resource allocation for latency-sensitive computing in public clouds

The research team's technology, named Bobtail, has demonstrated significant response-time reductions for micro-benchmarks as well as for sequential and partition/aggregate workloads. In evaluations on Amazon's EC2 platform, response-time reductions for deployments requiring 10, 20, and 40 servers have demonstrated the tool's robust ability to manage varied workload requirements, with consistent reductions of 40% for sequential workloads and 20% for partition/aggregate workloads. These results have significant implications for more efficient parallel computing in public clouds and for latency-sensitive tasks, including internet and mobile applications.
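The brief does not describe Bobtail's internal procedure, but the general idea of tail-latency-aware node allocation can be sketched as follows. Everything here is a hedged illustration: the probe function, the latency distribution, and the `TAIL_BUDGET_MS` threshold are assumptions made for the example, not details of the actual tool.

```python
import random

# Hedged sketch of tail-latency-aware node selection (NOT Bobtail's actual
# algorithm, which the brief does not describe). The idea: probe candidate
# cloud nodes, estimate each one's tail latency, and allocate only nodes
# whose tail stays under a target budget.

TAIL_BUDGET_MS = 20.0  # assumed service-level target; illustrative value

def measure_tail_latency_ms(node_id, probes=1000):
    """Stand-in for probing a node: returns a simulated 99.9th-percentile
    latency. A real tool would time actual requests against the node."""
    random.seed(node_id)  # deterministic per node, so the example is repeatable
    samples = sorted(random.gauss(10, 3) for _ in range(probes))
    return samples[int(0.999 * probes) - 1]

def allocate(candidates, needed):
    """Probe candidates in order; keep those meeting the tail budget."""
    chosen = []
    for node in candidates:
        if measure_tail_latency_ms(node) <= TAIL_BUDGET_MS:
            chosen.append(node)
        if len(chosen) == needed:
            break
    return chosen

nodes = allocate(range(100), needed=10)
print(len(nodes), "nodes selected under the tail budget")
```

The design point this illustrates is that tail latency is a property of individual allocated nodes, so measuring candidates before committing workloads to them can cut the cluster-wide tail without changing the workload itself.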

Applications
• Latency-sensitive tasks
• Internet applications

Advantages
• Reduced response time
• Enhanced sequential and partition/aggregate workload performance
• Reductions across workload requirements from 10 to 40 servers