Automatic Computation Offload for Native Applications
Abstract
Although mobile applications are becoming more complex and require substantial computing power, the performance of mobile devices still lags behind that of servers. Computation offloading bridges this performance gap by allowing a mobile device to execute heavy tasks remotely on servers. However, to overcome the architectural differences between mobile devices and cloud servers, most existing computation offloading systems rely on virtual machines, so they cannot offload native applications. Some computation offloading systems can offload native mobile applications, but their applicability is still limited to simple applications due to imprecise static alias analysis. This work presents the first fully automatic cross-platform computation offloading framework for general-purpose native applications, called Native Offloader, which consists of a partitioning compiler and a seamless migration runtime. The compiler automatically finds heavy tasks without any programmer annotation and generates partitioned native binaries for the mobile device and the server. The runtime minimizes migration overhead through a unified virtual address space and communication optimization. Native Offloader efficiently offloads six native C applications, achieving a geometric-mean speedup of 3.17x and a power saving of 84%.

Short bio
Hanjun Kim is an assistant professor in the departments of Creative IT Engineering (CITE) and Computer Science and Engineering (CSE) at POSTECH. He obtained his B.S. in Electrical Engineering from Seoul National University, and his M.A. and Ph.D. in Computer Science from Princeton University. He was awarded the Intel Ph.D. Fellowship and the Siebel Scholarship in 2012. His research focuses on compiler techniques that improve the performance of programs across computing environments ranging from mobile devices to cluster servers.
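To give a flavor of the offloading idea described in the abstract, the following is a minimal conceptual sketch in C. It is not the Native Offloader implementation; the names offload_available, remote_invoke, and heavy_sum_stub are hypothetical stand-ins for the runtime hooks and the stub a partitioning compiler might insert at a hot call site, and the "remote" path is simulated locally so the sketch stays runnable.

```c
/*
 * Conceptual sketch of compiler-directed computation offloading.
 * NOT the Native Offloader implementation; offload_available(),
 * remote_invoke(), and heavy_sum_stub() are hypothetical names.
 */
#include <stdio.h>
#include <stdlib.h>

/* The "heavy task" a partitioning compiler might identify as a hot region. */
static long heavy_sum(const int *data, size_t n) {
    long acc = 0;
    for (size_t i = 0; i < n; i++)
        acc += (long)data[i] * data[i];
    return acc;
}

/* Hypothetical runtime hooks.  In a real system these would check server
 * connectivity and perform a network round trip; here the "remote" path
 * is simulated locally so the sketch remains self-contained. */
static int offload_available(void) { return 1; }

static long remote_invoke(const int *data, size_t n) {
    /* In practice: marshal arguments (or rely on a unified virtual
     * address space), ship them to the server-side binary produced by
     * the partitioning compiler, and wait for the result. */
    return heavy_sum(data, n);
}

/* Stub the compiler would substitute at the original call site. */
static long heavy_sum_stub(const int *data, size_t n) {
    if (offload_available())
        return remote_invoke(data, n);   /* offloaded execution */
    return heavy_sum(data, n);           /* local fallback */
}

int main(void) {
    enum { N = 1000 };
    int data[N];
    for (int i = 0; i < N; i++)
        data[i] = i;
    printf("result = %ld\n", heavy_sum_stub(data, N));
    return 0;
}
```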