Abstract

Asymmetric multiprocessor systems provide opportunities to exploit differences among programs to achieve power efficiency and performance. Unfortunately, heterogeneity in multicore processor systems creates significant challenges in effectively mapping programs to diverse core types. To avoid requiring the programmer to statically assign an application to a particular core type, various approaches have been proposed to dynamically schedule applications across cores with heterogeneous sets of capabilities. Typical scheduling approaches either sample permutations of thread schedules to find the optimal mapping or rely on rough heuristics to predict the performance of an application on a particular core type. We instead introduce a new, systematic approach to automating thread assignment. We construct a reinforcement-learning-based scheduler that assigns threads to the best-performing core given the state of the program and the processor cores. We use tile coding and artificial neural networks (ANNs) to represent system features as states, and we explore both linear and nonlinear relationships between states and performance estimates. We present results demonstrating the promise of this approach for single-ISA heterogeneous multicore processors using multiprogram workloads from the SPEC CPU2006 benchmarks. Our initial tile-coding-based function approximation experiments are encouraging, and our reinforcement-learning-based scheduler with an ANN function approximator delivers 1.77% (ignoring the overhead of switching) and 4.1% (considering the overhead of switching) better performance than a heuristic-sampling-based scheduler, a 41.4% (ignoring the overhead of switching) and 41.7% (considering the overhead of switching) improvement toward an ideal performance estimate over that scheduler, and 6% better performance than the average of all possible static schedules for a four-core heterogeneous system. We also show the potential of this approach with either online or offline learning. Finally, we discuss the implementation of our model and its impact on the results, and we propose future directions for improving the reinforcement-learning-based scheduling approach.
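The abstract names tile coding as one of the function approximators used to estimate per-core performance from system-feature states. The following is a minimal sketch of that idea, not the thesis implementation: the class name TileCoder, the choice of three normalized features, the four core types, and the step size are all assumptions made for illustration.

    # Minimal sketch: a tile-coding value estimator for thread-to-core
    # scheduling. All names and parameters here are illustrative
    # assumptions, not the implementation described in the thesis.
    import numpy as np

    class TileCoder:
        """Linear function approximator over several offset tilings."""

        def __init__(self, n_tilings=8, tiles_per_dim=10, n_dims=3,
                     n_actions=4, alpha=0.1, seed=0):
            self.n_tilings = n_tilings
            self.tiles_per_dim = tiles_per_dim
            # One weight per (tiling, tile..., core type); start at zero.
            shape = (n_tilings,) + (tiles_per_dim + 1,) * n_dims + (n_actions,)
            self.w = np.zeros(shape)
            rng = np.random.default_rng(seed)
            # Random offsets shift each tiling so the tiles overlap,
            # which gives coarse generalization across nearby states.
            self.offsets = rng.uniform(0, 1.0 / tiles_per_dim,
                                       (n_tilings, n_dims))
            self.alpha = alpha / n_tilings  # step size split across tilings

        def _active_tiles(self, state):
            """Map a normalized state in [0,1]^n to one tile per tiling."""
            for t in range(self.n_tilings):
                idx = np.floor((state + self.offsets[t]) * self.tiles_per_dim)
                yield (t,) + tuple(idx.astype(int))

        def estimate(self, state, core):
            """Predicted performance of running this thread on `core`."""
            return sum(self.w[tile + (core,)]
                       for tile in self._active_tiles(state))

        def update(self, state, core, target):
            """Move the estimate toward the observed performance."""
            error = target - self.estimate(state, core)
            for tile in self._active_tiles(state):
                self.w[tile + (core,)] += self.alpha * error

    # Hypothetical per-thread features, normalized to [0, 1]: for example,
    # cache miss rate, IPC on the current core, branch mispredict rate.
    coder = TileCoder()
    state = np.array([0.3, 0.7, 0.1])
    core = int(np.argmax([coder.estimate(state, a) for a in range(4)]))
    coder.update(state, core, target=1.25)  # learn from measured performance

Because each weight update touches only the handful of tiles active for the current state, the estimator is cheap enough to run inside a scheduler; the thesis's ANN approximator plays the same role but can capture nonlinear relationships between states and performance.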