out-of-core programs, parallel language, HPF, compiler, code optimization
This paper describes techniques for translating out-of-core programs, written in a data parallel language such as HPF, into message passing node programs with explicit parallel I/O. We describe the basic compilation model and the various steps involved in the compilation. The compilation process is explained with the help of an out-of-core matrix multiplication program. We first discuss how an out-of-core program can be translated by extending the method used for translating in-core programs, and demonstrate that a straightforward extension of an in-core compiler does not work for out-of-core programs. We then describe how the compiler can optimize the code by (1) estimating the I/O costs associated with different array access patterns, (2) reorganizing array accesses, (3) selecting the method with the least I/O cost, and (4) allocating memory according to access cost for competing out-of-core arrays. These optimizations can reduce the amount of I/O by as much as an order of magnitude. Performance results on the Intel Touchstone Delta are presented and analyzed.
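The intuition behind cost-driven access selection (steps 1-3 above) can be illustrated with a simplified sketch. This is not the paper's actual cost model: the function names, the row-major file assumption, and the request-counting rule (a slab spanning all columns is one contiguous read; otherwise each slab row is a separate read) are all assumptions made here for illustration. Under that rule, sweeping an array in column slabs issues one request per array row per slab, while row slabs need only one request each, which is the kind of order-of-magnitude gap the compiler's reorganization exploits.

```python
import math

def slab_requests(n, slab_rows, slab_cols):
    """Contiguous disk reads needed to fetch one slab of a row-major
    n x n out-of-core array file (simplified model, not the paper's)."""
    # A slab covering all n columns is one contiguous block on disk;
    # otherwise each slab row lies at a different file offset.
    return 1 if slab_cols == n else slab_rows

def total_requests(n, mem_elems, order):
    """Total reads to sweep the array once, slab by slab, given that
    at most mem_elems array elements fit in memory at a time."""
    if order == "row":                        # slabs of full rows
        slab_rows = max(1, mem_elems // n)
        n_slabs = math.ceil(n / slab_rows)
        return n_slabs * slab_requests(n, slab_rows, n)
    else:                                     # slabs of full columns
        slab_cols = max(1, mem_elems // n)
        n_slabs = math.ceil(n / slab_cols)
        return n_slabs * slab_requests(n, n, slab_cols)

def choose_access(n, mem_elems):
    """Pick the access pattern with the smaller estimated I/O cost."""
    costs = {o: total_requests(n, mem_elems, o) for o in ("row", "col")}
    return min(costs, key=costs.get), costs
```

For a 1024 x 1024 array with memory for 64 full rows, the row-wise sweep needs 16 reads while the column-wise sweep needs 16,384, so the selector picks row-wise access, mirroring how an I/O cost estimate can steer the compiler toward the cheaper reorganization.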
Bordawekar, Rajesh; Choudhary, Alok; and Thakur, Rajeev, "Data Access Reorganizations in Compiling Out-of-core Data Parallel Programs on Distributed Memory Machines" (1994). Electrical Engineering and Computer Science. Paper 23.