Document Type

Working Paper




Keywords

out-of-core programs, parallel language, HPF, compiler, code optimization




Disciplines

Computer Sciences


Abstract

This paper describes techniques for translating out-of-core programs written in a data-parallel language such as HPF into message-passing node programs with explicit parallel I/O. We describe the basic compilation model and the various steps involved in compilation, and explain the compilation process with the help of an out-of-core matrix multiplication program. We first discuss how an out-of-core program can be translated by extending the method used for translating in-core programs, and demonstrate that a straightforward extension of the in-core compiler does not work for out-of-core programs. We then describe how the compiler can optimize the code by (1) estimating the I/O costs associated with different array access patterns, (2) reorganizing array accesses, (3) selecting the method with the least I/O cost, and (4) allocating memory according to access cost among competing out-of-core arrays. These optimizations can reduce the amount of I/O by as much as an order of magnitude. Performance results on the Intel Touchstone Delta are presented and analyzed.

Additional Information

NPAC Technical Report SCCS-622, Sept. 1994

Creative Commons License

Creative Commons Attribution 3.0 License
This work is licensed under a Creative Commons Attribution 3.0 License.