ANDF: Finally an UNCOL After 30 Years

Authors: M. Benitez, Paul Chan, Jack Davidson, Anne Holler, Sue Meloy, and Vatsa Santhanam (Department of Computer Science, University of Virginia)

In the late 1950's it was proposed that a Universal Computer Oriented Language (UNCOL) be developed to facilitate the development of language processors for various architectures. While an UNCOL was never realized, the use of some type of intermediate language for supporting the construction of compilers has found widespread use. Popular examples include P-code, which is used to support Pascal; U-code, a descendant of P-code, which has been used to support several languages; OCODE, which was used as the intermediate language for BCPL; and EM-1, which is used in the Amsterdam Compiler Kit and also supports several languages. These are only a few of the more well-known and widely used intermediate languages. This paper describes an intermediate language developed in response to the Open Software Foundation's request for the development of an Architecture Neutral Distribution Format (ANDF). The intermediate language, called HPcode-Plus, permits the distribution of a single version of an application that, without modification, will run on any hardware platform. The intermediate language and the accompanying translators demonstrate that an UNCOL is now technologically feasible. Clearly, if accepted in the marketplace, such an intermediate language will have tremendous benefits for end-users.

1. Introduction

The acronym UNCOL (Universal Computer Oriented Language) is well known to the compiler construction community [AHO86, FISC88, TREM85]. In the late 1950's, UNCOL was proposed as a way to reduce the effort to construct compilers for new languages and new architectures [STEE61, STRO59]. The classic argument was that if there were M languages and N machines, M×N compilers would be required to make each language available on all the machines. The creators of the UNCOL concept noted that only M+N translators would be required if a language could be constructed to serve as a bridge between the languages and the architectures.
For each language, a source-language-to-UNCOL translator would be constructed. To implement the language on any machine would simply require the construction of an UNCOL-to-machine-language translator. Conceptually the approach is quite appealing. In addition to reducing the cost of developing compilers, programs written in a language for which a source-language-to-UNCOL translator exists could be immediately moved to any machine for which there was also an UNCOL-to-machine-language translator. This would, of course, include the translators themselves. Unfortunately, despite the benefits, UNCOL was never realized. There were a number of technological problems that could not be overcome. A major problem was that the UNCOL process could not produce executable code that was as fast as the executable code produced by a compiler designed specifically for the target architecture. Consequently, applications produced using UNCOL translators would run much slower than those produced using a conventional compiler. The primary reason for this loss of performance was that existing code generation and optimization technologies were not able to efficiently map a language-independent, architecture-independent intermediate language onto the range of architectures available. Another problem was the inability to design an intermediate language and construct the accompanying source-language-to-UNCOL translators that avoided assumptions about the target architecture. Typical source-language translators are written with knowledge of various key characteristics of the target architecture. For example, most source-language translators, or front ends, are written with knowledge of the sizes and alignment requirements of the basic data types supported by the target architecture.
Such information permits the front end to compute sizes of records and structures, determine offsets of variables, properly initialize locations in memory, and in some cases decide the most appropriate operations to use. In order to minimize the effort to move these translators to different architectures, such information is usually isolated and parameterized so that it is easy to change. Nonetheless, this information, as well as other information about the target architecture, is used, and its ramifications appear in the intermediate language the front end produces. This paper describes the design of an intermediate language and accompanying translators that address the problem UNCOL attempted to solve 30 years ago. The intermediate language was developed in response to the Open Software Foundation's Request for Technology to produce an Architecture Neutral Distribution Format (ANDF) [OSF90]. The intermediate language, called HPcode-Plus, contains no architecture dependencies. Application programs compiled into HPcode-Plus can be moved to any architecture that has an HPcode-Plus-to-machine-code translator. To demonstrate the feasibility of the process, we have constructed a front end that translates ANSI C programs to HPcode-Plus as well as translators for three machines (Motorola 68020, Hewlett-Packard PA-RISC, and Intel 80386/80387) that translate HPcode-Plus programs to machine code. Many applications totaling over 500,000 lines of code have been compiled and the resulting HPcode-Plus files have been moved to and installed on the three machines. The paper focuses on the features of the intermediate language that allow architecture dependencies to be avoided.
Note: Abstract extracted from PDF file via OCR

All rights reserved (no additional license for public reuse)
Source Citation:

Benitez, M, Paul Chan, Jack Davidson, Anne Holler, Sue Meloy, and Vatsa Santhanam. "ANDF: Finally an UNCOL After 30 Years." University of Virginia Dept. of Computer Science Tech Report (1991).

University of Virginia, Department of Computer Science
Published Date: