As a construction system must manage the construction of large, complex systems on multiple platforms simultaneously, a major feature of the system is dealing with the complexity of the construction process. qef uses a structured, layered approach to deal with this complexity. In many ways, this architecture parallels the use of structured programming and high-level programming languages to create and manage large programs. Abstraction and information-hiding are used to control the amount of information that one must deal with at any level.
In some ways, qef is similar to make. There is a text file that contains the construction control script; the user invokes qef with arguments specifying files or constructs to be created; ultimately, commands are invoked, often by a make-like process, to create the required objects or perform the required tasks.
As such, qef can be viewed as a replacement for make. However, the most important features of qef are for configuring and controlling processes and preparing the input for the back-end.
Thus qef is primarily a driver of other processes. Rather than attempting to solve all the problems using a monolithic program, qef provides facilities and structures to select, configure and control other processes. As will be seen, this approach provides flexibility in configuring the construction process, while ensuring that there is a single universal interface.
qef's processing is roughly divided into three stages: construction of the build parameters/configuration database, script preparation, and back-end processing.
The Parameter/Configuration Database Construction
The first stage invokes a program called lclvrs, which prepares a database of the build parameters and configuration information for use by other programs. The information is represented by a hierarchy of configuration files, rooted at the top of the current tree and distributed through the source tree. Configuration files at a particular level of the tree apply to all sublevels of that tree. This parallels the lexical scoping used in programming languages -- configuration information is visible only where it is required.
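The scoping rule above can be sketched in a few lines of Python. This is purely illustrative and not how lclvrs is implemented; the function names and the use of plain dicts for per-level parameters are assumptions made for the sketch. The point is the scoping discipline: parameters are collected from the top of the tree down to the current directory, with deeper levels overriding shallower ones, as in lexical scoping.

```python
# Illustrative sketch only -- not the real lclvrs. Settings found at a
# level of the tree apply to all sublevels, and deeper settings
# override shallower ones.
from pathlib import PurePosixPath

def level_dirs(root, cwd):
    """Directories on the path from the tree root down to cwd, inclusive."""
    root, cwd = PurePosixPath(root), PurePosixPath(cwd)
    dirs = [root]
    for part in cwd.relative_to(root).parts:
        dirs.append(dirs[-1] / part)
    return dirs

def merge_levels(levels):
    """Merge per-level parameter dicts, ordered from root to cwd.
    Later (deeper) levels override earlier (shallower) ones."""
    merged = {}
    for level in levels:
        merged.update(level)
    return merged
```

For example, a tree-wide `OPT=-O` setting at the root would be overridden by an `OPT=-g` setting in a subdirectory, but only for that subdirectory and its sublevels.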
lclvrs finds and processes the relevant files for the current directory and outputs a binary file that can be directly loaded by other programs to retrieve the various parameters, options and controls provided via the lclvrs files. Parameters are used to specify search paths, process controls, build options, the names of special files and directories, tool names and flags, and so on. In this and other documents, the convention used to indicate the use of a lclvrs variable's value is either @Variable or @Array[value], "@" being the lclvrs variable escape character.
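A toy model of this notation may help fix the idea. The sketch below is an assumption-laden simplification: it treats the parameter database as a plain dict (the real database is a binary file loaded by the tools) and recognizes only the two forms shown above, @Variable and @Array[value].

```python
# Toy expansion of @Variable and @Array[value] references -- an
# illustration of the notation, not the real lclvrs loader.
import re

def expand(text, variables):
    """Replace @Name and @Name[key] references with values drawn from
    a dict standing in for the parameter database."""
    def repl(match):
        name, key = match.group(1), match.group(2)
        value = variables[name]
        return value[key] if key is not None else value
    return re.sub(r"@(\w+)(?:\[(\w+)\])?", repl, text)
```

Thus a fragment such as `@CC @Flags[debug]` would expand to the configured compiler name followed by the configured debug flags.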
The major purpose of the script preparation stage is to transform as simple a specification as possible of the required processing into the back-end command to be run and the input to that command. This transformation can range from simply naming a back-end process and its input file via lclvrs parameters, to the more common three-stage process: creation of a list of source files, script generation using the source list as arguments or input, and macro processing of the generated script. Practically any command may serve as a script generator, but two programs, qsg and qefdirs, are used most of the time. qsg, qefdirs and the macro processor are described briefly at a later point.
The third stage is the back-end, which usually does the real work. In most instances, this will be a shell or make-like program. Some back-ends that are specifically designed for use by qef are discussed in later sections. The actual back-end to be run is specified by a lclvrs variable or a preprocessor symbol.
The Commonest Implementations
While this architecture allows a wide range of processing models, in practice, two models are used in the majority of directories that contain conventional processing.
In directories of directories, the user provides a list of the directories that contain constructions, the type of processing each directory supports, and its dependencies on other directories. The list may also partition the directories into individually selectable sets. The script preparation stage invokes the program qefdirs, which transforms this information into a make-like script providing labels (i.e., targets) to perform aggregate operations (e.g., All, Install, Post, Test) or to process named directories or sets of directories. The recipe for an operation is usually just to invoke qef in the named directory with the appropriate argument.
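The transformation qefdirs performs can be sketched as follows. This is a hypothetical miniature, not qefdirs itself: the input representation (a list of name/dependency pairs) and the emitted recipe (`cd dir && qef All`) are assumptions chosen to show the shape of the output, a make-like script with an aggregate target and one target per directory.

```python
# Hypothetical miniature of what qefdirs produces -- not the real tool.
def qefdirs_sketch(dirs):
    """dirs: list of (name, dependencies) pairs. Emit a make-like
    script with an aggregate 'All' target and one target per directory
    whose recipe re-invokes qef in that directory."""
    lines = ["All: " + " ".join(name for name, _ in dirs), ""]
    for name, deps in dirs:
        lines.append(f"{name}: {' '.join(deps)}".rstrip())
        lines.append(f"\tcd {name} && qef All")
        lines.append("")
    return "\n".join(lines)
```

Given, say, a `lib` directory and a `cmd` directory that depends on it, the sketch emits targets ensuring `lib` is built before `cmd`, with each recipe simply re-invoking qef in the named directory.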
The most common model (used in approximately 75% of 1200 directories examined in various Toronto sites) is used for directories that contain source files to be processed. Once the configuration database has been assembled, a snapshot of the file system is generated. This generation uses configuration information to determine the search paths to be used and the suffixes of relevant files. The source database is typically input to the script generation stage.
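The snapshot step can be sketched as a single pass over the configured search path. The sketch below is illustrative only: the real source-database format and tool are not shown here, and the `listdir` callback (which would be `os.listdir` in practice) is a device to keep the sketch self-contained. Note the view-pathing behaviour: a file found earlier in the search path shadows one of the same name found later.

```python
# Illustrative snapshot of the file system: one pass over the search
# path, recording files with relevant suffixes. Not the real tool.
def snapshot(search_paths, suffixes, listdir):
    """Record every file with a relevant suffix found along the search
    path; entries earlier in the path shadow later ones (view-pathing)."""
    found = {}
    for directory in search_paths:
        for name in sorted(listdir(directory)):
            if name.endswith(tuple(suffixes)) and name not in found:
                found[name] = f"{directory}/{name}"
    return found
```

A working tree searched ahead of a baseline tree would thus yield the working copy of any file present in both, and the baseline copy otherwise.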
That this database is created at the beginning of processing is significant: it aids debugging of the construction, since the initial conditions are preserved, and it is also efficient, as tools that combine view-pathing and rule-inference make many unnecessary accesses to the file-system meta-data.
The script preparation is done using qsg, an algorithmic programming language. The configuration and source databases, plus a qsg script, are processed to generate the necessary recipes to do the construction. qsg's output is then processed by the macro processor, the output of which is fed as input to the back-end qmk, a make-like program described at a later point.
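The flow described above is, in essence, a three-stage pipe. The sketch below makes that structure explicit; the function name and signature are assumptions made for illustration, with the three stages (qsg, the macro processor, and qmk) standing in as callables.

```python
# Structural sketch of the common flow: generator -> macro processor
# -> back-end. The stages are passed in as callables; in practice they
# are separate programs (qsg, the macro processor, qmk).
def build_pipeline(generate, macro_expand, backend, config, sources):
    """Generate recipes from the configuration and source databases,
    macro-expand them, then hand the expanded script to the back-end."""
    script = generate(config, sources)
    expanded = macro_expand(script)
    return backend(expanded)
```

The value of the arrangement is that each stage can be replaced independently: any command may serve as the generator, and the back-end is selected by configuration rather than being wired in.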
Although the above may appear complicated, most users are unaware of the actual processing. A qeffile that invokes the above is often as simple as:
Begin qsg -M
This example is not far-fetched; the average size of the 1,200 sample qeffiles was seven lines.