Background

This chapter focuses on the construction of a family of simple chips called Boolean gates. Since Boolean gates are physical implementations of Boolean functions, we start with a brief treatment of Boolean algebra. We then show how Boolean gates implementing simple Boolean functions can be interconnected to deliver the functionality of more complex chips. We conclude the background section with a description of how hardware design is actually done in practice, using software simulation tools.

Boolean Algebra

Boolean algebra deals with Boolean (also called binary) values that are typically labeled true/false, 1/0, yes/no, on/off, and so forth. We will use 1 and 0. A Boolean function is a function that operates on binary inputs and returns binary outputs. Since computer hardware is based on the representation and manipulation of binary values, Boolean functions play a central role in the specification, construction, and optimization of hardware architectures. Hence, the ability to formulate and analyze Boolean functions is the first step toward constructing computer architectures.

Truth Table Representation

The simplest way to specify a Boolean function is to enumerate all the possible values of the function’s input variables, along with the function’s output for each set of inputs. This is called the truth table representation of the function, illustrated in Figure 1.1.

x  y  z  |  f(x, y, z)
0  0  0  |  0
0  0  1  |  0
0  1  0  |  1
0  1  1  |  0
1  0  0  |  1
1  0  1  |  0
1  1  0  |  1
1  1  1  |  0

Figure 1.1 Truth table representation of a Boolean function (example).

The first three columns of Figure 1.1 enumerate all the possible binary values of the function’s variables. For each one of the $2^n$ possible tuples $v_1 \ldots v_n$ (here $n = 3$), the last column gives the value of $f(v_1 \ldots v_n)$.

Boolean Expressions

In addition to the truth table specification, a Boolean function can also be specified using Boolean operations over its input variables. The basic Boolean operators that are typically used are:

  • And ($x$ And $y$ is 1 exactly when both $x$ and $y$ are 1),
  • Or ($x$ Or $y$ is 1 exactly when either $x$ or $y$ or both are 1),
  • and Not (Not $x$ is 1 exactly when $x$ is 0).

We will use a common arithmetic-like notation for these operations: $x \cdot y$ (or $xy$) means $x$ And $y$, $x + y$ means $x$ Or $y$, and $\bar{x}$ means Not $x$.

To illustrate, the function defined in Figure 1.1 is equivalently given by the Boolean expression $f(x, y, z) = (x + y) \cdot \bar{z}$. For example, let us evaluate this expression on the inputs $x = 0$, $y = 1$, $z = 0$ (third row in the table). Since $y$ is 1, it follows that $x + y = 1$, and thus $(x + y) \cdot \bar{z} = 1 \cdot \bar{0} = 1 \cdot 1 = 1$. The complete verification of the equivalence between the expression and the truth table is achieved by evaluating the expression on each of the eight possible input combinations, verifying that it yields the same value listed in the table’s right column.

Canonical Representation

As it turns out, every Boolean function can be expressed using at least one Boolean expression called the canonical representation. Starting with the function’s truth table, we focus on all the rows in which the function has value 1. For each such row, we construct a term created by And-ing together literals (variables or their negations) that fix the values of all the row’s inputs. For example, let us focus on the third row in Figure 1.1, where the function’s value is 1. Since the variable values in this row are $x = 0$, $y = 1$, $z = 0$, we construct the term $\bar{x} y \bar{z}$. Following the same procedure, we construct the terms $x \bar{y} \bar{z}$ and $x y \bar{z}$ for rows 5 and 7. Now, if we Or-together all these terms (for all the rows where the function has value 1), we get a Boolean expression that is equivalent to the given truth table. Thus the canonical representation of the Boolean function shown in Figure 1.1 is $f(x, y, z) = \bar{x} y \bar{z} + x \bar{y} \bar{z} + x y \bar{z}$. This construction leads to an important conclusion: Every Boolean function, no matter how complex, can be expressed using three Boolean operators only: And, Or, and Not.
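As a quick sanity check (an aside added here, not part of the construction itself), the canonical form can be simplified back to the earlier expression $(x + y) \cdot \bar{z}$ using standard Boolean identities:

$$\bar{x} y \bar{z} + x \bar{y} \bar{z} + x y \bar{z} = (\bar{x} y + x \bar{y} + x y)\,\bar{z} = (\bar{x} y + x(\bar{y} + y))\,\bar{z} = (\bar{x} y + x)\,\bar{z} = (x + y)\,\bar{z},$$

where the last step uses the absorption identity $x + \bar{x} y = x + y$.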

Two-Input Boolean Functions

An inspection of Figure 1.1 reveals that the number of Boolean functions that can be defined over $n$ binary variables is $2^{2^n}$ (a truth table over $n$ variables has $2^n$ rows, and each row can independently be assigned the output 0 or 1). For example, the sixteen Boolean functions spanned by two variables are listed in Figure 1.2. These functions were constructed systematically, by enumerating all the possible 4-wise combinations of binary values in the four right columns. Each function has a conventional name that seeks to describe its underlying operation.

Function      |  x=0, y=0  |  x=0, y=1  |  x=1, y=0  |  x=1, y=1
Constant 0    |     0      |     0      |     0      |     0
And           |     0      |     0      |     0      |     1
x And Not y   |     0      |     0      |     1      |     0
x             |     0      |     0      |     1      |     1
Not x And y   |     0      |     1      |     0      |     0
y             |     0      |     1      |     0      |     1
Xor           |     0      |     1      |     1      |     0
Or            |     0      |     1      |     1      |     1
Nor           |     1      |     0      |     0      |     0
Equivalence   |     1      |     0      |     0      |     1
Not y         |     1      |     0      |     1      |     0
If y then x   |     1      |     0      |     1      |     1
Not x         |     1      |     1      |     0      |     0
If x then y   |     1      |     1      |     0      |     1
Nand          |     1      |     1      |     1      |     0
Constant 1    |     1      |     1      |     1      |     1

Figure 1.2 All the Boolean functions of two variables

Here are some examples: The name of the Nor function is shorthand for Not-Or: Take the Or of $x$ and $y$, then negate the result. The Xor function, shorthand for “exclusive Or”, returns 1 when its two variables have opposing truth-values and 0 otherwise. Conversely, the Equivalence function returns 1 when the two variables have identical truth-values. The If-$x$-then-$y$ function (also known as $x \rightarrow y$, or “$x$ Implies $y$”) returns 1 when $x$ is 0 or when both $x$ and $y$ are 1. The other functions are self-explanatory.

The Nand function (as well as the Nor function) has an interesting theoretical property: Each one of the operations And, Or, and Not can be constructed from it, and it alone (e.g., x Or y = (x Nand x) Nand (y Nand y)). And since every Boolean function can be constructed from And, Or, and Not operations using the canonical representation method, it follows that every Boolean function can be constructed from Nand operations alone. This result has far-reaching practical implications: Once we have at our disposal a physical device that implements Nand, we can use many copies of this device (wired in a certain way) to implement in hardware any Boolean function.
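To make the identity above concrete, here is a minimal sketch in the HDL used later in this chapter, assuming a primitive Nand part whose pins are labeled a, b, and out (the chip name OrFromNand and the internal pin names are illustrative):

    CHIP OrFromNand {
        IN x, y;
        OUT out;

        PARTS:
        // Nand(x, x) computes Not(x)
        Nand(a=x, b=x, out=notx);
        // Nand(y, y) computes Not(y)
        Nand(a=y, b=y, out=noty);
        // Nand(Not(x), Not(y)) = Not(Not(x) And Not(y)) = x Or y
        Nand(a=notx, b=noty, out=out);
    }

The same trick, applied systematically, yields Not and And as well, which is exactly why Nand alone suffices.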

Gate Logic

A gate is a physical device that implements a Boolean function. If a Boolean function $f$ operates on $n$ variables and returns $m$ binary results (in all our examples so far, $m$ was 1), the gate that implements $f$ will have $n$ input pins and $m$ output pins. When we put some values $v_1 \ldots v_n$ in the gate’s input pins, the gate’s “logic” – its internal structure – should compute and output $f(v_1 \ldots v_n)$. And just like complex Boolean functions can be expressed in terms of simpler functions, complex gates are composed from more elementary gates. The simplest gates of all are made from tiny switching devices, called transistors, wired in a certain topology designed to effect the overall gate functionality.

Although most digital computers today use electricity to represent and transmit binary data from one gate to another, any alternative technology permitting switching and conducting capabilities can be employed. Indeed, during the last fifty years, researchers have built many hardware implementations of Boolean functions, including magnetic, optical, biological, hydraulic, and pneumatic mechanisms. Today, most gates are implemented as transistors etched in silicon, packaged as chips. In this course we use the words chip and gate interchangeably, tending to use the term gates for simple chips.

The availability of alternative switching technology options, on the one hand, and the observation that Boolean algebra can be used to abstract the behavior of any such technology, on the other, are extremely important. Basically, this implies that computer scientists don’t have to worry about physical things like electricity, circuits, switches, relays, and power supply. Instead, computer scientists can be content with the abstract notions of Boolean algebra and gate logic, trusting that someone else (the physicists and electrical engineers—bless their souls) will figure out how to actually realize them in hardware. Hence, a primitive gate (see Figure 1.3) can be viewed as a black box device that implements an elementary logical operation in one way or another—we don’t care how. A hardware designer starts from such primitive gates and designs more complicated functionality by interconnecting them, leading to the construction of composite gates.

Figure 1.3 Standard symbolic notation of some elementary logic gates.

Figure 1.4 Composite implementation of a three-way And gate. The rectangle on the right defines the conceptual boundaries of the gate interface.

Primitive and Composite Gates

Since all logic gates have the same input and output semantics (0’s and 1’s), they can be chained together, creating composite gates of arbitrary complexity. For example, suppose we are asked to implement the 3-way Boolean function And($a$, $b$, $c$). Using Boolean algebra, we can begin by observing that $a \cdot b \cdot c = (a \cdot b) \cdot c$, or, using prefix notation, And($a$, $b$, $c$) = And(And($a$, $b$), $c$). Next, we can use this result to construct the composite gate depicted in Figure 1.4.
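Expressed in the HDL introduced later in this chapter, the same construction might look as follows (a sketch: the chip name And3 and the internal pin name ab are illustrative, and the two-input And gate is assumed to expose pins a, b, and out):

    CHIP And3 {
        IN a, b, c;
        OUT out;

        PARTS:
        // First compute a And b on an internal pin
        And(a=a, b=b, out=ab);
        // Then And the intermediate result with c
        And(a=ab, b=c, out=out);
    }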

The construction described in Figure 1.4 is a simple example of gate logic, also called logic design. Simply put, logic design is the art of interconnecting gates in order to implement more complex functionality, leading to the notion of composite gates. Since composite gates are themselves realizations of (possibly complex) Boolean functions, their “outside appearance” (e.g., left side of Figure 1.4) looks just like that of primitive gates. At the same time, their internal structure can be rather complex.

We see that any given logic gate can be viewed from two different perspectives: external and internal. The right-hand side of Figure 1.4 gives the gate’s internal architecture, or implementation, whereas the left side shows only the gate interface, namely, the input and output pins that it exposes to the outside world. The former is relevant only to the gate designer, whereas the latter is the right level of detail for other designers who wish to use the gate as an abstract off-the-shelf component, without paying attention to its internal structure.

Let us consider another logic design example—that of a Xor gate. As discussed before, Xor($a$, $b$) is 1 exactly when either $a$ is 1 and $b$ is 0, or when $a$ is 0 and $b$ is 1. Said otherwise, Xor($a$, $b$) = Or(And($a$, Not($b$)), And(Not($a$), $b$)). This definition leads to the logic design shown in Figure 1.5.

Figure 1.5 Xor gate, along with a possible implementation.

Note that the gate interface is unique: There is only one way to describe it, and this is normally done using a truth table, a Boolean expression, or some verbal specification.

This interface, however, can be realized using many different implementations, some better than others in terms of cost, speed, and simplicity. For example, the Xor function can be implemented using four, rather than five, And, Or, and Not gates. Thus, from a functional standpoint, the fundamental requirement of logic design is that the gate implementation realizes its stated interface, in one way or another. From an efficiency standpoint, the general rule is to try to do more with less, that is, use as few gates as possible.
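One such four-gate design (offered here purely as an illustration; the text does not specify which variant it has in mind) uses the identity Xor($a$, $b$) = And(Or($a$, $b$), Not(And($a$, $b$))). A sketch in the course’s HDL, with the chip name Xor4 and the internal pin names chosen for illustration:

    CHIP Xor4 {
        IN a, b;
        OUT out;

        PARTS:
        // a Or b: 1 when at least one input is 1
        Or(a=a, b=b, out=aOrB);
        // a And b: 1 only when both inputs are 1
        And(a=a, b=b, out=aAndB);
        // Negate a And b
        Not(in=aAndB, out=notAAndB);
        // Xor = (a Or b) And Not(a And b)
        And(a=aOrB, b=notAAndB, out=out);
    }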

To sum up, the art of logic design can be described as follows: Given a gate specification (interface), find an efficient way to implement it using other gates that were already implemented. This, in a nutshell, is what we will do in the rest of this chapter.

Actual Hardware Construction

Having described the logic of composing complex gates from simpler ones, we are now in a position to discuss how gates are actually built. Let us start with an intentionally naïve example.

Suppose we open a chip fabrication shop in our home garage. Our first contract is to build a hundred Xor gates. Using the order’s downpayment, we purchase a soldering gun, a roll of copper wire, and three bins labeled “And gates,” “Or gates,” and “Not gates,” each containing many identical copies of these elementary logic gates. Each of these gates is sealed in a plastic casing that exposes some input and output pins, as well as a power supply plug. To get started, we pin Figure 1.5 to our garage wall and proceed to realize it using our hardware. First, we take two And gates, two Not gates, and one Or gate, and mount them on a board according to the figure’s layout. Next, we connect the chips to one another by running copper wires among them and by soldering the wire ends to the respective input/output pins. Now, if we follow the gate diagram carefully, we will end up having three exposed wire ends. We then solder a pin to each one of these wire ends, seal the entire device (except for the three pins) in a plastic casing, and label it Xor. We can repeat this assembly process many times over. At the end of the day, we can store all the chips that we’ve built in a new bin and label it “Xor gates”. If we (or other people) are asked to construct some other chips in the future, we’ll be able to use these Xor gates as elementary building blocks, just as we used the And, Or, and Not gates before.

As the reader has probably sensed, the garage approach to chip production leaves much to be desired. For starters, there is no guarantee that the given chip diagram is correct. Although we can prove correctness in simple cases like Xor, we cannot do so in many realistically complex chips. Thus, we must settle for empirical testing: Build the chip, connect it to a power supply, activate and deactivate the input pins in various configurations, and hope that the chip outputs will agree with its specifications. If the chip fails to deliver the desired outputs, we will have to tinker with its physical structure—a rather messy affair. Further, even if we come up with the right design, replicating the chip assembly process many times over will be a time-consuming and error-prone affair. There must be a better way!

Hardware Description Language (HDL)

Today, hardware designers no longer build anything with their bare hands. Instead, they plan and optimize the chip architecture on a computer workstation, using a structured modeling formalism called a Hardware Description Language, or HDL (well-known examples include VHDL and Verilog). The designer specifies the chip structure by writing an HDL program, which is then subjected to a rigorous battery of tests. These tests are carried out virtually, using computer simulation: A special software tool, called a hardware simulator, takes the HDL program as input and builds an image of the modeled chip in memory. Next, the designer can instruct the simulator to test the virtual chip on various sets of inputs, generating simulated chip outputs. The outputs can then be compared to the desired results, as mandated by the client who ordered the chip built.

In addition to testing the chip’s correctness, the hardware designer will typically be interested in a variety of parameters such as speed of computation, energy consumption, and the overall cost implied by the chip design. All these parameters can be simulated and quantified by the hardware simulator, helping the designer optimize the design until the simulated chip delivers desired cost/performance levels.

Thus, using HDL, one can completely plan, debug, and optimize the entire chip before a single penny is spent on actual production. When the HDL program is deemed complete, that is, when the performance of the simulated chip satisfies the client who ordered it, the HDL program can become the blueprint from which many copies of the physical chip can be stamped in silicon. This final step in the chip life cycle - from an optimized HDL program to mass production - is typically outsourced to companies that specialize in chip fabrication, using one switching technology or another.

Example: Building a Xor Gate

As we have seen in Figures 1.2 and 1.5, one way to define exclusive Or is Xor($a$, $b$) = Or(And($a$, Not($b$)), And(Not($a$), $b$)). This logic can be expressed either graphically, as a gate diagram, or textually, as an HDL program. The latter program is written in the HDL variant used throughout this course. See Simulator 1.1 for the details.

Simulator 1.1 Xor gate, along with a possible implementation.
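Since the Simulator 1.1 listing itself is not reproduced here, the following sketch shows what such an HDL program typically looks like, using the pin-name conventions described below (the internal pin nota appears in the discussion that follows; notb, w1, and w2 are illustrative names):

    CHIP Xor {
        IN a, b;
        OUT out;

        PARTS:
        // Negate each input
        Not(in=a, out=nota);
        Not(in=b, out=notb);
        // a And Not(b), and Not(a) And b
        And(a=a, b=notb, out=w1);
        And(a=nota, b=b, out=w2);
        // Xor = Or of the two And terms
        Or(a=w1, b=w2, out=out);
    }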

Explanation

An HDL definition of a chip consists of a header section and a parts section. The header section specifies the chip interface, namely the chip name and the names of its input and output pins. The parts section describes the names and topology of all the lower-level parts (other chips) from which this chip is constructed. Each part is represented by a statement that specifies the part name and the way it is connected to other parts in the design. Note that in order to write such statements legibly, the HDL programmer must have complete documentation of the underlying parts’ interfaces. For example, the code in Simulator 1.1 assumes that the input and output pins of the Not gate are labeled in and out, and those of And and Or are labeled a, b, and out. This API-type information is not obvious, and one must have access to it before one can plug the chip parts into the present code.

Inter-part connections are described by creating and connecting internal pins, as needed. For example, consider the bottom of the gate diagram, where the output of a Not gate is piped into the input of a subsequent And gate. The HDL code describes this connection by the pair of statements Not(..., out=nota) and And(a=nota, ...). The first statement creates an internal pin (outbound wire) named nota and feeds the Not gate’s out pin into it. The second statement feeds the value of nota into the a input of an And gate. Note that pins may have an unlimited fan-out. For example, in Simulator 1.1, each input is simultaneously fed into two gates. In gate diagrams, multiple connections are described using forks. In HDL, the existence of forks is implied by the code.

Testing

Rigorous quality assurance mandates that chips be tested in a specific, replicable, and well-documented fashion. With that in mind, hardware simulators are usually designed to run test scripts, written in some scripting language. For example, the test script in the tab “Tests” of Simulator 1.1 is written in the scripting language understood by the embedded hardware simulator.

Let us give a brief description of the test script from Simulator 1.1. First, the script tells the simulator to get ready to print the values of selected variables. Next, the script lists a series of testing scenarios, designed to simulate the various contingencies under which the Xor chip will have to operate in “real-life” situations. In each scenario, the script instructs the simulator to bind the chip inputs to certain data values, compute the resulting output, and record the test results. In the case of simple gates like Xor, one can write an exhaustive test script that enumerates all the possible input values of the gate. The resulting output (tab “Output” in Simulator 1.1) can then be viewed as a complete empirical proof that the chip is well designed. Such certitude is a luxury we will not have when testing more complex chips, as we will see later.
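As with the HDL program, the actual script in Simulator 1.1 is not reproduced here; the following is a sketch of an exhaustive Xor test written in the style of the test-scripting language used by Nand2Tetris-style hardware simulators (the file names and output-format directives are illustrative, and the embedded simulator’s dialect may differ):

    // Load the chip and declare where and how to log the results
    load Xor.hdl,
    output-file Xor.out,
    output-list a%B3.1.3 b%B3.1.3 out%B3.1.3;

    // Enumerate all four input combinations, evaluating the chip
    // and recording one output line for each scenario
    set a 0, set b 0, eval, output;
    set a 0, set b 1, eval, output;
    set a 1, set b 0, eval, output;
    set a 1, set b 1, eval, output;

In this style of script, each set command binds an input pin to a value, eval propagates the values through the simulated chip, and output appends a line of the selected variables to the output file.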

Hardware Simulation

Since HDL is a hardware construction language, the process of writing and debugging HDL programs is quite similar to software development. The main difference is that instead of writing code in a language like Java, we write it in HDL, and instead of using a compiler to translate and test the code, we use a hardware simulator. The hardware simulator is a computer program that knows how to parse and interpret HDL code, turn it into an executable representation, and test it according to the specifications of a given test script. There exist many commercial hardware simulators on the market, and these vary greatly in terms of cost, complexity, and ease of use. This website provides a simple (and free) hardware simulator that is sufficiently powerful to support sophisticated hardware design projects. In particular, the simulator provides all the necessary tools for building, testing, and integrating all the chips presented in the course, leading to the construction of a general-purpose computer.