# A Thinking Person's Guide to Programmable Logic

## Introduction, Venn Diagrams

The basic algebra of binary elements is called Boolean Algebra. As noted in Wikipedia, it is named for George Boole (1815-1864), an English mathematician and pioneer in logic. His book "The Laws of Thought" (1854) lays out the algebra of thought, or reasoning.

Before getting into the details of Boolean algebra, we can first consider a more general visual description of sets and set theory, and how elements and sets are related. To begin, consider the following depiction of the "Universe":

Any element in the Universe can be in set $A$, $B$, or neither. This visualization is called a "Venn Diagram", originated by John Venn (1834-1923), who invented the diagrams in 1880. He was an Anglican Priest and a Fellow of the Royal Society. His work was so well appreciated that Caius College at Cambridge honored him with a stained glass window:

Now here's where it gets interesting. Imagine that the sets $A$ and $B$ can intersect. For instance, $A$ is the set of all females and $B$ is the set of all Republicans. It might look like the following, where $C$ denotes the set of females who are also Republicans:
So we can write that $C=A$ and $B$, or more concisely $C=A\&B$ (and sometimes you will see $C=A\cdot B$), or in the jargon of set theory, $C=A\cap B$, where the symbol $\cap$ means "intersection".

One can also form the "union", or in other words $C=A$ or $B$, or more commonly $C=A+B$, $C=A|B$, $C=A\cup B$:

Finally, one can define $C$ as being $A$ or $B$ but not both:

This is called the "exclusive or" (or "xor"), and is denoted as $C=A\oplus B$. In the world of programmable logic, we will use the following notation:

| and | or | xor |
|---|---|---|
| $A\cdot B$ | $A+B$ | $A\oplus B$ |
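These three operations map directly onto Python's built-in `set` type, so we can sketch them in a couple of lines (the sets $A$ and $B$ below are made-up examples, not from the text):

```python
# Intersection, union, and symmetric difference on Python sets
# mirror "and", "or", and "xor" on the Venn diagrams above.
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

assert A & B == {3, 4}                 # intersection: A AND B
assert A | B == {1, 2, 3, 4, 5, 6}     # union: A OR B
assert A ^ B == {1, 2, 5, 6}           # symmetric difference: A XOR B
```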

## Brief Excursion to Bayesian Statistics

We can think about these sets in terms of probability. To set this up, let's use
Figure 1, and imagine that "The Universe", $U$, consists of all Republicans, $A$ is the set of all Republicans who support Donald Trump, and $B$ is the set of all Republicans who will actually vote in the election. Imagine now that we want to understand the probability that a Republican will vote for Donald Trump. If we divide the area $A$ by the area of "The Universe" ($U$) we can define $P(A)=A/U$. Similarly, $P(B)=B/U$.

Now let's consider the ratio $C/U$. This is the probability that a Republican will vote for Donald. But what if we want a slightly different probability: the probability that a Republican who actually votes will vote for Donald? That probability is labeled $P(A|B)$, which means the probability of $A$ given $B$, and is given by the ratio $C/B$, since $B$ consists of the set of people who will vote.

$P(A|B)=\frac{C}{B}=\frac{A\cap B}{B}$

Next we want to calculate the probability that a Republican who supports Donald will actually vote. This will be given by $C/A$ since $A$ consists of the set of people who support Donald Trump:

$P(B|A)=\frac{C}{A}=\frac{A\cap B}{A}$

We can take these two formulas and eliminate $A\cap B$ to get:

$P(B|A)\cdot A=P(A|B)\cdot B$

And if we divide both sides by $U$ (so that $A/U=P(A)$ and $B/U=P(B)$) and rearrange, we get the equation: $$\frac{P(B|A)}{P(B)}=\frac{P(A|B)}{P(A)}$$ This famous equation is called Bayes' Theorem, first described by Rev. Thomas Bayes (1701-1761) and updated by Pierre-Simon Laplace in 1812. It describes a way of understanding statistical probabilities given prior information, and is extremely important in many fields of science that rely heavily on statistics. As usual, the article in Wikipedia is quite good and worth reading.
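As a sanity check, Bayes' theorem can be verified numerically. The head-counts below are made up purely for illustration (they are not data from the text):

```python
# Made-up counts for the Venn-diagram scenario above:
U = 1000   # "The Universe": all Republicans
A = 600    # those who support the candidate
B = 500    # those who will actually vote
C = 400    # the intersection A ∩ B (supporters who vote)

P_A = A / U              # P(A) = 0.6
P_B = B / U              # P(B) = 0.5
P_A_given_B = C / B      # P(A|B) = C/B = 0.8
P_B_given_A = C / A      # P(B|A) = C/A = 2/3

# Bayes' theorem: P(A|B)/P(A) = P(B|A)/P(B)
assert abs(P_A_given_B / P_A - P_B_given_A / P_B) < 1e-12
```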

## Boolean Algebra

From the above diagrams, it is easy to see that these 3 operations are all related by the equation: $$(A\cdot B)+(A\oplus B)=A+B$$ The algebra formed by these sets and operations has many of the usual properties of algebra:
• Commutative:
• $A+B=B+A$
• $A\cdot B=B\cdot A$
• Associative:
• $A+(B+C)=(A+B)+C=A+B+C$
• $A\cdot(B\cdot C)=(A\cdot B)\cdot C=A\cdot B\cdot C$
• Distributive:
• $A+(B\cdot C)=(A+B)\cdot(A+C)$
• $A\cdot (B+C)=(A\cdot B)+(A\cdot C)$
These properties are easily proven with Venn diagrams as above. For example, the next diagram shows the 3 sets $A$, $B$, and $C$:

Let's consider $A+(B\cdot C)$ and see if we can prove that it is the same as $(A+B)\cdot(A+C)$. The first part is seen as the cross hatched area in the following diagram:

The two factors $(A+B)$ and $(A+C)$, and their intersection, are shown next:

Voila! This can be useful in simplifying complex equations. For instance:

$$\begin{aligned} A\cdot B + (B\cdot C)\cdot (B+C) &= (A\cdot B) + (B\cdot C)\cdot B + (B\cdot C)\cdot C \\ &= (A\cdot B)+(B\cdot C)+(B\cdot C) \\ &= (A\cdot B)+(B\cdot C) \\ &= (B\cdot A)+(B\cdot C) \\ &= B\cdot (A+C) \end{aligned}$$
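Identities like this can also be verified by brute force: with only two values per variable, checking every assignment is trivial. A minimal Python sketch (not part of the original text) that checks both the identity above and the distributive law:

```python
from itertools import product

# Exhaustively check two Boolean identities over all 0/1 assignments.
for A, B, C in product([0, 1], repeat=3):
    # The worked example: A·B + (B·C)·(B+C) = B·(A+C)
    assert ((A & B) | ((B & C) & (B | C))) == (B & (A | C))
    # The distributive law: A + (B·C) = (A+B)·(A+C)
    assert (A | (B & C)) == ((A | B) & (A | C))
```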

## The Digital World

Digital elements are things that have 2 states: 0 or 1, yes or no, true or false, and so on. There are 4 important basic symbols for representing operations on these digital elements:

• and: $C=A\cdot B$
• or: $C=A+B$
• xor: $C=A\oplus B$
• not: $C=\bar A$
Along with this pictorial representation of such "gates", we can also form "truth tables" that represent the functional relationship between the inputs, here $A$ and $B$, and the output $C$:

| $A$ | $B$ | $A\cdot B$ | $A+B$ | $A\oplus B$ | $\bar A$ |
|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 1 |
| 0 | 1 | 0 | 1 | 1 | 1 |
| 1 | 0 | 0 | 1 | 1 | 0 |
| 1 | 1 | 1 | 1 | 0 | 0 |

The truth table showing the validity of the distributive property, given for example $A\cdot (B+C)=(A\cdot B)+(A\cdot C)$, would be:

| $A$ | $B$ | $C$ | $B+C$ | $A\cdot(B+C)$ | $A\cdot B$ | $A\cdot C$ | $(A\cdot B)+(A\cdot C)$ |
|---|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 |
| 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
| 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 |
| 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 |
| 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 |
| 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |

You can see in the above truth table that the distributive property holds up. To see the utility of the distributive property, let's form the "network" of gates that implements $A\cdot (B+C)$ and $(A\cdot B)+(A\cdot C)$. First, $A\cdot (B+C)$:

Next, $(A\cdot B)+(A\cdot C)$:

Clearly, the former might be preferred as it uses fewer gates.

## Boolean Properties of Gates

The following tabulates many of the more important properties of Boolean gates. Note that from now on, we will use $AB$ to mean $A\cdot B$ to keep from writing the "dot" so many times:
• $\bar{\bar A} = A$ is called "double inversion", aka "involution"
• $A+A=A$ and $A\cdot A=A$ are called "idempotency", which means a function that maps into itself
• $A+\bar A=1$, $A\cdot \bar A=0$, $A+0=A$, and $A\cdot 1=A$
• $A+(AB)=(A+A)(A+B)=A(A+B)=A$. This might seem surprising at first, but when you look at a Venn diagram you will see why this is the case: if you take the "or" of $A$ and $B$, and then "and" it into $A$, the result has to be $A$! This is sometimes referred to as "absorption".
• $A+(\bar A B)=(A+\bar A)(A+B)=A+B$, and $A(\bar A+B)=(A\bar A)+(AB)=AB$. This is sometimes referred to as "simplification", for obvious reasons.

A more interesting, and extremely useful, property involves the relationship between "and", "or", and "inversion". For example, imagine you take the operation $A+B$ and invert the result: $C=\overline{A+B}$. In words, you are asking for the set that is "not" in the union of $A$ with $B$, that is, "not ($A$ or $B$)". Clearly if we are looking for the set that is "($A$ or $B$)", it will look like this:

with $A+B$ being in the blue area.
If we "invert" - not($A+B$) - then that would be everything in the white dotted region. But that region can also be described as being not$A$ and not$B$, or $\bar A \cdot \bar B$, which means: $$\overline{A+B} = \bar{A}\bar{B}$$ or equivalently, the following two logic circuits give the same result:

This equivalence is called "De Morgan's Law", named after the British mathematician Augustus De Morgan (1806-1871). Stated in the language of logic gates, the example above says that if you take a gate, change the ORs into ANDs, and invert all of the inputs and outputs, you get the same logic result. This also works for the case: $$\overline{AB}=\bar A + \bar B$$ which has the following circuit equivalence:

Just for fun...in 19th century English, the law states:

The negation of a conjunction is the disjunction of the negations. The negation of a disjunction is the conjunction of the negations.

but in plain English:
• Swap all OR and AND gates
• Invert all inputs and outputs

We can use De Morgan's laws to simplify many circuits. For instance, consider the XOR circuit $A\oplus B$. This circuit says "$A$ or $B$ but not both", which means $$A\oplus B=(A+B)\overline{AB}$$ We can apply De Morgan's law to $\overline{AB}=\bar{A}+\bar{B}$ to get $$A\oplus B=(A+B)(\bar{A}+\bar{B})$$ Now we use the distributive property and write the above as $$A\oplus B=(A+B)(\bar{A}+\bar{B})=A\bar{A}+A\bar{B}+B\bar{A}+B\bar{B}=A\bar{B}+B\bar{A}$$ or $$A\oplus B=A\bar{B}+B\bar{A}$$ We can also investigate $$\overline{A\oplus B}=\overline{A\bar B+B\bar A}=(\overline{A\bar B})(\overline{B\bar A})=(\bar A+B)(A+\bar B)=\bar{A}A+BA+\bar{A}\bar{B}+B\bar{B}=AB+\bar{A}\bar{B}$$ which means we can write $$\overline{A\oplus B}=A\oplus\bar{B}=\bar{A}\oplus B$$ Does something like $C+(A\oplus B)$ distribute to $(C+A)\oplus(C+B)$? When you work out the logic using De Morgan's theorem, you will find that it does not.
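De Morgan's laws and the XOR identities just derived can likewise be checked exhaustively; a short Python sketch (the `NOT` helper for a 1-bit complement is my own):

```python
from itertools import product

def NOT(x):
    return 1 - x   # complement of a 1-bit value

for A, B in product([0, 1], repeat=2):
    # De Morgan: not(A+B) = Ā·B̄  and  not(A·B) = Ā+B̄
    assert NOT(A | B) == (NOT(A) & NOT(B))
    assert NOT(A & B) == (NOT(A) | NOT(B))
    # XOR identities from the derivation above
    assert (A ^ B) == ((A & NOT(B)) | (B & NOT(A)))
    assert NOT(A ^ B) == ((A & B) | (NOT(A) & NOT(B)))
```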
## Networks of Gates

We can start with a bunch of gates connected into a circuit (called a "network"), and construct the truth table directly. But it is often the case that one has a truth table specified, and we want to turn the truth table into a network. How can we do this? To begin, let's be slightly formal and define a 2-input function $F(x,y)$ as representing the following truth table:

| $x$ | $y$ | $F$ |
|---|---|---|
| 0 | 0 | 1 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |

$F$ is "true" (1) when $x$ and $y$ are the same (both false, 0, or both true, 1); otherwise $F$ is "false" (0). (Let's use 0 and 1 from now on to make it simpler.) This tells us how to construct the network: combine the terms $x$ and $y$ such that $F$ is 1. In this example, we can see easily that $F(x,y)=\bar x\bar y + xy$. Each "minterm" (here $\bar x\bar y$ and $xy$) is a product ("and") and you "sum" the products to find where the function is "true" (1), hence we call this technique the "sum of products", or "SOP" for shorthand. The following diagram shows the gate network that maps to $F(x,y)$:

The SOP technique is a basic and useful prescription for constructing a network of gates from a truth table. As an example, here's another truth table:

| $x$ | $y$ | $z$ | $F$ |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 1 |
| 0 | 1 | 0 | 1 |
| 0 | 1 | 1 | 1 |
| 1 | 0 | 0 | 0 |
| 1 | 0 | 1 | 1 |
| 1 | 1 | 0 | 0 |
| 1 | 1 | 1 | 0 |

The minterms are constructed from where $F=1$, which means the rows where $xyz=001$ ($\bar x\bar y z$), $010$ ($\bar x y \bar z$), $011$ ($\bar x yz$), and $101$ ($x\bar y z$). The SOP is therefore: $$F(x,y,z)= \bar x\bar y z + \bar x y \bar z + \bar x yz + x\bar y z$$ This can be simplified by using the above rules for Boolean logic: $$\begin{aligned} F(x,y,z) &= \bar x\bar y z + \bar x y \bar z + \bar x yz + x\bar y z \\ &= (\bar x+x)\bar y z + (\bar z+z)\bar x y \\ &= \bar y z + \bar x y \end{aligned}$$ where we have used the fact that $\bar x+x=1$ and $\bar z+z=1$.
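The SOP recipe is easy to mechanize. A minimal Python sketch (the variable names and the prime notation for inversion are my own) that builds the minterm expression for this example and checks the simplified form against the full truth table:

```python
from itertools import product

# Rows of the 3-input example where F = 1, as (x, y, z) tuples:
ones = [(0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 1)]

def minterm(bits, names="xyz"):
    # e.g. (0, 0, 1) -> "x'y'z", using ' to mark an inverted input
    return "".join(n + ("" if b else "'") for n, b in zip(names, bits))

print(" + ".join(minterm(r) for r in ones))  # x'y'z + x'yz' + x'yz + xy'z

# The simplified form  y'z + x'y  must match the truth table everywhere:
for x, y, z in product([0, 1], repeat=3):
    F = 1 if (x, y, z) in ones else 0
    assert F == (((1 - y) & z) | ((1 - x) & y))
```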
The gate network is shown next:

Going back to the first function $F(x,y)=\bar x\bar y+xy$, we can apply De Morgan's rule (change all sums to products and invert all inputs and outputs) to get $$F(x,y)=\bar x\bar y+xy = \overline{x+y}+\overline{\bar x+\bar y} =\overline{(x+y)(\bar x+\bar y)}$$ Note that we can invert $F$ and simplify to get $$\bar F(x,y)=(x+y)(\bar x+\bar y)=x\bar x+y\bar x+x\bar y+y\bar y =\bar y x+\bar x y$$ Notice that $\bar xy+\bar yx$ contains exactly the two terms where $F(x,y)=0$, which suggests a new way to construct networks: form the product of sums over the rows where $F=0$, where each sum (a "maxterm") takes a variable uncomplemented if it is 0 in that row and complemented if it is 1. So we have gone from representing where $F=1$ by a sum of products to a product of sums (POS). It turns out that either SOP or POS works, and which one you use may depend on details of the network. The usual rule of thumb is to use the one with the fewest terms: use SOP if the number of rows where $F=1$ is less than where $F=0$, or POS if it's the other way around. And of course, always simplify afterwards! The following is an example where a POS works well:

| $x$ | $y$ | $z$ | $F$ |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 1 |
| 0 | 1 | 0 | 1 |
| 0 | 1 | 1 | 0 |
| 1 | 0 | 0 | 1 |
| 1 | 0 | 1 | 1 |
| 1 | 1 | 0 | 0 |
| 1 | 1 | 1 | 1 |

We can write down $F(x,y,z)$ using the product of sums (POS, from the rows where $F=0$) and simplify:

$$\begin{aligned} F(x,y,z) &= (\bar x+\bar y+z)(x+\bar y+\bar z)(x+y+z) \\ &= (\bar x+\bar y+z)[xx+xy+xz+\bar yx+\bar yy+\bar yz+\bar zx+\bar zy+\bar zz] \\ &= (\bar x+\bar y+z)[x+xy+xz+\bar yx+\bar yz+\bar zx+\bar zy] \\ &= (\bar x+\bar y+z)[x+x(y+\bar y)+x(z+\bar z)+\bar yz+\bar zy] \\ &= (\bar x+\bar y+z)[x+\bar yz+\bar zy] \\ &= \bar xx+\bar x(y\oplus z)+\bar yx+\bar y\bar yz+\bar y\bar zy+zx+z\bar yz+z\bar zy \\ &= \bar x(y\oplus z)+x(\bar y+z)+\bar yz \\ &= \bar x(y\oplus z)+x(\bar y+z) \end{aligned}$$

In the last step the term $\bar yz$ is absorbed: when $\bar yz=1$ and $x=1$ it is covered by $x(\bar y+z)$, and when $x=0$ it is covered by $\bar x(y\oplus z)$ (since $y=0$, $z=1$ gives $y\oplus z=1$). The network of gates follows:

There are various other methods that people have employed in the past for going from a truth table to a network of gates. For instance, Karnaugh maps are another method of going from truth tables to gates (see the article in Wikipedia).
It does not add enough to warrant more here, but suffice it to say that all of these techniques will be used by the software that eventually builds the code that runs in programmable logic devices such as FPGAs.

## Binary, Octal, Decimal, Hexadecimal

The language of computers is digital, so it is worth understanding how to do translations between binary (base 2), octal (base 8), decimal (base 10), and hexadecimal (base 16). The latter is actually the most important, but let's start with binary. To set the context, a regular every-day decimal number is written in base 10, and the digits tell you how many of each power of 10. For instance, the number $3282_{10} = 2\times 10^0 + 8\times 10^1 + 2\times 10^2 + 3\times 10^3$. To convert to base 2, we will need to know how to represent $3282_{10}$ in terms of the amount of $2^0$, $2^1$, $2^2$, and so on. So it is worth memorizing (don't worry about it, if you use enough programmable logic you will end up remembering this by heart) the various powers of 2:

| $n$ | $2^n$ |
|---|---|
| $0$ | $1$ |
| $1$ | $2$ |
| $2$ | $4$ |
| $3$ | $8$ |
| $4$ | $16$ |
| $5$ | $32$ |
| $6$ | $64$ |
| $7$ | $128$ |
| $8$ | $256$ |
| $9$ | $512$ |
| $10$ | $1024$ |
| $11$ | $2048$ |
| $12$ | $4096$ |
| $16$ | $65536$ |
One algorithm you can use to convert from decimal to binary is to start with the biggest power of 2 that will fit, subtract it, and iterate on the remainder. For instance, the closest smaller power of 2 to $3282_{10}$ is $2048_{10}=2^{11}$. The remainder is $3282-2048=1234$. The closest smaller power of 2 to $1234$ is $1024=2^{10}$. The remainder there is $210$. We subtract $128=2^7$ and get $82$, subtract $64=2^6$ to get $18$, subtract $16=2^4$ to get $2$, and subtract $2=2^1$ to get $0$. So the final binary number has a 1 in the place holder for $2^{11}$, $2^{10}$, $2^7$, $2^6$, $2^4$, and $2^1$ and a 0 in all other places, giving $110011010010_2$ as the binary representation of $3282_{10}$.

This is kind of klunky, but a computer algorithm can do this easily. Here's the trick: the least significant bit (LSB) of the resulting binary number is determined by whether the decimal number is odd or even. So if you divide it by $2$, the remainder will be the LSB of the target binary number. Then you take the result of $3282/2$, and whether that is odd or even determines the next bit of the target binary number, and so on. The following outlines the calculation using division and remainder:

• $3282/2$ = $1641$ remainder $0$
• $1641/2$ = $820$ remainder $1$
• $820/2$ = $410$ remainder $0$
• $410/2$ = $205$ remainder $0$
• $205/2$ = $102$ remainder $1$
• $102/2$ = $51$ remainder $0$
• $51/2$ = $25$ remainder $1$
• $25/2$ = $12$ remainder $1$
• $12/2$ = $6$ remainder $0$
• $6/2$ = $3$ remainder $0$
• $3/2$ = $1$ remainder $1$
• $1/2$ = $0$ remainder $1$

Then you read off the binary number with the most significant bit (MSB) at the bottom of the above stack and the LSB at the top: $3282_{10}=110011010010_2$.
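The division-and-remainder procedure takes only a few lines of Python (a minimal sketch; the function name `to_binary` is mine):

```python
def to_binary(n):
    """Convert a non-negative decimal integer to a binary string
    by repeated division by 2, collecting remainders LSB-first."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # the remainder is the next bit (LSB first)
        n //= 2
    return "".join(reversed(bits))  # the last remainder is the MSB

print(to_binary(3282))  # 110011010010
```

Python's built-in `bin(3282)` gives the same digits (prefixed with `0b`), which makes a handy cross-check.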

Octal representations are in base 8, which means you only need 8 digits: $0-7$. The largest digit will be a 7, and that can be represented by the binary number $111$ since $7=4+2+1$ and the 3 digits tell us how many $4$, $2$, and $1$'s are in the number. Similarly, $6=110$, $5=101$, $4=100$, $3=011$, $2=010$, and $1=001$. Since 8 is a power of 2, there's a nice trick on how to go between binary and octal. For instance, let's take $110011010010_2$ and convert to octal by grouping 3 successive bits in a row like this: $110,011,010,010_2$. We can then read off the octal representation of the sets of 3: $110=6$, $011=3$, and $010=2$, so we get $110,011,010,010_2=6322_8$.

Hexadecimal is just as easy. Base 16 means we will need 16 digits, so traditionally we use $0-9,A,B,C,D,E,F$. The following table shows the hexadecimal, decimal, and binary representation for the digits:

| Hex Digit | Decimal | Binary |
|---|---|---|
| 0 | 0 | 0000 |
| 1 | 1 | 0001 |
| 2 | 2 | 0010 |
| 3 | 3 | 0011 |
| 4 | 4 | 0100 |
| 5 | 5 | 0101 |
| 6 | 6 | 0110 |
| 7 | 7 | 0111 |
| 8 | 8 | 1000 |
| 9 | 9 | 1001 |
| A | 10 | 1010 |
| B | 11 | 1011 |
| C | 12 | 1100 |
| D | 13 | 1101 |
| E | 14 | 1110 |
| F | 15 | 1111 |

To convert a binary number to hexadecimal, we use the same prescription as for octal but group in units of 4 and read off. For instance, $110011010010_2$ is written as $1100,1101,0010_2$, so the hex representation will be given by $CD2_{16}$.
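Python's built-in `format()` performs these base conversions directly, which makes a quick cross-check of the grouping tricks:

```python
n = 0b110011010010   # 3282, written as a binary literal

print(format(n, "b"))  # 110011010010
print(format(n, "o"))  # 6322 -- read off groups of 3 bits
print(format(n, "x"))  # cd2  -- read off groups of 4 bits (lowercase hex)
```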

## Integers in Binary Form

Given $n$ bits, the largest number we can hope to represent would be $2^n-1$ (remember we have to start at 0). For example, if $n=3$ then the largest number we can represent will be $111_2$, which is $7_{10}$.

However, this assumes all positive integers. What about negative numbers? One possibility would be to use the MSB for the sign and the rest of the bits for the magnitude, and below there are several ways to do this. This will of course limit the largest absolute value we can represent; however, there's no getting around it: we need to somehow convey the sign information.

The simplest way is to just assign the MSB to the sign and use the remaining $n-1$ bits for the magnitude. For example, the binary number $1000,0001$ is $81_{16}=129_{10}$ as an unsigned number. If you assign the MSB to the sign, then this becomes $-1_{10}$. A small problem, however, occurs when considering that $1000,0000$ and $0000,0000$ seem to represent the same integer (since $-0=0$). This is not such a big deal but it's ugly and wastes precision (slightly). It is also difficult for machines to deal with (more below).

Another possibility is to use what is called the "1's complement" method. Here we complement (invert) the bottom $n-1$ bits when the MSB=1. So to construct the 8-bit binary number for $-1$, you start with the bottom $7$ bits for 1, $000,0001$, complement it to $111,1110$, and add the MSB=1 to get $1111,1110$. This turns out to be better as far as integer arithmetic by machines goes; however, it still wastes precision since we still have the problem that $1111,1111$ and $0000,0000$ both represent $0=-0$.

A third possibility is called "2's complement". This is the same as the "1's complement" but you add 1 at the end. So for instance, the 8-bit number $-1$ is constructed by taking the 1's complement of the 7-bit number 1 ($111,1110$), adding 1 ($111,1111$), and setting the MSB (8th bit) to get $1111,1111$. To go from binary to hex, if the MSB is set you subtract 1 and take the 1's complement. For instance, $1011,0101$ is negative; subtracting 1 gives $1011,0100$, and the 1's complement of the bottom 7 bits $011,0100$ is $100,1011$, which is $4B_{16}=75_{10}$, so $1011,0101=-75_{10}$. In this method, $0$ has a single representation ($0000,0000$), and machines can take advantage of the fact that addition and subtraction work the same on 2's complement numbers as on unsigned numbers.
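A short Python sketch (the helper name is mine) that interprets a bit string as an $n$-bit 2's-complement integer, using the fact that a negative value is just the unsigned value minus $2^n$:

```python
def from_twos_complement(bits):
    """Interpret a bit string as an n-bit 2's-complement integer."""
    n = len(bits)
    value = int(bits, 2)     # unsigned value of the bit pattern
    if bits[0] == "1":       # MSB set -> the number is negative
        value -= 1 << n      # subtract 2^n
    return value

print(from_twos_complement("10110101"))  # -75
print(from_twos_complement("11111111"))  # -1
print(from_twos_complement("01111111"))  # 127
```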

The following table summarizes the various techniques for a 4-bit number.

| Hex | Binary | MSB | 1's | 2's |
|---|---|---|---|---|
| $0$ | $0000$ | $0$ | $0$ | $0$ |
| $1$ | $0001$ | $1$ | $1$ | $1$ |
| $2$ | $0010$ | $2$ | $2$ | $2$ |
| $3$ | $0011$ | $3$ | $3$ | $3$ |
| $4$ | $0100$ | $4$ | $4$ | $4$ |
| $5$ | $0101$ | $5$ | $5$ | $5$ |
| $6$ | $0110$ | $6$ | $6$ | $6$ |
| $7$ | $0111$ | $7$ | $7$ | $7$ |
| $8$ | $1000$ | $-0$ | $-7$ | $-8$ |
| $9$ | $1001$ | $-1$ | $-6$ | $-7$ |
| $A$ | $1010$ | $-2$ | $-5$ | $-6$ |
| $B$ | $1011$ | $-3$ | $-4$ | $-5$ |
| $C$ | $1100$ | $-4$ | $-3$ | $-4$ |
| $D$ | $1101$ | $-5$ | $-2$ | $-3$ |
| $E$ | $1110$ | $-6$ | $-1$ | $-2$ |
| $F$ | $1111$ | $-7$ | $-0$ | $-1$ |
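The table above can be regenerated programmatically; a minimal sketch (note that $-0$ in the sign-magnitude and 1's-complement columns necessarily prints as a plain 0, since Python integers have no negative zero):

```python
# Regenerate the 4-bit table: hex, binary, sign-magnitude (MSB), 1's, 2's.
for v in range(16):
    bits = format(v, "04b")
    sign_mag = (v & 7) * (-1 if v & 8 else 1)  # MSB is the sign bit
    ones = v if v < 8 else -((~v) & 7)         # invert the bits when MSB set
    twos = v if v < 8 else v - 16              # value - 2^4 when MSB set
    print(f"{v:X}  {bits}  {sign_mag:3d} {ones:3d} {twos:3d}")
```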

## Computer Arithmetic

Let $x$ and $y$ be 1-bit numbers, and add them to form $S=x+y$. Best to look at the truth table:

| $x$ | $y$ | $S$ |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 2 |
Clearly $S=2$ is not going to work with regard to 1-bit numbers; however, there's no getting around $1+1=2$. So we add a carry bit $C$ and form the truth table:
| $x$ | $y$ | $C$ | $S$ |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 0 | 1 | 0 | 1 |
| 1 | 0 | 0 | 1 |
| 1 | 1 | 1 | 0 |
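Reading off the truth table, the sum bit is the XOR of the inputs and the carry is the AND; this two-output circuit is commonly called a half adder. A minimal Python sketch (the function name is mine):

```python
def half_adder(x, y):
    """1-bit addition: the carry is x AND y, the sum bit is x XOR y."""
    return x & y, x ^ y   # (carry, sum)

for x in (0, 1):
    for y in (0, 1):
        c, s = half_adder(x, y)
        assert x + y == 2 * c + s   # the pair (C, S) is x+y in binary
        print(x, y, "->", c, s)
```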