COM 112 Lecture Note

NUMBER SYSTEMS, BASE AND CODE CONVERSION

Number Digit

A digit is an element of a set that, taken as a whole, comprises a system of numeration. Thus,
a digit is a number in a specific context. In the decimal (base-10) Arabic numbering system,
the digits are the elements of the set {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}.

Decimal and Binary Numbers

When we write decimal (base 10) numbers, we use a positional notation system. Each digit is
multiplied by an appropriate power of 10 depending on its position in the number:

For example:

843 = 8 x 10² + 4 x 10¹ + 3 x 10⁰

= 8 x 100 + 4 x 10 + 3 x 1

= 800 + 40 + 3

For whole numbers, the rightmost digit position is the ones position (10⁰ = 1). The numeral
in that position indicates how many ones are present in the number. The next position to the
left is the tens, then hundreds, thousands, and so on. Each digit position has a weight that is ten
times the weight of the position to its right.

In the decimal number system, there are ten possible values that can appear in each digit
position, and so there are ten numerals required to represent the quantity in each digit
position. The decimal numerals are the familiar zero through nine (0, 1, 2, 3, 4, 5, 6, 7, 8, 9).

In a positional notation system, the number base is called the radix. Thus, the base ten
system that we normally use has a radix of 10. The terms radix and base can be used
interchangeably.

When writing numbers in a radix other than ten, or where the radix isn’t clear from the
context, it is customary to specify the radix using a subscript. Thus, in a case where the radix
isn’t understood, decimal numbers would be written like this:

127₁₀ 11₁₀ 5673₁₀

Generally, the radix will be understood from the context and the radix specification is left off.
The binary number system is also a positional notation numbering system, but in this case,
the base is not ten, but is instead two. Each digit position in a binary number represents a
power of two. So, when we write a binary number, each binary digit is multiplied by an
appropriate power of 2 based on the position in the number:

For example:

101101₂ = 1 x 2⁵ + 0 x 2⁴ + 1 x 2³ + 1 x 2² + 0 x 2¹ + 1 x 2⁰

= 1 x 32 + 0 x 16 + 1 x 8 + 1 x 4 + 0 x 2 + 1 x 1

= 32 + 8 + 4 + 1 = 45

In the binary number system, there are only two possible values that can appear in each digit
position rather than the ten that can appear in a decimal number. Only the numerals 0 and 1
are used in binary numbers. The term ‘bit’ is a contraction of the words ‘binary’ and ‘digit’,
and when talking about binary numbers the terms bit and digit can be used interchangeably.

The following are some additional examples of binary numbers:

101101₂ 11₂ 10110₂

Conversion between Decimal and Binary

Converting a number from binary to decimal is quite easy. All that is required is to find the
decimal value of each binary digit position containing a 1 and add them up.

For example: convert 10110₂ to decimal.

10110₂

1 x 2¹ = 2
1 x 2² = 4
1 x 2⁴ = 16

Total = 22

Another example: convert 11011₂ to decimal

11011₂

1 x 2⁰ = 1
1 x 2¹ = 2
1 x 2³ = 8
1 x 2⁴ = 16

Total = 27
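This weighted-sum method is easy to sketch in a few lines of Python (the helper name bin_to_dec is purely illustrative):

def bin_to_dec(bits):
    """Add up the decimal weight of every binary digit position that holds a 1."""
    total = 0
    for position, digit in enumerate(reversed(bits)):
        if digit == "1":
            total += 2 ** position      # weight of this digit position
    return total

print(bin_to_dec("10110"))   # 22
print(bin_to_dec("11011"))   # 27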

The method for converting a decimal number to binary is one that can be used to convert
from decimal to any number base. It involves using successive division by the radix until the
dividend reaches 0. At each division, the remainder provides a digit of the converted number,
starting with the least significant digit.

An example of the process: convert 37₁₀ to binary

37 / 2 = 18 remainder 1 (least significant digit)

18 / 2 = 9 remainder 0

9 / 2 = 4 remainder 1

4 / 2 = 2 remainder 0

2 / 2 = 1 remainder 0

1 / 2 = 0 remainder 1 (most significant digit)

The resulting binary number is: 100101₂

Another example: convert 93₁₀ to binary

93 / 2 = 46 remainder 1 (least significant digit)

46 / 2 = 23 remainder 0

23 / 2 = 11 remainder 1

11 / 2 = 5 remainder 1

5 / 2 = 2 remainder 1

2 / 2 = 1 remainder 0

1 / 2 = 0 remainder 1 (most significant digit)

The resulting binary number is: 1011101₂
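The successive-division method works for any radix, so a short Python sketch can take the target base as a parameter (dec_to_base is an illustrative name, not a standard function):

def dec_to_base(n, radix=2):
    """Repeatedly divide by the radix; the remainders are the digits,
    least significant first."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, remainder = divmod(n, radix)
        digits.append(str(remainder))
    return "".join(reversed(digits))    # reverse: the last remainder is the most significant digit

print(dec_to_base(37))   # 100101
print(dec_to_base(93))   # 1011101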

Hexadecimal Numbers

In addition to binary, another number base that is commonly used in digital systems is base
16. This number system is called hexadecimal, and each digit position represents a power of
16. For any number base greater than ten, a problem occurs because there are more than ten
symbols needed to represent the numerals for that number base. It is customary in these cases
to use the ten decimal numerals followed by the letters of the alphabet beginning with A to
provide the needed numerals. Since the hexadecimal system is base 16, there are sixteen
numerals required. The following are the hexadecimal numerals:

0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F

The following are some examples of hexadecimal numbers:

10₁₆ 47₁₆ 3FA₁₆ A03F₁₆

The reason for the common use of hexadecimal numbers is the relationship between the
numbers 2 and 16. Sixteen is a power of 2 (16 = 2⁴). Because of this relationship, four digits
in a binary number can be represented with a single hexadecimal digit.

For example: Convert the binary number 10110101 to a hexadecimal number

Divide into groups of 4 digits 1011 0101

Convert each group to a hex digit: B 5 = B5₁₆

Another example: Convert the binary number 0110101110001100 to hexadecimal

Divide into groups of 4 digits 0110 1011 1000 1100

Convert each group to a hex digit: 6 B 8 C = 6B8C₁₆

To convert a hexadecimal number to a binary number, convert each hexadecimal digit into a
group of 4 binary digits.

Example: Convert the hex number 374F into binary

3 7 4 F

Convert the hex digits to binary: 0011 0111 0100 1111 = 0011011101001111₂
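The 4-bit grouping rule can be sketched in Python; the two helper functions below are illustrative only:

def bin_to_hex(bits):
    """Pad to a multiple of 4 bits, then turn each 4-bit group into one hex digit."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join(format(int(group, 2), "X") for group in groups)

def hex_to_bin(hex_digits):
    """Expand each hexadecimal digit into a 4-bit binary group."""
    return "".join(format(int(d, 16), "04b") for d in hex_digits)

print(bin_to_hex("10110101"))           # B5
print(bin_to_hex("0110101110001100"))   # 6B8C
print(hex_to_bin("374F"))               # 0011011101001111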

There are several ways in common use to specify that a given number is in hexadecimal
representation rather than some other radix. In cases where the context makes it absolutely
clear that numbers are represented in hexadecimal, no indicator is used. In much written
material where the context doesn’t make it clear what the radix is, the numeric subscript 16
following the hexadecimal number is used.

Binary Coded Decimal Numbers

Another number system that is encountered occasionally is Binary Coded Decimal. In this
system, numbers are represented in a decimal form; however each decimal digit is encoded
using a four bit binary number.

For example: The decimal number 136 would be represented in BCD as follows:

136 = 0001 0011 0110

1 3 6

Conversion of numbers between decimal and BCD is quite simple. To convert from decimal
to BCD, simply write down the four bit binary pattern for each decimal digit. To convert
from BCD to decimal, divide the number into groups of 4 bits and write down the
corresponding decimal digit for each 4 bit group.

There are a couple of variations on the BCD representation, namely packed and unpacked.
An unpacked BCD number has only a single decimal digit stored in each data byte. In this
case, the decimal digit will be in the low four bits and the upper 4 bits of the byte will be 0. In
the packed BCD representation, two decimal digits are placed in each byte. Generally, the
high order bits of the data byte contain the more significant decimal digit.

An example: The following is a 16 bit number encoded in packed BCD format:

01010110 10010011

This is converted to a decimal number as follows:

0101 0110 1001 0011

5 6 9 3

The value is 5693 decimal

Another example: The same number in unpacked BCD (requires 32 bits)

00000101 00000110 00001001 00000011

5 6 9 3
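A Python sketch of packed BCD encoding and decoding (the function names are illustrative):

def to_packed_bcd(n):
    """Encode each decimal digit as 4 bits, packing two digits into each byte."""
    digits = str(n)
    if len(digits) % 2:
        digits = "0" + digits            # pad to a whole number of bytes
    return " ".join(format(int(digits[i]), "04b") + format(int(digits[i + 1]), "04b")
                    for i in range(0, len(digits), 2))

def from_bcd(bits):
    """Decode by reading every 4-bit group as one decimal digit."""
    bits = bits.replace(" ", "")
    return int("".join(str(int(bits[i:i + 4], 2)) for i in range(0, len(bits), 4)))

print(to_packed_bcd(5693))              # 01010110 10010011
print(from_bcd("01010110 10010011"))    # 5693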

Adding and Subtracting Binary Numbers

It is possible to add and subtract binary numbers in a similar way to base 10 numbers.

For example, 1 + 1 + 1 = 3 in base 10 becomes 1 + 1 + 1 = 11 in binary.

In the same way, 3 – 1 = 2 in base 10 becomes 11 – 1 = 10 in binary.

When you add and subtract binary numbers you will need to be careful when 'carrying' or
'borrowing' as these will take place more often.

Example 1

Calculate, using binary numbers:

(a) 111 + 100 = 1011

(b) 101 + 110 = 1011

(c) 1111 + 111 = 10110

Example 2

Calculate, using binary numbers:

(a) 111 – 101 = 10

(b) 110 – 11 = 11

(c) 1100 – 101 = 111
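These worked sums and differences can be checked quickly with Python's built-in base-2 conversion:

for a, b in [("111", "100"), ("101", "110"), ("1111", "111")]:
    print(a, "+", b, "=", bin(int(a, 2) + int(b, 2))[2:])
for a, b in [("111", "101"), ("110", "11"), ("1100", "101")]:
    print(a, "-", b, "=", bin(int(a, 2) - int(b, 2))[2:])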

Two’s Complement:

There are several ways that signed numbers can be represented in binary, but the most
common representation used today is called two’s complement. The term two’s complement
is somewhat ambiguous, in that it is used in two different ways. First, as a representation,
two’s complement is a way of interpreting and assigning meaning to a bit pattern contained in
a fixed precision binary quantity. Second, the term two’s complement is also used to refer to
an operation that can be performed on the bits of a binary quantity. As an operation, the two’s
complement of a number is formed by inverting all of the bits and adding 1. In a binary
number being interpreted using the two’s complement representation, the high order bit of the
number indicates the sign. If the sign bit is 0, the number is positive, and if the sign bit is 1,
the number is negative.

For example: Find the 2’s complement of the following 8 bit number

00101001

11010110 First, invert the bits (i.e. 1s change to 0s and vice versa)

+ 00000001 then, add 1

= 11010111

The 2’s complement of 00101001 is 11010111

Another example: Find the 2’s complement of the following 8 bit number

10110101

01001010 Invert the bits

+ 00000001 then add 1

= 01001011

The 2’s complement of 10110101 is 01001011
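The invert-and-add-one operation can be sketched in Python; masking with the word width keeps the result at 8 bits (twos_complement is an illustrative name):

def twos_complement(bits):
    """Invert every bit, add 1, and keep only the original number of bits."""
    width = len(bits)
    mask = (1 << width) - 1
    inverted = int(bits, 2) ^ mask       # flip all the bits
    return format((inverted + 1) & mask, "0{}b".format(width))

print(twos_complement("00101001"))   # 11010111
print(twos_complement("10110101"))   # 01001011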

Excess-three Code: A number code in which the decimal digit n is represented by the four-
bit binary equivalent of n + 3.
Example: convert decimal 5 to its equivalent excess-3 code.
Solution: remember that 5 in decimal equals 0101₂ and 3 = 0011₂.
Adding the two gives 1000₂, which is the excess-3 code for 5.
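In Python, the excess-3 code of a decimal digit is simply the 4-bit pattern of the digit plus 3:

def excess3(digit):
    return format(digit + 3, "04b")   # 4-bit binary of (n + 3)

print(excess3(5))                     # 1000
print([excess3(d) for d in range(10)])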

Seven Segment Display Code:


Binary numbers are necessary, but very hard to read or interpret. A seven-segment (LED)
display is used to display binary to decimal information. A seven-segment display may have
7, 8, or 9 leads on the chip. Usually leads 8 and 9 are decimal points. The figure below is a
typical component and pin layout for a seven segment display.
The diode placement and pin assignment of a seven segment display are shown below:

The image below is a typical seven segment display with each of the segments labeled
with the letters A through G. To display digits on these displays you turn on some of the
LEDs. For example, when you illuminate segments B and C, your eye perceives the result
as the number "1". Light up A, B, and C and you will see what looks like a "7".
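A lookup table is the usual way to drive such a display in software. The Python sketch below assumes the common A-G segment labelling described above; the dictionary itself is illustrative:

# Segments (out of A-G) that must be lit to show each decimal digit.
SEGMENTS = {
    "0": "ABCDEF",  "1": "BC",      "2": "ABDEG",   "3": "ABCDG",   "4": "BCFG",
    "5": "ACDFG",   "6": "ACDEFG",  "7": "ABC",     "8": "ABCDEFG", "9": "ABCDFG",
}

print(SEGMENTS["1"])   # BC  - matches the example in the text
print(SEGMENTS["7"])   # ABC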

LOGIC GATES AND FUNCTIONS

Basic Logic Gates

While each logical element or condition must always have a logic value of either "0" or "1",
we also need to have ways to combine different logical signals or conditions to provide a
logical result. For example, consider the logical statement:

"If we move the switch on the wall up, the light will turn on."

At first glance, this seems to be a correct statement. However, if we look at a few other
factors, we realize that there's more to it than this. In this example, a more complete statement
would be:

"If we move the switch on the wall up and the light bulb is good and the power is on, the
light will turn on." If we look at these two statements as logical expressions and use logical
terminology, we can reduce the first statement to:

Light = Switch

This means nothing more than that the light will follow the action of the switch, so that when
the switch is up/on/true/1 the light will also be on/true/1. Conversely, if the switch is
down/off/false/0 the light will also be off/false/0. Looking at the second version of the
statement, we have a slightly more complex expression:

Light = Switch and Bulb and Power

Normally, we use symbols rather than words to designate the AND function that we're using
to combine the separate variables of Switch, Bulb, and Power in this expression. The symbol
normally used is a dot, which is the same symbol used for multiplication in some
mathematical expressions.

Using this symbol, our three-variable expression becomes:

Light = Switch . Bulb . Power

When we deal with logical circuits (as in computers), we not only need to deal with logical
functions; we also need some special symbols to denote these functions in a logical diagram.
There are three fundamental logical operations, from which all other functions, no matter
how complex, can be derived. These functions are named and, or, and not. Each of these has
a specific symbol and a clearly-defined behavior, as follows:

The AND Gate

The AND gate implements the AND function. With the gate shown above, both inputs must
have logic 1 signals applied to them in order for the output to be logic 1. With either input at
logic 0, the output will be held to logic 0. There is no limit to the number of inputs that may
be applied to an AND function, however, for practical reasons, commercial AND gates are
manufactured with 2, 3, or 4 inputs. A standard Integrated Circuit (IC) package contains 14 or
16 pins, for practical size and handling. A standard 14-pin package can contain four 2-input
gates, three 3-input gates, or two 4-input gates, and still have room for two pins for power
supply connections. The truth table for a two-input AND gate is shown below.

A B A.B
0 0 0
0 1 0
1 0 0
1 1 1

The OR Gate

The OR gate is sort of the reverse of the AND gate. The OR function, like its verbal
counterpart, allows the output to be true (logic 1) if any one or more of its inputs are true.
Verbally, we might say, "If it is raining OR if I turn on the sprinkler, the grass will be wet."
Note that the grass will still be wet if the sprinkler is on and it is also raining.

This is correctly reflected by the basic OR function. In symbols, the OR function is


designated with a plus sign (+). In logical diagrams, the symbol above designates the OR
gate. As with the AND function, the OR function can have any number of inputs, however,
practical commercial OR gates are limited to 2, 3, and 4 inputs, as with AND gates. The truth
table for a two-input OR gate looks like

A B A+B
0 0 0
0 1 1
1 0 1
1 1 1

The NOT Gate, or Inverter

The inverter is a little different from AND and OR gates in that it always has exactly one
input as well as one output. Whatever logical state is applied to the input, the opposite state
will appear at the output. The NOT function is necessary in many applications and highly
useful in others. A practical verbal application might be: The door is NOT locked = you
may enter

The NOT function is denoted by a horizontal bar over the value to be inverted, as shown in
the figure below. In the inverter symbol, the triangle actually denotes only an amplifier,
which in digital terms means that it "cleans up" the signal but does not change its logical
sense. It is the circle at the output which denotes the logical inversion. The circle could have
been placed at the input instead, and the logical meaning would still be the same. The truth
table for the NOT gate is shown below

A Ā
0 1
1 0
The logic gates shown above are used in various combinations to perform tasks of any level
of complexity.
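The behaviour of the three fundamental gates can be tabulated with a short Python sketch, which reproduces the truth tables above:

from itertools import product

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

print("A B  A.B  A+B")
for a, b in product((0, 1), repeat=2):
    print(f"{a} {b}   {AND(a, b)}    {OR(a, b)}")

print("A  NOT A")
for a in (0, 1):
    print(f"{a}  {NOT(a)}")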

Derived Logic Functions and Gates

Some combinations of basic functions have been given names and logic symbols of their
own. The first is called NAND, and consists of an AND function followed by a NOT
function. The second, as you might expect, is called NOR. This is an OR function followed
by NOT. The third is a variation of the OR function, called the Exclusive-OR, or XOR
function. Each of these derived functions has a specific logic symbol and behavior, which we
can summarize as follows:

The NAND Gate

The NAND gate implements the NAND function, which is exactly inverted from the AND
function. Both inputs must have logic 1 signals applied to them in order for the output to be
logic 0. With either input at logic 0, the output will be held to logic 1. The circle at the output
of the NAND gate denotes the logical inversion, just as it did at the output of the inverter.
The over-bar over both inputs shows that it is the AND function itself that is inverted, rather than
each separate input. There is no limit to the number of inputs that may be applied to a NAND
function, however, for practical reasons, commercial NAND gates are manufactured with 2,
3, or 4 inputs, to fit in a 14-pin or 16-pin package. The truth table for a two-input NAND gate
looks like

A B (A.B)'
0 0 1
0 1 1
1 0 1
1 1 0

The NOR Gate

The NOR gate is an OR gate with the output inverted. Where the OR gate allows the output
to be true (logic 1) if any one or more of its inputs are true, the NOR gate inverts this and
forces the output to logic 0 when any input is true.

In symbols, the NOR function is designated with a plus sign (+), with an over-bar over the
entire expression to indicate the inversion. This is an OR gate with a circle to designate the
inversion. The NOR function can have any number of inputs, but practical commercial NOR
gates are mostly limited to 2, 3, and 4 inputs, as with other gates in this class, to fit in
standard IC packages. The truth table for a two-input NOR gate looks like:

A B (A+B)'
0 0 1
0 1 0
1 0 0
1 1 0
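Because NAND and NOR are simply AND and OR followed by NOT, they are one line each in a Python sketch (the XOR function mentioned above is included for comparison):

from itertools import product

def NAND(a, b): return 1 - (a & b)   # AND followed by NOT
def NOR(a, b):  return 1 - (a | b)   # OR followed by NOT
def XOR(a, b):  return a ^ b         # exclusive-OR

print("A B  NAND  NOR  XOR")
for a, b in product((0, 1), repeat=2):
    print(f"{a} {b}   {NAND(a, b)}     {NOR(a, b)}    {XOR(a, b)}")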

BOOLEAN ALGEBRA
The mathematical system of binary logic is known as BOOLEAN ALGEBRA or SWITCHING
ALGEBRA. In 1854, George Boole (1815-1864), a British mathematician, published his work
"An Investigation of the Laws of Thought", on which the mathematical theories of logic and
probabilities are founded.
Boolean algebra had no practical application until 1938, when Claude Shannon, the father
of Information Theory, applied Boole's work to the analysis and design of telephone
switching circuits.

In our previous unit we discovered that logic systems can become quite complex if we don’t
use any simplification techniques. In this topic you will be introduced to the first method of
simplification, Boolean algebra.

BOOLEAN OPERATIONS AND EXPRESSIONS

Variable, complement, and literal are terms used in Boolean algebra. A variable is a symbol
used to represent a logical quantity. Any single variable can have a 1 or a 0 value. The
complement is the inverse of a variable and is indicated by a bar over variable (over-bar). For

example, the complement of the variable A is Ā. If A = 1, then Ā = 0. If A = 0, then Ā = 1.
The complement of the variable A is read as "not A" or "A bar." Sometimes a prime symbol
rather than an over bar is used to denote the complement of a variable; for example, B'
indicates the complement of B. A literal is a variable or the complement of a variable.

Boolean Addition
Recall that Boolean addition is equivalent to the OR operation. In Boolean algebra, a sum
term is a sum of literals. In logic circuits, a sum term is produced by an OR operation with no
AND operations involved. Some examples of sum terms are A + B, A + B̄, A + B + C, and
A + B + C + D.
A sum term is equal to 1 when one or more of the literals in the term are 1. A sum term is
equal to 0 only if each of the literals is 0.
Example: Determine the values of A, B, C, and D that make the sum term A + B + C + D
equal to 0. (For the sum to be 0, every literal must be 0, so A = B = C = D = 0.)

Boolean Multiplication
Also recall that Boolean multiplication is equivalent to the AND operation. In Boolean
algebra, a product term is the product of literals. In logic circuits, a product term is produced
by an AND operation with no OR operations involved. Some examples of product terms are
AB, AB̄, ABC, and ABCD.
A product term is equal to 1 only if each of the literals in the term is 1. A product term is
equal to 0 when one or more of the literals are 0.

Example: Determine the values of A, B, C, and D that make the product term ABCD equal to
1. (For the product to be 1, every literal must be 1, so A = B = C = D = 1.)

Laws and Rules of Boolean Algebra

■ Laws of Boolean Algebra

The basic laws of Boolean algebra (the commutative laws for addition and multiplication, the
associative laws for addition and multiplication, and the distributive law) are the same as in
ordinary algebra.

• Commutative Laws

Commutative law of addition: (A+B = B+A)

This law states that the order in which the variables are ORed makes no difference.
Remember, in Boolean algebra as applied to logic circuits, addition and the OR operation are
the same. The figure below illustrates the commutative law as applied to the OR gate and
shows that it doesn't matter to which input each variable is applied. (The symbol ≡ means
"equivalent to.").

Commutative law of multiplication: (A.B = B.A)

This law states that the order in which the variables are ANDed makes no difference. The
figure below illustrates this law as applied to the AND gate.

• Associative Laws

►The associative law of addition is written as follows for three variables:


A + (B + C) = (A + B) + C
This law states that when ORing more than two variables, the result is the same regardless of
the grouping of the variables.

►The associative law of multiplication is written as follows for three variables:


A(BC) = (AB)C
This law states that it makes no difference in what order the variables are grouped when
ANDing more than two variables.

• Distributive Law
►The distributive law is written for three variables as follows: A(B + C) = AB + AC
This law states that ORing two or more variables and then ANDing the result with a single
variable is equivalent to ANDing the single variable with each of the two or more variables
and then ORing the products. The distributive law also expresses the process of factoring in
which the common variable A is factored out of the product terms, for example,
AB + AC = A(B + C).
The figure below illustrates the distributive law in terms of gate implementation.

RULES OF BOOLEAN ALGEBRA


The table below lists 12 basic rules that are useful in manipulating and simplifying Boolean
expressions. Rules 1 through 9 will be viewed in terms of their application to logic gates.
Rules 10 through 12 will be derived in terms of the simpler rules and the laws previously
discussed.

Rule 1: A + 0 = A

A variable ORed with 0 is always equal to the variable. If the input variable A is 1, the output
variable X is 1, which is equal to A. If A is 0, the output is 0, which is also equal to A. This
rule is illustrated below, where the lower input is fixed at 0.

Rule 2: A + 1 = 1

A variable ORed with 1 is always equal to 1. A 1 on an input to an OR gate produces a 1 on


the output, regardless of the value of the variable on the other input.

Rule 3: A.0=0

A variable ANDed with 0 is always equal to 0. Any time one input to an AND gate is 0, the
output is 0, regardless of the value of the variable on the other input. This rule is illustrated below,
where the lower input is fixed at 0.

Rule 4: A.1=A
A variable ANDed with 1 is always equal to the variable. If A is 0 the output of the AND
gate is 0. If A is 1, the output of the AND gate is 1 because both inputs are now 1s. This rule
is shown below, where the lower input is fixed at 1.

Rule 5: A+A=A

A variable ORed with itself is always equal to the variable. If A is 0, then 0 + 0 = 0; and if A
is 1, then 1 + 1 = 1. This is shown below, where both inputs are the same variable.

Rule 6: A + Ā = 1

A variable ORed with its complement is always equal to 1. If A is 0, then A + Ā = 0 + 1 = 1. If
A is 1, then A + Ā = 1 + 0 = 1. See the figure below, where one input is the complement of the
other.

Rule 7: A.A=A

A variable ANDed with itself is always equal to the variable. If A = 0, then 0.0 = 0; and if A
= 1, then 1.1 = 1. The figure below illustrates this rule.

Rule 8: A.Ā = 0

A variable ANDed with its complement is always equal to 0. Either A or Ā will always be 0,
and when a 0 is applied to one input of an AND gate, the output will be 0 also. The figure
below illustrates this rule.

Rule 9: A̿ = A

The double complement of a variable is always equal to the variable. If you start with the
variable A and complement (invert) it once, you get Ā. If you then take Ā and complement
(invert) it, you get A, which is the original variable. This rule is shown below using
inverters.

Rule 10: A + AB = A

This rule can be proved by applying the distributive law, rule 2, and rule 4 as follows:

A + AB = A(1 + B)    Factoring (distributive law)
       = A . 1       Rule 2: (1 + B) = 1
       = A           Rule 4: A . 1 = A

Rule 11: A + ĀB = A + B

This rule can be proved as follows:

A + ĀB = (A + AB) + ĀB        Rule 10: A = A + AB
       = (AA + AB) + ĀB       Rule 7: A = AA
       = AA + AB + AĀ + ĀB    Rule 8: adding AĀ = 0
       = (A + Ā)(A + B)       Factoring
       = 1 . (A + B)          Rule 6: A + Ā = 1
       = A + B                Rule 4: drop the 1

Rule 12: (A + B)(A + C) = A + BC

This rule can be proved as follows:

(A + B)(A + C) = AA + AC + AB + BC    Distributive law
               = A + AC + AB + BC     Rule 7: AA = A
               = A(1 + C) + AB + BC   Factoring (distributive law)
               = A . 1 + AB + BC      Rule 2: 1 + C = 1
               = A(1 + B) + BC        Factoring (distributive law)
               = A . 1 + BC           Rule 2: 1 + B = 1
               = A + BC               Rule 4: A . 1 = A
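Because every variable can only be 0 or 1, rules such as 10, 11 and 12 can also be checked exhaustively; the short Python sketch below does exactly that:

from itertools import product

for A, B, C in product((0, 1), repeat=3):
    notA = 1 - A
    assert (A | (A & B)) == A                      # Rule 10: A + AB = A
    assert (A | (notA & B)) == (A | B)             # Rule 11: A + A'B = A + B
    assert ((A | B) & (A | C)) == (A | (B & C))    # Rule 12: (A + B)(A + C) = A + BC
print("Rules 10, 11 and 12 hold for every combination of inputs")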

DEMORGAN'S THEOREMS
DeMorgan, a mathematician who knew Boole, proposed two theorems that are an important
part of Boolean algebra. In practical terms, DeMorgan's theorems provide mathematical
verification of the equivalency of the NAND and negative-OR gates and the equivalency of
the NOR and negative-AND gates, which were discussed earlier.

Theorem 1: The complement of a product of variables is equal to the sum of the


complements of the variables,
Stated another way,
The complement of two or more ANDed variables is equivalent to the OR of the complements
of the individual variables.
The formula for expressing this theorem for two variables is (XY)' = X' + Y'

Theorem 2: The complement of a sum of variables is equal to the product of the


complements of the variables.
Stated another way,
The complement of two or more ORed variables is equivalent to the AND of the complements
of the individual variables.
The formula for expressing this theorem for two variables is (X + Y)' = X'Y'

The figures below show the gate equivalencies and truth tables for the two equations above.

As stated, DeMorgan's theorems also apply to expressions in which there are more than two
variables. The following examples illustrate the application of DeMorgan's theorems to 3-
variable and 4-variable expressions.

Example
Apply DeMorgan's theorems to the expressions (XYZ)' and (X + Y + Z)'

(XYZ)' = X' + Y' + Z'

(X + Y + Z)' = X'Y'Z'

Example
Apply DeMorgan's theorems to the expressions (WXYZ)' and (W + X + Y + Z)'.

(WXYZ)' = W' + X' + Y' + Z'

(W + X + Y + Z)' = W'X'Y'Z'
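The same exhaustive-checking idea confirms DeMorgan's theorems for three and four variables (a small Python sketch, using 1 - x for NOT, & for AND and | for OR on 0/1 values):

from itertools import product

def inv(x):
    return 1 - x   # complement of a 0/1 value

for X, Y, Z in product((0, 1), repeat=3):
    assert inv(X & Y & Z) == (inv(X) | inv(Y) | inv(Z))    # (XYZ)' = X' + Y' + Z'
    assert inv(X | Y | Z) == (inv(X) & inv(Y) & inv(Z))    # (X + Y + Z)' = X'Y'Z'

for W, X, Y, Z in product((0, 1), repeat=4):
    assert inv(W & X & Y & Z) == (inv(W) | inv(X) | inv(Y) | inv(Z))
    assert inv(W | X | Y | Z) == (inv(W) & inv(X) & inv(Y) & inv(Z))

print("DeMorgan's theorems verified for 3 and 4 variables")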

Applying DeMorgan's Theorems


The following procedure illustrates the application of DeMorgan's theorems and Boolean
algebra to the specific expression ((A + BC)' + D(E + F)')'.

Step 1. Identify the terms to which you can apply DeMorgan's theorems, and think of each
term as a single variable. Let (A + BC)' = X and D(E + F)' = Y.

Step 2. Since (X + Y)' = X'Y',

((A + BC)' + D(E + F)')' = ((A + BC)')' (D(E + F)')'

Step 3. Use rule 9 (double complementation) to cancel the double complement over the left
term (this is not part of DeMorgan's theorem).

((A + BC)')' (D(E + F)')' = (A + BC) (D(E + F)')'

Step 4. Applying DeMorgan's theorem to the second term,

(A + BC) (D(E + F)')' = (A + BC)(D' + ((E + F)')')

Step 5. Use rule 9 again to cancel the double complement over the E + F part of
the term.

(A + BC)(D' + ((E + F)')') = (A + BC)(D' + E + F)

EXERCISE

Apply DeMorgan's theorems to each of the following expressions:

(a) ((A + B + C)D)'    (b) (ABC + DEF)'    (c) (AB + CD + EF)'

SIMPLIFICATION USING BOOLEAN ALGEBRA

A simplified Boolean expression uses the fewest gates possible to implement a given
expression.

Example
Using Boolean algebra techniques, simplify this expression: AB + A(B + C) + B(B + C)

Solution
Step 1: Apply the distributive law to the second and third terms in the expression, as follows:
AB + AB + AC + BB + BC

Step 2: Apply rule 7 (BB = B) to the fourth term. AB + AB + AC + B + BC

Step 3: Apply rule 5 (AB + AB = AB) to the first two terms. AB + AC + B + BC

Step 4: Apply rule 10 (B + BC = B) to the last two terms. AB + AC + B


Step 5: Apply rule 10 (AB + B = B) to the first and third terms. B + AC

At this point the expression is simplified as much as possible.
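A quick exhaustive check in Python confirms that the simplified form is equivalent to the original expression:

from itertools import product

for A, B, C in product((0, 1), repeat=3):
    original   = (A & B) | (A & (B | C)) | (B & (B | C))
    simplified = B | (A & C)
    assert original == simplified
print("AB + A(B + C) + B(B + C) = B + AC for all inputs")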

KARNAUGH MAP MINIMIZATION


A Karnaugh map provides a systematic method for simplifying Boolean expressions and, if
properly used, will produce the simplest SOP or POS expression possible, known as the
minimum expression.
As you have seen, the effectiveness of algebraic simplification depends on your familiarity
with all the laws, rules, and theorems of Boolean algebra and on your ability to apply them.
The Karnaugh map, on the other hand, provides a "cookbook" method for simplification.
A Karnaugh map is similar to a truth table because it presents all of the possible values of
input variables and the resulting output for each value. Instead of being organized into
columns and rows like a truth table, the Karnaugh map is an array of cells in which each cell
represents a binary value of the input variables. The cells are arranged in a way so that
simplification of a given expression is simply a matter of properly grouping the cells.
Karnaugh maps can be used for expressions with two, three, four and five variables. Another
method, called the Quine-McCluskey method, can be used for higher numbers of variables.
The number of cells in a Karnaugh map is equal to the total number of possible input variable
combinations, as is the number of rows in a truth table. For three variables, the number of
cells is 2³ = 8. For four variables, the number of cells is 2⁴ = 16.

The 3-Variable Karnaugh Map


The 3-variable Karnaugh map is an array of eight cells as shown in the below. In this case, A,
B, and C are used for the variables although other letters could be used. Binary values of A
and B are along the left side (notice the sequence) and the values of C are across the top. The
value of a given cell is the binary values of A and B at the left in the same row combined

with the value of C at the top in the same column. For example, the cell in the upper left
corner has a binary value of 000 and the cell in the lower right corner has a binary value of
101. The figure below shows the standard product terms that are represented by each cell in
the Karnaugh map.

The 4-Variable Karnaugh Map


The 4-variable Karnaugh map is an array of sixteen cells, as shown. Binary values of A and B
are along the left side and the values of C and D are across the top. The value of a given cell
is the binary values of A and B at the left in the same row combined with the binary values of
C and D at the top in the same column.

For example, the cell in the upper right corner has a binary value of 0010 and the cell in the
lower right corner has a binary value of 1010. The figure below shows the standard product
terms that are represented by each cell in the 4-variable Karnaugh map.

Cell Adjacency
The cells in a Karnaugh map are arranged so that there is only a single variable change
between adjacent cells. Adjacency is defined by a single variable change. In the 3-variable
map the 010 cell is adjacent to the 000 cell, the 011 cell, and the 110 cell. The 010 cell is not
adjacent to the 001 cell, the 111 cell, the 100 cell, or the 101 cell.
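The Gray-code ordering of the rows and the single-variable-change rule are easy to see in a small Python sketch (the layout and helper below are illustrative):

rows = ["00", "01", "11", "10"]    # AB values down the left side (Gray-code order)
cols = ["0", "1"]                  # C values across the top

print("3-variable Karnaugh map cell values (ABC):")
for ab in rows:
    print("  ".join(ab + c for c in cols))

def adjacent(cell1, cell2):
    """Two cells are adjacent when exactly one variable differs."""
    return sum(x != y for x, y in zip(cell1, cell2)) == 1

print(adjacent("010", "000"))   # True
print(adjacent("010", "111"))   # False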

KARNAUGH MAP SOP MINIMIZATION
For an SOP expression in standard form, a 1 is placed on the Karnaugh map for each product
term in the expression. Each 1 is placed in a cell corresponding to the value of a product term.
For example, for the product term AB̄C, a 1 goes in the 101 cell on a 3-variable map.
Example
Map the following standard SOP expression on a Karnaugh map:

KARNAUGH MAP POS MINIMIZATION


In this section, we will focus on POS expressions. The approaches are much the same except
that with POS expressions, 0s representing the standard sum terms are placed on the
Karnaugh map instead of 1s. For a POS expression in standard form, a 0 is placed on the
Karnaugh map for each sum term in the expression. Each 0 is placed in a cell corresponding
to the value of a sum term. For example, for the sum term A + B̄ + C, a 0 goes in the 010
cell on a 3-variable map. When a POS expression is completely mapped, there will be a
number of 0s on the Karnaugh map equal to the number of sum terms in the standard POS
expression. The cells that do not have a 0 are the cells for which the expression is 1. Usually,

25 | P a g e
when working with POS expressions, the 1s are left off. The following steps and the
illustration below show the mapping process.
Step 1. Determine the binary value of each sum term in the standard POS expression. This is
the binary value that makes the term equal to 0.
Step 2. As each sum term is evaluated, place a 0 on the Karnaugh map in the corresponding
cell.

Example:
Map the following standard POS expression on a Karnaugh map:

Solution:

Karnaugh Map Simplification of POS Expressions


The process for minimizing a POS expression is basically the same as for an SOP expression
except that you group 0s to produce minimum sum terms instead of grouping 1s to produce
minimum product terms. The rules for grouping the 0s are the same as those for grouping the
1s that you learned before.

Example:

Use a Karnaugh map to minimize the following standard POS expression: Also, derive the
equivalent SOP expression.

Solution

BASIC ADDERS
In order to design a circuit capable of binary addition one would have to look at all of the
logical combinations. You might do that by looking at the following four sums:
0 + 0 = 0    0 + 1 = 1    1 + 0 = 1    1 + 1 = 10
That looks fine until you get to 1 + 1. In that case, you have a carry bit to worry about. If
you don't care about carrying (because this is, after all, a 1-bit addition problem), then you
can see that you can solve this problem with an XOR gate. But if you do care, then you might
rewrite your equations to always include 2 bits of output, like this:
0 + 0 = 00    0 + 1 = 01    1 + 0 = 01    1 + 1 = 10

The Half-Adder
A half-adder adds two bits and produces a sum and a carry output. Adders are important in
computers and also in other types of digital systems in which numerical data are processed.
Recall the basic rules for binary.

0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 10
The operations are performed by a logic circuit called a half-adder. The half-adder accepts
two binary digits on its inputs and produces two binary digits on its outputs, a sum bit and a
carry bit.
A half-adder is represented by the logic symbol below. Half-Adder Logic: From the
operation of the half-adder as stated in its truth table, expressions can be derived for the sum and
the output carry as functions of the inputs. Notice that the output carry (Cout) is a 1 only when
both A and B are 1s; therefore, Cout can be expressed as Cout = AB

Now observe that the sum output (Σ) is a 1 only if the input variables A and B are not
equal. The sum can therefore be expressed as the exclusive-OR of the input variables:
Σ = A ⊕ B

The Full-Adder
The second category of adder is the full-adder. The full-adder accepts two input bits and an
input carry and generates a sum output and an output carry. The basic difference between a
full-adder and a half-adder is that the full-adder accepts an input carry. A logic symbol for a
full-adder is shown below, and the truth table beside it shows the operation of a full-adder.

Σ = A ⊕ B ⊕ Cin
Cout = AB + (A ⊕ B)Cin

Notice in the figure below, there are two half-adders, connected as shown in the block
diagram below, with their output carries ORed.
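The half-adder and full-adder equations translate directly into Python; the full adder below is built from two half-adders with their carries ORed, as described above:

def half_adder(a, b):
    """Sum = A XOR B, Carry = A AND B."""
    return a ^ b, a & b

def full_adder(a, b, cin):
    """Two half-adders with their output carries ORed."""
    s1, c1 = half_adder(a, b)
    total, c2 = half_adder(s1, cin)
    return total, c1 | c2

for a, b, cin in [(1, 0, 0), (1, 1, 0), (1, 1, 1)]:
    s, cout = full_adder(a, b, cin)
    print(f"A={a} B={b} Cin={cin} -> Sum={s} Cout={cout}")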

Example: For each of the three full-adders below, determine the outputs for the inputs
shown.

SMALL- SCALE INTEGRATED CIRCUIT
Introduction:
There are several different families of logic gates. Each family has its capabilities and
limitations, its advantages and disadvantages. The following list describes the main logic
families and their characteristics. More information will follow later in the section.

Diode Logic (DL): These use diodes to perform AND and OR logic functions. The fact that a
diode can act as a switch is used in these constructions. They are very simple and inexpensive
and can be used effectively in specific situations. The NOT function cannot be implemented
using DL, hence the usefulness is limited.

Resistor-Transistor Logic (RTL): These use transistors to combine multiple input signals,
which also amplify and invert the resulting combined signal. Often an additional transistor is
included to re-invert the output signal. This combination provides clean output signals that
are either inverted or non-inverted. RTL gates are almost as simple as DL gates, and remain
inexpensive. However, these draw a lot of current from the power supply for each gate.
Another limitation is that the RTL gate cannot switch at high speeds due to the transistor.
RTL gates have only historical significance since they are no longer used in the fabrication of
gates.

Diode-Transistor Logic (DTL): By letting diodes perform the logical AND or OR function
and then amplifying the result with a transistor, we can avoid some of the limitations of RTL.
DTL takes diode logic gates and adds a transistor to the output, in order to provide logic
inversion and to restore the signal to full logic levels. Again DTL gates are no longer used in
gate fabrication and it’s only of historical significance. The improved version of this is found
in TTL.
Transistor-Transistor Logic (TTL): The physical construction of integrated circuits made it
more effective to replace all the input diodes in a DTL gate with transistors built with
multiple emitters. The result is transistor-transistor logic, which became the standard logic
circuit in most applications for a number of years. As the state of the art improved, TTL

integrated circuits were adapted slightly to handle a wider range of requirements, but their
basic functions remained the same. These devices comprise the 7400 family of digital ICs.

Emitter-Coupled Logic (ECL): Also known as Current Mode Logic (CML), ECL gates are
specifically designed to operate at extremely high speeds, by avoiding the "lag" inherent
when transistors are allowed to become saturated. Because of this, however, these gates
demand substantial amounts of electrical current to operate correctly.

MOS/CMOS Logic: One factor is common to all of the logic families we have listed above:
they use significant amounts of electrical power. Many applications, especially portable,
battery-powered ones, require that the use of power be absolutely minimized. To accomplish
this, the CMOS (Complementary Metal-Oxide Semiconductor) logic family was developed.
This family uses enhancement-mode MOSFETs as its transistors, and is so designed that it
requires almost no current to operate. CMOS gates are, however, severely limited in their
speed of operation. Nevertheless, they are highly useful and effective in a wide range of
battery-powered applications. The basic circuit in each IC digital logic family is either a
NAND or a NOR gate. These are the primary building blocks with which more complex
blocks are derived. This is the reason behind learning the NAND/NOR implementation
during combinational circuit design. In order to analyze the logic families we use some
special characteristics such as fan-in, fan-out, power dissipation, propagation delay, and noise
margin. Now we look more closely at these parameters.

Special Characteristics

Fan-In: Fan-in is the number of inputs a gate has. For example a two input AND gate has a
fan-in of 2. A NOT gate always has a fan-in of one. The fan-in has some effect on the delay
offered by a gate. Normally the delay increases as a quadratic function of fan-in. In another
form the fan-in can be defined as the number of standard loads drawn by an input to ensure
reliable operation of the gate. Most inputs have a fan-in of 1.

Fan-Out: The fan-out of a gate specifies the number of standard loads that can be connected
to the output of the gate without degrading its normal operation. A standard load is defined as
the amount of current needed by an input of another gate in the same logic family. The fan-
out is also referred to as loading. Note that this quantity has meaning only within one logic
family. If the connection is between two logic families, then the interface circuitry should
accommodate the drive requirements and limitations of both families. Fan-out is important
because each logic gate can supply only a limited amount of current before the operation is
degraded. The computation for fan-out is carried out depending on the capability of the logic
gate to maintain either logic high or logic low. When the gate is at logic high, it provides
current IOH to all the gates that are loading it. When the gate is at logic low, it sinks current IOL
from the loading gates. The fan-out (FO) of the gate is calculated from

FO = min(IOH / IIH, IOL / IIL)

Consider the diagram below for Fan-out computation.

Example: A standard TTL gate has the following parameters. IOH = 400 μA, IIH = 40 μA,
IOL = 16mA and IIL = 1.6 mA. Compute the FO

Solution: The ratios give rise to the same number; in this case it is 10. Hence FO=10.
In general a TTL gate has a fan-out of 10. Now we look at the effects of loading the gate
output beyond its rated fan-out. It has the following effects.
1. In the LOW state the output voltage VOL may increase above VOL, max.
2. In the HIGH state the output voltage VOH may decrease below VOH, min.
3. The operating temperature of the device may increase thereby reducing the reliability of
the device and eventually causing the device failure.
4. Output rise and fall times may increase beyond specifications
5. The propagation delay may rise above the specified value.
Normally, as in the case of fan-in, the delay offered by the gate increases with the increase in
fan-out.
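The fan-out example above amounts to taking the smaller of the two current ratios, which is a one-line calculation in Python:

IOH, IIH = 400e-6, 40e-6    # HIGH-state output and input currents (amperes)
IOL, IIL = 16e-3, 1.6e-3    # LOW-state output and input currents (amperes)

fan_out = min(IOH / IIH, IOL / IIL)
print(int(fan_out))         # 10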

Power Dissipation: This is the amount of power required to operate the electronic circuit. It
is the power delivered to the gate from the power supply and not the power delivered by the
gate to the load. If the supply voltage is VCC and the gate draws a current ICC from the source,
the amount of power dissipated by the gate is VCC ICC. However, since the current drain from
the power supply depends on the logic state of the gate, the average current is computed
using

ICC(avg) = (ICCH + ICCL) / 2, so that P(avg) = VCC x ICC(avg)

The variables have their usual meaning.

Example: For a TTL logic gate the ICCH = 1 mA (for the high state the current drain is low
since the transistor is off) and ICCL = 3 mA (for the low state the transistor has to be in
saturation). The average power dissipated with a supply of 5V is 10 mW. If the number of
gates within the IC is 4, the total dissipation is 40 mW.
There is also another quantity called ICCT, which is the current drawn by the gate during
transition (High to Low state and vice versa). For TTL type gates the ICCT is negligible when
compared to ICCL and ICCH. For CMOS, the currents ICCL and ICCH are negligible when
compared to ICCT.

Hence the average power dissipated for CMOS is computed as

The consequence of this is that the CMOS family power dissipation is dependent on the
frequency of operation and in TTL this is not so. Power Dissipation is an important metric for
two reasons.
1. Amount of current and power available in a battery is nearly constant. Power dissipation of
a circuit or system defines battery life; the greater the power dissipation, the shorter the
battery life.
2. Power dissipation is proportional to the heat generated by the chip or system. Excessive
heat dissipation may increase operating temperature and cause gate circuitry to drift out of its
normal operating range and will cause gates to generate improper output values. Thus power
dissipation of any gate implementation must be kept as low as possible.

Noise Margin: When a digital signal is transmitted over a medium, spurious electrical signals
can induce noise. The noise component comes in two forms: DC noise, caused by a drift in
the voltage level of the signal, and AC noise, which comes in the form of random pulses
caused by other switching circuits. The noise margin indicates the maximum noise
voltage that can be added to an input signal of a digital circuit without causing an
undesirable change in the circuit output. Digital gates that have a higher noise margin are
preferable in a noisy environment.

The noise margin is computed using the voltage levels of the input signal and output signal.
We first define the voltage levels corresponding to the two logic levels.
VOH,min : The minimum output voltage in HIGH state . VOH,min is 2.4 V for TTL and 4.9 V for
CMOS.
VOL,max : The maximum output voltage in LOW state . VOL,max is 0.4 V for TTL and 0.1 V for
CMOS.
VIH,min : The minimum input voltage guaranteed to be recognized as logic 1. VIH,min is 2 V for
TTL and 3.5 V for CMOS.
VIL,max : The maximum input voltage guaranteed to be recognized as logic 0. VIL,max is 0.8 V for
TTL and 1.5 V for CMOS.
IOH,min : The maximum current the output can source in HIGH state while still maintaining the
output voltage above VOH,min.
IOL,max : The maximum current the output can sink in LOW state while still maintaining the
output voltage below VOL,max.
II,max : The maximum current that flows into an input in any state (1μA for CMOS).

The definitions of the two noise margins are given below.


LNM (Low noise margin): The largest noise amplitude that will not change the output
voltage level when superimposed on the input voltage of the logic gate (when this voltage is
in the LOW interval).
LNM=VIL,max − VOL,max.

HNM (High noise margin): The largest noise amplitude that will not change the output
voltage level if superimposed on the input voltage of the logic gate (when this voltage is in
the HIGH interval).

HNM=VOH,min − VIH,min.

The noise margin of a digital gate is then defined as

NM = min {HNM,LNM}
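Using the worst-case voltage levels listed above, the noise margins work out as in the short Python sketch below (the level values are the ones quoted in the text):

levels = {
    "TTL":  {"VOHmin": 2.4, "VOLmax": 0.4, "VIHmin": 2.0, "VILmax": 0.8},
    "CMOS": {"VOHmin": 4.9, "VOLmax": 0.1, "VIHmin": 3.5, "VILmax": 1.5},
}

for family, v in levels.items():
    lnm = v["VILmax"] - v["VOLmax"]    # low-state noise margin
    hnm = v["VOHmin"] - v["VIHmin"]    # high-state noise margin
    print(f"{family}: LNM = {lnm:.1f} V, HNM = {hnm:.1f} V, NM = {min(lnm, hnm):.1f} V")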

DIODE-TRANSISTOR-LOGIC (DTL)
The basic circuit element in the DTL digital logic family is the NAND gate, as shown in the
figure below. The construction of this gate is the combination of diode logic AND gate and
the transistor in its inverter configuration. It should be noted that the gate performs the
NAND function for positive logic.

TRANSISTOR- TRANSISTOR LOGIC (TTL)


The original TTL gate was a slight improvement on the DTL gate. However now we have a
series of enhancements that have been applied to the basic TTL gate to form different logic
families. We will first explain the drawbacks of a simple DTL gate and then get into the TTL
operation. The Figure below gives the configuration of a DTL gate with input diode replaced
by the transistor version of a diode.
We note the following.
1. Consider the case when Vi = 1, the current flows through T2, D and into the base of
T3 and drives it into saturation. The Vo ≈ 0.2 V. We note that T1 is in its reverse
active region where VBE < 0 (VB < VE) and VBC > 0 (VB > VC).

2. Consider the case when Vi = 0 (or 0.2 V). The output of the gate now goes to logic
high since T1 is saturated, T2 and D are off and hence T3 is cutoff.
3. However for T3 to get cutoff, it has to come out of saturation, pass through the active
region and get to the cutoff region. However the cutoff region cannot be reached until
the stored base charge is dissipated. Remember that this was a primary cause of a slow
gate on previous occasions too. During the removal of the base charge T1 is saturated
hence the base voltage of T2 is Vi + VCE(sat)|T1 = 0.2 + 0.2 = 0.4 V. This will ensure
that T2 is cutoff.
4. As a result of the previous item, the base charge of T3 cannot dissipate through D and
T2 since that path is blocked with a transistor that is cutoff. Hence the charge
dissipation happens through the base discharge resistor Rb. Leaking the charge
through Rb is a slow process and hence essentially constrains the speed of switching
of the gate.
The basic form of the TTL gate is shown below.

EMITTER-COUPLED LOGIC (ECL)


Emitter-coupled logic (ECL) is the fastest of all logic families and therefore is used in
applications where very high speed is essential. High speeds have become possible in ECL
because the transistors are used in difference amplifier configuration, in which they are never

driven into saturation and thereby the storage time is eliminated. Here, rather than switching
the transistors from ON to OFF and vice-versa, they are switched between cut-off and active
regions. Propagation delays of less than 1ns per gate have become possible in ECL.
Basically, ECL is realized using a difference amplifier in which the emitters of the two
transistors are connected, and hence it is referred to as emitter-coupled logic. A 3-input ECL
gate is shown below; it has three parts, and the middle part is the difference amplifier
which performs the logic operation.

SEQUENTIAL LOGIC

The Basic RS NAND Latch: In order for a logical circuit to "remember" and retain its
logical state even after the controlling input signal(s) have been removed, it is necessary for
the circuit to include some form of feedback. We can use NAND or NOR gates, and using
the extra input lines to control the circuit. The circuit shown below is a basic NAND latch
with inputs designated "S" and "R" for "Set" and "Reset" respectively. The outputs of any single-bit
latch or memory are designated Q and Q'. For the NAND latch circuit, both inputs should
normally be at logic 1.

The Basic RS-NOR Latch:


The circuit shown below is a basic NOR latch. The inputs are generally designated "S" and
"R" for "Set" and "Reset" respectively. Because the NOR inputs must normally be logic 0
to avoid overriding the latching action, the inputs are not inverted in this circuit. For the
NOR latch circuit, both inputs should normally be at a logic 0 level. Changing an input
to logic 1 level will force that output to logic 0.

The Clocked RS-NAND Latch:

By adding a pair of NAND gates to the input circuits of the RS latch, we accomplish two
goals: normal rather than inverted inputs and a third input common to both gates which we
can use to synchronize this circuit with others of its kind. The clocked RS NAND latch is
shown below.

The JK Flip-flop:

The basic S-R NAND flip-flop circuit has many advantages and uses in sequential logic
circuits but it suffers from two basic switching problems.

1. The Set = 0 and Reset = 0 condition (S = R = 0) must always be avoided.

2. If Set or Reset change state while the enable (EN) input is high, the correct latching
action may not occur.

To overcome these two fundamental design problems with the SR flip-flop design,
the JK flip-flop was developed.

This simple JK flip-flop is the most widely used of all the flip-flop designs and is considered
to be a universal flip-flop circuit. The two inputs labelled “J” and “K” are not shortened
abbreviated letters of other words, such as “S” for Set and “R” for Reset, but are themselves
autonomous letters chosen by its inventor Jack Kilby to distinguish the flip-flop design from
other types.

The D Latch (Data Latch)

One very useful variation on the RS latch circuit is the Data latch which is constructed by
using the inverted S input as the R input signal. The single remaining input is designated "D"
to distinguish its operation from other types of latches. It makes no difference that the R input
signal is effectively clocked twice, since the CLK signal will either allow the signals to pass
both gates or it will not.
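The behaviour of these latches and flip-flops can be captured as next-state functions in Python; the sketch below models a transparent D latch and the standard JK characteristic equation Q(next) = JQ' + K'Q:

def d_latch(clk, d, q):
    """Transparent D latch: the output follows D while CLK is high, otherwise it holds."""
    return d if clk else q

def jk_flipflop(j, k, q):
    """Next state on a clock edge: 00 hold, 10 set, 01 reset, 11 toggle."""
    return (j & (1 - q)) | ((1 - k) & q)

q = 0
for j, k in [(1, 0), (0, 0), (1, 1), (1, 1), (0, 1)]:
    q = jk_flipflop(j, k, q)
    print(f"J={j} K={k} -> Q={q}")    # 1, 1, 0, 1, 0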

Digital Counters

A counter is a device used to count operations, quantities, or periods of time. They may also
be used for dividing frequencies, for addressing information in storage, or for temporary
storage. Counters are a series of flip-flops wired together to perform the type of counting
desired. They will count up or down by ones, twos, or more. The total number of counts or
stable states a counter can indicate is called MODULUS.

For instance, the modulus of a four-stage counter would be 16₁₀, since it is capable of
indicating 0000₂ to 1111₂. The term modulo is used to describe the count capability of
counters; that is, modulo-16 for a four-stage binary counter, modulo-8 for a three-stage binary
counter, and so forth. Hardware counters are limited to a given number of columns of digits,
i.e. there is a maximum number that a counter can represent. A 3-digit decimal counter can
represent numbers from 000 through 999. We define such a counter as a modulus (mod) 1000
counter. One more count will cause it to cycle back to 000. There are counters that count up,
called up-counters, and others that count down, called down-counters. There may be counters
that can count both ways and in this case they are up-down counters. Counters are found in:

* Digital Multi-Meter, (DMM) in which a voltage is measured and displayed digitally. Inside
the DMM, the A/D (Analog-to-Digital) converter probably uses a counter in the process of
converting the analog voltage signal to a digital equivalent.

* A real estate agent uses a gadget placed on a wall to measure interior dimensions (distance
between walls). The gadget uses a counter to time how long it takes an acoustical signal to
travel across the room and return after reflecting off the opposite wall.

* You drive across two rubber tubes stretched across the pavement. A counter starts running
when you hit the first one, and stops when you hit the second one. That way the police know
how fast you were going and tell you that when the ticket comes in the mail.
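The modulus behaviour described above can be modelled with a few lines of Python; UpCounter here is an illustrative class, not a standard library one:

class UpCounter:
    """A modulo-N up-counter: counts 0 .. N-1, then cycles back to 0."""
    def __init__(self, modulus):
        self.modulus = modulus
        self.count = 0

    def clock(self):
        self.count = (self.count + 1) % self.modulus
        return self.count

counter = UpCounter(16)          # a four-stage binary counter (modulo-16)
for _ in range(17):
    value = counter.clock()
print(format(value, "04b"))      # 0001 - it has wrapped past 1111 and 0000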

Shift Register

A register that is capable of shifting data one bit at a time is called a shift register. One of the
uses of a shift register is to convert between serial and parallel interfaces. This is useful as
many circuits work on groups of bits in parallel, but serial interfaces are simpler to construct.
Shift registers can be used as simple delay circuits and also as pulse extenders. A serial shift
register consists of a chain of flip-flops connected in cascade, with the output of one flip-flop
being connected to the input of its neighbour. The operation of the shift register is
synchronous; thus each flip-flop is connected to a common clock. Using D flip-flops forms
the simplest type of shift-registers.

Types of Shift Register:

These are: serial-in, parallel-out (SIPO) and parallel-in, serial-out (PISO) types. There are
also types that have both serial and parallel input and types with serial and parallel output.

There are also bi-directional shift registers which allow varying the direction of the shift
register.

Serial-In, Serial-Out

Destructive Readouts

In destructive readout, each datum is lost once it has been shifted out of the rightmost bit.
These are the simplest kind of shift register. The data string is presented at 'Data In', and is
shifted right one stage each time 'Data Advance' is brought high. At each advance, the bit on
the far left (i.e. 'Data In') is shifted into the first flip-flop's output. The bit on the far right (i.e.
'Data Out') is shifted out and lost.

Non-destructive readout

Non-destructive readout can be achieved if another input line is added, the Read/Write
Control. When this is high (i.e. write) then the shift register behaves as normal, advancing the
input data one place for every clock cycle, and data can be lost from the end of the register.
However, when the R/W control is set low (i.e. read), any data shifted out of the register at
the right becomes the next input at the left, and is kept in the system. Therefore, as long as the
R/W control is set low, no data can be lost from the system.

Serial-In, Parallel-Out

This configuration allows conversion from serial to parallel format. Data are input serially,
and once the data have been input, they may be either read off at each output simultaneously,
or shifted out and replaced.

Parallel-In, Serial-Out

This configuration has the data input in parallel format. To write the data to the register, the
Write/Shift control line must be held LOW. To shift the data, the W/S control line is brought
HIGH and the registers are clocked. As long as the number of clock cycles is not more than
the length of the data string, the Data Output, Q, will be the parallel data read off in order.
Parallel-In, Parallel-Out

This kind of shift register takes the data from the parallel inputs (D0-D3) and shifts it to the
corresponding output (Q0-Q3) when the registers are clocked. It can be used as a kind of
'history', retaining old information as the input in another part of the system, until ready for
new information, whereupon the registers are clocked and the new data are 'let through'.
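A shift register is just a chain of flip-flop states that moves along on every clock; the Python sketch below models a 4-stage register with destructive serial readout and a parallel output (the class is illustrative only):

class ShiftRegister:
    """A serial-in shift register modelled as a list of flip-flop states."""
    def __init__(self, length):
        self.bits = [0] * length

    def clock(self, data_in):
        """Shift one stage to the right; return the bit shifted out (destructive readout)."""
        data_out = self.bits[-1]
        self.bits = [data_in] + self.bits[:-1]
        return data_out

    def parallel_out(self):
        return list(self.bits)

sr = ShiftRegister(4)
for bit in [1, 0, 1, 1]:       # shift a 4-bit word in serially
    sr.clock(bit)
print(sr.parallel_out())       # [1, 1, 0, 1] - serial-in, parallel-out behaviour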
