{{title|Boolean Algebra}}
'''Boolean algebra''' (or less commonly '''symbolic logic''') is a [[instance of::branch of algebra]] that deals with only two logic values - [[0]] (corresponding to [[false]]) and [[1]] (corresponding to [[true]]).  
  
 
Today, Boolean algebra is the primary mathematical tool used in designing modern digital systems. Switching functions are described using Boolean algebra since they deal with two discrete states - ON and OFF (or 1 and 0). Those functions are in turn implemented via [[transistors]] which act as switches, a natural implementation for representing Boolean algebra operations. Once primitive Boolean operation circuits such as [[NOT]], [[AND]], and [[OR]] [[logic gates|gates]] are implemented, any conceivable system of logic can be implemented using them like Lego pieces.
 
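To make the Lego analogy concrete, here is a small illustrative Python sketch (the gate helpers below are our own naming, not a standard API) that builds an XOR circuit out of nothing but the NOT, AND, and OR primitives:

```python
# Primitive gates modeled as functions over the values 0 and 1.
def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b

def XOR(a, b):
    # XOR composed, Lego-style, from the three primitives above.
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, XOR(a, b))
```

The same composition pattern scales to adders, multiplexers, and any other logic built from gates.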
 
== Variables ==
 
{{main|/variables|boolean data type|l1=Boolean Variables}}
 
Boolean algebra uses variables just like ordinary algebra, but those variables can only take one of two values - either a 0 or a 1. Variables are commonly represented by a single letter. While there is no single accepted convention, it's not uncommon to see letters such as <math>A, B, \text{ and } C</math> used for inputs and <math>P, Q, R, \text{ and } Z</math> for outputs; that's also the convention used on WikiChip. Sometimes it's desirable to represent the [[negated]] (opposite) value of a variable. That's often done with a bar above the letter or a tick (prime) next to it, for example <math>\bar A</math> or <math>A'</math>, although [[negation|other notations exist]] such as <math>\neg A</math>. Regardless of notation, <math>\bar A</math> is read "not A".
 
 
== 0 & 1 ==
 
Boolean algebra deals with only two unique states or values. We often represent those two values as <math>0</math> and <math>1</math>, which is written as <math>\mathbb{B} = \{0,1\}</math>. However, it's important to understand that those are just two convenient representations; you could assign <math>\blacksquare</math> and <math>\blacktriangle</math> instead and the math would work just fine.
 
 
== Operations & Truth tables ==
 
{{main|/operations|truth table|l1=Boolean Operations}}
 
Boolean algebra has a set of operations that can be performed on Boolean values. The three common Boolean operators are '''[[conjunction|AND]]''', '''[[disjunction|OR]]''', and '''[[negation|NOT]]'''. The behavior of an operator is best understood by examining it via a tool called a truth table. A '''[[truth tables|truth table]]''' is a table that lists all possible input values along with their respective output values. A truth table can be thought of as the unique signature of a specific Boolean function. Truth tables are an excellent way of seeing the relationship between input values and a given Boolean expression: while there may be many ways to realize or construct a Boolean function for a specific relation, they all share the very same truth table. A [[truth-vector]] is a truth table in vector form.
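As a rough sketch of how a truth table can be generated mechanically, the following Python snippet enumerates every input combination of a function (the `truth_table` helper is our own illustrative naming, not a standard API):

```python
from itertools import product

def truth_table(f, n):
    # List every input row of an n-input Boolean function with its output.
    return [(bits, f(*bits)) for bits in product((0, 1), repeat=n)]

def AND(a, b):
    return a & b

for bits, out in truth_table(AND, 2):
    print(bits, out)
```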
 
 
=== AND operator ===
 
{{main|conjunction}}
 
<div style="float: right;">{{truth table/and}}</div>
 
The Boolean operator [[conjunction|AND]] is usually represented by either <math>\land</math>, <math>\cdot</math>, or no symbol at all: for example "<math>A \land B</math>", "<math>A \cdot B</math>", and "<math>AB</math>" are all equivalent and are read "A AND B". The behavior of this operator is shown in the [[truth table]] on the right. The result of "A AND B" is true if both A and B are true; otherwise the result is false. This expression is also called a '''Boolean product'''.
 
 
For example, suppose we have the function <math>f(a,b,c) = (a \land b) \land c</math>
 
::<math>
 
\begin{align}
 
Q    &= f(1,0,1) \\
 
    &= (1 \land 0) \land 1 \\
 
    &= 0 \land 1 \\
 
    &= 0_2 \\
 
\end{align}
 
</math>
 
 
Or
 
::<math>
 
\begin{align}
 
Q    &= f(1,1,1) \\
 
    &= (1 \land 1) \land 1 \\
 
    &= 1 \land 1 \\
 
    &= 1_2 \\
 
\end{align}
 
</math>
 
{{clear}}
 
=== OR operator ===
 
{{main|disjunction}}
 
<div style="float: right;">{{truth table/or}}</div>
 
The Boolean operator [[disjunction|OR]] is usually represented by <math>\lor</math> or <math>+</math> operators. For example "<math>A \lor B</math>" and "<math>A+B</math>". The expression <math>A+B</math> is read "A or B". The result of "A OR B" is true if either A is true or B is true; otherwise the result is false. This expression is also called a '''Boolean sum'''.
 
 
For example, suppose we have the function <math>f(a,b,c) = (a \lor b) \lor c</math>
 
::<math>
 
\begin{align}
 
Q    &= f(1,0,1) \\
 
    &= (1 \lor 0) \lor 1 \\
 
    &= 1 \lor 1 \\
 
    &= 1_2 \\
 
\end{align}
 
</math>
 
Or
 
::<math>
 
\begin{align}
 
Q    &= f(0,0,1) \\
 
    &= (0 \lor 0) \lor 1 \\
 
    &= 0 \lor 1 \\
 
    &= 1_2 \\
 
\end{align}
 
</math>
 
{{clear}}
 
=== NOT operator ===
 
{{main|negation}}
 
<div style="float: right;">{{truth table/not}}</div>
 
The Boolean operator [[negation|NOT]] is represented by many notations; the three most popular are "<math>\neg A</math>", "<math>\bar A</math>", and "<math>A'\!</math>". Note that unlike the AND and OR operators, the NOT operator is a [[unary operator]] and is thus drawn above or beside its single variable. The expression <math>\bar A</math> is read "not A". The truth table for the NOT operator is shown on the right. The result of the NOT operator is true if A is false; otherwise the result is false. This expression is called a '''Boolean complement'''.
 
 
For example, suppose we have the function <math>f(a) = \bar a</math>
 
::<math>
 
\begin{align}
 
Q    &= f(1) \\
 
    &= \bar 1 \\
 
    &= 0_2 \\
 
\end{align}
 
</math>
 
Or
 
::<math>
 
\begin{align}
 
Q    &= f(0) \\
 
    &= \bar 0 \\
 
    &= 1_2 \\
 
\end{align}
 
</math>
 
 
==Order of operations==
 
{{main|/order of operations|l1=Order of Operations}}
 
So far we've kept things simple by explicitly using parentheses in all of our examples to indicate that a certain part of the expression is evaluated before another part. The order of operations of a Boolean expression is very important for obtaining the correct result. For example, consider the function <math>f(a,b,c) = a \land b \lor c</math> for input <math>f(0,0,1)</math>. Does it mean <math>(0 \land 0) \lor 1 = (0) \lor 1 = 1</math>? Or does it mean <math>0 \land (0 \lor 1) = 0 \land (1) = 0</math>? Same expression, different results. It turns out that the correct order is <math>(a \land b) \lor c</math> (and thus <math>f(0,0,1) = 1</math>). In Boolean expressions, the NOT operator has the highest precedence, followed by AND, then OR.
 
 
For example,
 
::<math>f(A,B,C) = A \land \bar B \lor A \land \bar C \lor A \land B \land C = ((A \land (\bar B)) \lor (A \land (\bar C)) \lor ((A \land B) \land C))</math>
 
and
 
::<math>f(A,B) = \bar A \lor B \land A = ((\bar A) \lor (B \land A))</math>
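Incidentally, Python's `not`, `and`, and `or` operators follow this same precedence order (NOT tightest, then AND, then OR), which makes for a quick illustrative sanity check of the rule:

```python
# Python parses `a and b or c` as `(a and b) or c`, matching Boolean
# algebra's precedence: NOT highest, then AND, then OR.
def f(a, b, c):
    return int(a and b or c)  # unparenthesized: (a and b) or c

print(f(0, 0, 1))             # evaluates (0 and 0) or 1 -> 1
print(int(0 and (0 or 1)))    # forcing the other grouping -> 0
```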
 
 
== Axioms ==
 
{{main|/axioms|l1=Boolean Algebra Axioms}}
 
Boolean algebra is governed by a set of special '''axioms''' that dictate what kinds of Boolean expression manipulations can be done. They are called axioms because they are not things that have to be proven but rather part of the definition of Boolean algebra. Many of those laws are common to both Boolean algebra and ordinary algebra. Using those laws, equations can be converted into different forms. One particular transformation, known as minimization, plays a crucial role in the design of logic circuits. One last thing to note before we get to the actual laws is that Boolean algebra identities come in pairs; this is known as the '''duality principle''' and is covered in much more detail later on.
 
 
{| style="border: 1px solid #e5eeff;"
 
! style="width: 200px;" | Axiom !! style="width: 300px;" | AND form !! style="width: 300px;" | OR form
 
|-
 
|[[/identity axiom|Identity Axiom]]
 
|| <math>a \land 1 = a</math>
 
|| <math>a \lor 0 = a</math>
 
|-
 
|[[/inverse axiom|Inverse Axiom]]
 
|| <math>a \land \bar a = 0</math>
 
|| <math>a \lor \bar a = 1</math>
 
|-
 
| [[/commutative axiom|Commutative Axiom]]
 
|| <math>a \land b = b \land a</math>
 
|| <math>a \lor b = b \lor a</math>
 
|-
 
| [[/associative axiom|Associative Axiom]]
 
|| <math>(a \land b) \land c = a \land (b \land c)</math>
 
|| <math>(a \lor b) \lor c = a \lor (b \lor c)</math>
 
|-
 
|[[/distributive axiom|Distributive Axiom]]
 
|| <math>a \land (b \lor c) = (a \land b) \lor (a \land c)</math>
 
|| <math>a \lor (b \land c) = (a \lor b) \land (a \lor c)</math>
 
|}
 
 
 
In addition to those five axioms, there are a handful of other useful '''laws'''. Those laws can be proven using the axioms introduced above.
 
 
{| style="border: 1px solid #e5eeff;"
 
! style="width: 200px;" | Law !! style="width: 300px;" | AND form !! style="width: 300px;" | OR form
 
|-
 
|[[/complement law|Complement Law]]
 
|| <math>\bar 1 = 0</math>
 
|| <math>\bar 0 = 1</math>
 
|-
 
|[[/dominance law|Dominance Law]]
 
||<math>a \land 0 = 0</math>
 
||<math>a \lor 1 = 1</math>
 
|-
 
|[[/idempotent law|Idempotent Law]]
 
|| <math>a \land a = a</math>
 
|| <math>a \lor a = a</math>
 
|-
 
|[[/absorption law|Absorption Law]]
 
|| <math>a \land (a \lor b) = a</math>
 
|| <math>a \lor (a \land b) = a</math>
 
|-
 
|[[/demorgan's law|DeMorgan's Law]]
 
|| <math>\overline{(a \land b)} = \bar a \lor \bar b</math>
 
|| <math>\overline{(a \lor b)} = \bar a \land \bar b</math>
 
|-
 
|[[/involution law|Involution Law]] || colspan="2" style="text-align:center;" | <math>\overline{(\bar a)} = a</math>
 
|}
 
 
It's interesting to note how easily the divergence between Boolean algebra and ordinary algebra can be seen from those laws. For example, consider <math>1+1</math>. From the [[/dominance law|Dominance Law]] we know the answer is <math>1</math>. This is clearly not true in ordinary algebra, where <math>1+1 = 2</math>. Likewise, from the Absorption Law we know that <math>1+(1 \cdot 1) = 1</math>, which does not hold in ordinary algebra either.
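Because Boolean variables take only two values, every axiom and law above can be verified exhaustively. A small illustrative Python check of DeMorgan's Law and the Absorption Law, using `&`, `|`, and a `NOT` helper as stand-ins for the Boolean operators:

```python
from itertools import product

def NOT(a):
    return 1 - a

# Exhaustively check both forms of DeMorgan's Law and the Absorption Law.
for a, b in product((0, 1), repeat=2):
    assert NOT(a & b) == (NOT(a) | NOT(b))   # DeMorgan, AND form
    assert NOT(a | b) == (NOT(a) & NOT(b))   # DeMorgan, OR form
    assert (a & (a | b)) == a                # Absorption, AND form
    assert (a | (a & b)) == a                # Absorption, OR form
print("all laws hold")
```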
 
 
=== Axioms explanation ===
 
The '''[[/identity axiom|Identity Axiom]]''' simply states that any expression ANDed with 1 or ORed with 0 results in the original expression. '''[[Identity element]]s''' or simply identities are elements that when used with their appropriate operator leave the original element unchanged. In the case of Boolean algebra, the identity element for AND is 1 and 0 for OR.
 
 
Example:
 
::<math>
 
\begin{align}
 
  f(a)    &= a \land 1 \lor 0 \\
 
          &= a \lor 0 & \mbox{By Identity Axiom} \\
 
          &= a & \mbox{By Identity Axiom} \\
 
\end{align}
 
</math>
 
 
The '''[[/inverse axiom|Inverse Axiom]]''' simply states that ANDing an expression with its [[negation|complement]] results in 0, while ORing an expression with its complement results in 1 - in each case, the [[identity element]] of the dual operation.
 
 
Example:
 
::<math>
 
\begin{align}
 
  f(a,b)    &= (a \land \bar a) \lor \overline{(b \land \bar b)}  \\
 
            &= 0 \lor \overline{(b \land \bar b)} & \mbox{By Inverse Axiom} \\
 
            &= 0 \lor \overline{0} & \mbox{By Inverse Axiom} \\
 
            &= 1 & \mbox{By Complement Law and Identity Axiom} \\
 
\end{align}
 
</math>
 
 
The '''[[commutative axiom|Commutative Axiom]]''' states that individual elements in an expression can be reordered without affecting the meaning of the expression. For example, <math>a \land b \land c = c \land a \land b</math>.
 
 
 
The '''[[associative axiom|Associative Axiom]]''' states that individual elements in an expression can be regrouped without affecting the meaning of the expression. For example <math>(a \land b) \land (c \land d) = a \land (b \land c) \land d</math>. Simply put, it makes no difference in what order you group the expressions when ANDing or ORing several expressions together.
 
 
 
The '''[[distributive axiom|Distributive Axiom]]''' states that ANDing distributes over ORing: ORing several expressions together and ANDing the result with another expression is equivalent to ANDing that expression with each of the individual expressions and then ORing the products. Oftentimes the Distributive Axiom is applied in reverse, in a way similar to factoring in ordinary algebra. For example, given <math>(a \land b) \lor (a \land c)</math>, the <math>a</math> can be factored out to give <math>a \land (b \lor c)</math>.
 
 
== Canonical Forms ==
 
{| class="wikitable" style="float: right; margin-left: 20px;"
 
! a !! b !! c !! [[minterm]]s !! notation
 
|-
 
| 0 || 0 || 0 || <math>\bar a \land \bar b \land \bar c</math> || <math>m_0</math>
 
|-
 
| 0 || 0 || 1 || <math>\bar a \land \bar b \land c</math> || <math>m_1</math>
 
|-
 
| 0 || 1 || 0 || <math>\bar a \land b \land \bar c</math> || <math>m_2</math>
 
|-
 
| 0 || 1 || 1 || <math>\bar a \land b \land c</math> || <math>m_3</math>
 
|-
 
| 1 || 0 || 0 || <math>a \land \bar b \land \bar c</math> || <math>m_4</math>
 
|-
 
| 1 || 0 || 1 || <math>a \land \bar b \land c</math> || <math>m_5</math>
 
|-
 
| 1 || 1 || 0 || <math>a \land b \land \bar c</math> || <math>m_6</math>
 
|-
 
| 1 || 1 || 1 || <math>a \land b \land c</math> || <math>m_7</math>
 
|}
 
Earlier we covered [[truth tables]], which act like signatures: there are many ways to represent the same logic, but they all result in the very same truth table. When two Boolean functions produce the exact same truth table, the two functions are said to be [[logically equivalent]]. The different representations of a truth table are known as '''forms'''. In an attempt to eliminate confusion, a few forms were chosen to be '''canonical''' or '''standard''' forms. Before we describe those forms we need to go over a few terms.
 
 
===Sum of Products (SoP) ===
 
A '''[[minterm]]''' is the Boolean product (ANDing) of <math>n</math> variables and contains all <math>n</math> variables of the function exactly once, in either normal or complemented form. For example, for the function <math>f(a, b)</math> with two variables, we can have the following minterms: <math>a \land b</math>, <math>a \land \bar b</math>, <math>\bar a \land b</math>, and <math>\bar a \land \bar b</math>. If the value assigned to a variable is 0, the variable is complemented; conversely, a variable remains uncomplemented if the value assigned to it is 1. Consider the table to the right: since in the first row the variables <math>a</math>, <math>b</math>, and <math>c</math> are all <math>0</math>, the corresponding minterm is <math>\bar a \land \bar b \land \bar c</math>
 
 
A '''[[sum term]]''' is the Boolean sum (ORing) of  variables as a subset of the possible variables or their complements. For example, for the function <math>f(a,b,c)</math>, the following are a few possible sum terms: <math>\bar c</math>, <math>a \lor b</math>, and <math>\bar a \lor b</math>.
 
 
The '''[[sum of minterms]]''', also called '''Sum of Products''' ('''SoP'''), '''canonical sum of products''', '''minterm expansion''', and '''canonical disjunctive normal form''' ('''CDNF'''), is a Boolean expression written as the sum (OR) of minterms, in which each term contains all the variables, either in normal or complemented form. For example, consider the following Boolean function.
 
:<math>f(a,b,c, d) = a(b+c+d)</math>
 
We can express that function in SoP form as follows.
 
:<math>f(a,b,c, d) = (abcd) + (abc \bar d) + (ab \bar c d) + (ab \bar c \bar d) + (a \bar bcd) + (a \bar b c \bar d) + (a \bar b \bar c d)</math>
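The expansion above can also be recovered mechanically: a minterm appears in the SoP form exactly when the function evaluates to 1 on that input row. An illustrative Python sketch:

```python
from itertools import product

# f(a,b,c,d) = a AND (b OR c OR d), the function being expanded above.
def f(a, b, c, d):
    return a & (b | c | d)

# The 1-minterms are the input rows on which f evaluates to 1.
minterms = [bits for bits in product((0, 1), repeat=4) if f(*bits)]
print(len(minterms))  # 7 minterms, matching the seven terms above
```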
 
 
{| class="wikitable" style="float: right; margin-left: 20px;"
 
! colspan="7" | Sum of Product Example
 
|-
 
! colspan="7" | Function
 
|-
 
| colspan="7" | <math>f(a,b,c) = a \lor (b \land c)</math>
 
|-
 
! a !! b !! c !! minterms !! notation !! <math>f</math> !! <math>f^\prime</math>
 
|-
 
| 0 || 0 || 0 || <math>\bar a \land \bar b \land \bar c</math> || <math>m_0</math> || 0 || 1
 
|-
 
| 0 || 0 || 1 || <math>\bar a \land \bar b \land c</math> || <math>m_1</math> || 0 || 1
 
|-
 
| 0 || 1 || 0 || <math>\bar a \land b \land \bar c</math> || <math>m_2</math> || 0 || 1
 
|-
 
| 0 || 1 || 1 || <math>\bar a \land b \land c</math> || <math>m_3</math> || 1 || 0
 
|-
 
| 1 || 0 || 0 || <math>a \land \bar b \land \bar c</math> || <math>m_4</math> || 1 || 0
 
|-
 
| 1 || 0 || 1 || <math>a \land \bar b \land c</math> || <math>m_5</math> || 1 || 0
 
|-
 
| 1 || 1 || 0 || <math>a \land b \land \bar c</math> || <math>m_6</math> || 1 || 0
 
|-
 
| 1 || 1 || 1 || <math>a \land b \land c</math> || <math>m_7</math> || 1 || 0
 
|}
 
The minterms for which the function produces a <math>1</math> are called '''1-minterms'''. Likewise, minterms for which the function produces a <math>0</math> are called '''0-minterms'''. Any Boolean function can be expressed as the sum of its 1-minterms. For example, consider the following function:
 
 
:<math>f(a,b,c) = a+(bc)</math>
 
 
The truth table for it is on the right. We can express this function as the '''sum of 1-minterms''':
 
 
:<math>f(a,b,c) = \bar a b c + a \bar b \bar c + a \bar b c + a b \bar c + abc</math>
 
 
We can replace the individual minterms with their respective indices, which are also shown in the table.
 
 
:<math>f(a,b,c) = m_3 + m_4 + m_5 + m_6 + m_7</math>
 
 
This function can now be written more concisely in the following way:
 
 
:<math>f(a,b,c) = \sum m(\mbox{list of 1-minterm indices})</math>
 
 
I.e.
 
 
:<math>f(a,b,c) = m_3 + m_4 + m_5 + m_6 + m_7 = \sum m(3,4,5,6,7)</math>
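The index shorthand can be computed directly, since each row's minterm index is just the binary value of its inputs. A small illustrative sketch:

```python
from itertools import product

# f(a,b,c) = a + bc, the function from the truth table on the right.
def f(a, b, c):
    return a | (b & c)

# Rows enumerate in binary order, so the enumeration index equals the
# minterm index; collect the rows where f = 1.
indices = [i for i, bits in enumerate(product((0, 1), repeat=3)) if f(*bits)]
print(indices)  # the 1-minterm indices of sum m(3,4,5,6,7)
```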
 
 
Likewise, the inverse of the function can be expressed as the '''sum of 0-minterms''':
 
 
::<math>
 
\begin{align}
 
  f^\prime(a,b,c)  &= \bar a \bar b \bar c + \bar a \bar b c + \bar a b \bar c \\
 
                    &= m_0 + m_1 + m_2 \\
 
                    &= \sum m(0,1,2) \\
 
\end{align}
 
</math>
 
 
 
===Product of Sum (PoS) ===
 
{| class="wikitable" style="float: right; margin-left: 20px;"
 
! colspan="7" | Product of Sums Example
 
|-
 
! colspan="7" | Function
 
|-
 
| colspan="7" | <math>f(a,b,c) = a \lor (b \land c)</math>
 
|-
 
! a !! b !! c !! maxterms !! notation !! <math>f</math> !! <math>f^\prime</math>
 
|-
 
| 0 || 0 || 0 || <math>a \lor b \lor c</math> || <math>M_0</math> || 0 || 1
 
|-
 
| 0 || 0 || 1 || <math>a \lor b \lor \bar c</math> || <math>M_1</math> || 0 || 1
 
|-
 
| 0 || 1 || 0 || <math>a \lor \bar b \lor c</math> || <math>M_2</math> || 0 || 1
 
|-
 
| 0 || 1 || 1 || <math>a \lor \bar b \lor \bar c</math> || <math>M_3</math> || 1 || 0
 
|-
 
| 1 || 0 || 0 || <math>\bar a \lor b \lor c</math> || <math>M_4</math> || 1 || 0
 
|-
 
| 1 || 0 || 1 || <math>\bar a \lor b \lor \bar c</math> || <math>M_5</math> || 1 || 0
 
|-
 
| 1 || 1 || 0 || <math>\bar a \lor \bar b \lor c</math> || <math>M_6</math> || 1 || 0
 
|-
 
| 1 || 1 || 1 || <math>\bar a \lor \bar b \lor \bar c</math> || <math>M_7</math> || 1 || 0
 
|}
 
A '''[[maxterm]]''' is the Boolean sum (ORing) of <math>n</math> variables and contains all <math>n</math> variables of the function just once, in either normal or complemented form. For example, for the function <math>f(a, b)</math> with two variables, we can have the following maxterms: <math>a \lor b</math>, <math>a \lor \bar b</math>, <math>\bar a \lor b</math>, and <math>\bar a \lor \bar b</math>.
 
 
A '''[[product term]]''' is the Boolean product (ANDing) of variables as a subset of the possible variables or their complements. For example, for function <math>f(a,b,c)</math>, the following are possible product terms: <math>\bar c</math>, <math>a \land \bar b</math>, and <math>c \land a</math>.
 
 
The '''[[product of maxterms]]''', also called '''Product of Sums''' ('''PoS'''), '''canonical product of sums''', '''maxterm expansion''', and '''canonical conjunctive normal form''' ('''CCNF'''), is a Boolean expression written as the product (AND) of maxterms, in which each term contains all the variables, either in normal or complemented form. For example, consider the following Boolean function.
 
 
:<math>f(a,b,c, d) = a(b+c+d)</math>
 
We can express that function in PoS form as follows.
 
:<math>
 
\begin{align}
 
f(a,b,c, d) = (a+b+c+d)\cdot(a+b+c+\bar d)\cdot(a+b+\bar c+d)\cdot(a+b+\bar c+\bar d)\cdot \\
 
      (a+\bar b+c+d)\cdot(a+\bar b+c+\bar d)\cdot(a+\bar b+\bar c+d)\cdot(a+\bar b+\bar c+\bar d)\cdot \\
 
      (\bar a+b+c+d)
 
\end{align}
 
</math>
 
 
The maxterms for which the function produces a <math>1</math> are called '''1-maxterms'''. Likewise, maxterms for which the function produces a <math>0</math> are called '''0-maxterms'''. Any Boolean function can be expressed as the product of its 0-maxterms. For example, consider the following function:
 
 
:<math>f(a,b,c) = a+(bc)</math>
 
 
The truth table for it is on the right. We can express this function as the '''product of 0-maxterms''':
 
 
:<math>f(a,b,c) = (a+b+c)\cdot(a+b+\bar c)\cdot(a+\bar b+c)</math>
 
 
We can replace the individual maxterms with their respective indices, which are also shown in the table.
 
 
:<math>f(a,b,c) = M_0 \cdot M_1 \cdot M_2</math>
 
 
This function can now be written more concisely in the following way:
 
 
:<math>f(a,b,c) = \prod M(\mbox{list of 0-maxterm indices})</math>
 
 
I.e.
 
 
:<math>f(a,b,c) = M_0 \cdot M_1 \cdot M_2 = \prod M(0,1,2)</math>
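The 0-maxterm indices can likewise be computed by collecting the rows where the function evaluates to 0. An illustrative sketch:

```python
from itertools import product

# f(a,b,c) = a + bc, the same function as in the SoP section.
def f(a, b, c):
    return a | (b & c)

# The 0-maxterms are the rows where f evaluates to 0.
indices = [i for i, bits in enumerate(product((0, 1), repeat=3))
           if not f(*bits)]
print(indices)  # the 0-maxterm indices of prod M(0,1,2)
```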
 
 
Likewise, the inverse of the function can be expressed as the '''product of 1-maxterms''':
 
 
::<math>
 
\begin{align}
 
  f^\prime(a,b,c)  &= (a+\bar b+\bar c)\cdot(\bar a+b+c)\cdot(\bar a +b+\bar c)\cdot(\bar a+\bar b+c)\cdot(\bar a+\bar b+\bar c) \\
 
                    &= M_3 \cdot M_4 \cdot M_5 \cdot M_6 \cdot M_7 \\
 
                    &= \prod M(3,4,5,6,7)
 
\end{align}
 
</math>
 
 
 
== Minimization ==
 
{{main|logic minimization}}
 
Generating Boolean functions for certain logic is often pretty straightforward. On many occasions, however, those functions may be more complicated than they have to be. '''Minimization''' is the process of transforming a Boolean function into its smallest [[logically equivalent|equivalent]] form. While the best circuit design depends on the technology involved, minimization usually translates directly into smaller circuits and lower implementation costs (area, time, etc.).
 
 
One way to minimize a Boolean expression is to simply massage the expression using the axioms and laws described earlier. Such a process is heuristic in nature, and thus there is no single algorithm or set of rules that must be followed. For example, consider the function <math>f(a,b,c) = ab+bc+\bar bc</math>. It can be hand-minimized as:


::<math>

\begin{align}

  f(a,b,c)  &= ab+bc+\bar bc \\
 
                &= ab+c(b+\bar b)  && \mbox{By Distributive Axiom} \\
 
                &= ab+c(1)          && \mbox{By Inverse Axiom}  \\
 
                &= ab+c            && \mbox{By Identity Axiom }
 
\end{align}
 
</math>
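Such hand minimizations are easy to double-check exhaustively. A small illustrative Python verification that the minimized form is logically equivalent to the original:

```python
from itertools import product

# Original expression: ab + bc + (NOT b)c
def original(a, b, c):
    return (a & b) | (b & c) | ((1 - b) & c)

# Hand-minimized expression: ab + c
def minimized(a, b, c):
    return (a & b) | c

# Equivalent functions share the same truth table, so check every row.
assert all(original(*bits) == minimized(*bits)
           for bits in product((0, 1), repeat=3))
print("ab + bc + (not b)c  ==  ab + c")
```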
 
 
=== Karnaugh Map ===
 
{{main|Karnaugh Map}}
 
While minimizing functions through manual massaging of expressions works for basic functions, it becomes incredibly complex and time-consuming as the number of terms and variables involved increases. A '''[[Karnaugh Map]]''' (or '''K-Map''') is a technique that provides a systematic way to simplify as well as manipulate Boolean expressions.
 
 
{| class="wikitable" style="float: right; margin: 10px;"
 
! colspan="4" | [[Truth Table]]
 
|-
 
! colspan="4" | <math>f(a,b,c) = a \bar b + c(\bar a + b)</math>
 
|-
 
! colspan="3" | Inputs || Outputs
 
|-
 
! A !! B !! C !! Q
 
|-
 
| 0 || 0 || 0 || 0
 
|-
 
| 0 || 0 || 1 || 1
 
|-
 
| 0 || 1 || 0 || 0
 
|-
 
| 0 || 1 || 1 || 1
 
|-
 
| 1 || 0 || 0 || 1
 
|-
 
| 1 || 0 || 1 || 1
 
|-
 
| 1 || 1 || 0 || 0
 
|-
 
| 1 || 1 || 1 || 1
 
|}
 
A K-Map is actually a [[truth table]] that has been rearranged in a number of important ways to make it possible to visually find the expression we're looking for. An <math>n</math>-variable K-Map is a table consisting of <math>2^n</math> cells, each representing a single [[minterm]]. For each minterm where the function results in a <math>1</math>, we put a 1. Conversely, for each minterm where the function results in a <math>0</math>, we put a 0 - or, more commonly, we leave the cell blank.
 
 
[[File:3-var blank k-map.svg|150px|left]]
 
Let's consider the following 3-variable Boolean function: <math>f(a,b,c) = a \bar b + c(\bar a + b)</math>. Because this function has 3 variables, we need a 3-variable K-Map, which means we're working with an 8-cell table. Note the special arrangement of the columns and rows: between two adjacent rows or columns, only a single variable is allowed to transition from 0 to 1 or 1 to 0. As a result, after "01" we move to "11" instead of "10", which would have both variables transitioning at once. For 3 variables, the table is arranged in 4 columns of 2 rows each; 2 of the variables span the columns while the last variable spans the rows. Our choice of which variable goes where is arbitrary - any arrangement will work so long as the K-Map is constructed correctly from the truth table.
 
 
[[File:a and not b or c k-map (1).svg|150px|left]]
 
For every minterm in the truth table where the output is <math>1</math>, we mark a 1 on the K-Map. The cells that result in <math>0</math> are left blank for convenience. Once the K-Map represents the entire truth table, visually inspect the table and locate the largest groups of adjacent cells containing 1s. Those groups must contain a power-of-two number of cells; i.e. a group can only be of size 1, 2, 4, 8, etc. A group of 3 cells must thus be broken down into 2 groups of 2 where one cell overlaps (see the [[k-map]] article for a more detailed explanation). In our K-Map, we have 1 group of 4 cells and another group of just 2 cells.
 
 
[[File:a and not b or c k-map (2).svg|150px|left]]
 
For each group marked down, we look at which variables are common to all the cells in the group. For the group consisting of four cells, neither ''a'' nor ''b'' is common since they both change, but ''c'' is always 1 across all four cells. Therefore the expression for that group is simply <math>c</math>. For the second group, with just two cells, ''c'' is no longer common to the group. However, both ''a'' and ''b'' are now common since neither of them changes. Because ''a'' is always 1 and ''b'' is always 0 in that group, the expression for the second group is <math>a \bar b</math>. The final simplified expression is the SoP: <math> a \bar b + c</math>. I.e.:
 
 
::<math>f(a,b,c) = a \bar b + c(\bar a + b) = a \bar b + c</math>
 
 
=== Quine-McCluskey Method ===
 
{{main|quine-mccluskey method|l1=Quine-McCluskey Method}}
 
The '''Quine-McCluskey Method''' ('''QMM''') is an algorithm developed for minimizing Boolean expressions. This algorithm is functionally identical to how a [[K-Map]] works but orients itself in tabular form. Due to its algorithmic nature, it's much more suitable to be implemented as a program and can be easily applied to any number of variables and terms. QMM is used in various [[electronic design automation|EDA]] tools.
 
 
==Boolean functions==
 
{{main|/function|l1=Boolean Function}}
 
A '''Boolean function''' is an algebraic function of the form <math>f:\mathbb{B}^n \to \mathbb{B}</math> where <math>\mathbb{B} = \{0, 1\}</math> is defined as [[Boolean domain]] and <math>n \ge 0</math>. There are <math>2^{2^n}</math> possible Boolean functions for <math>n</math> Boolean variables.
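The count <math>2^{2^n}</math> can be illustrated by enumeration: a Boolean function is fully determined by the output it assigns to each of the <math>2^n</math> input rows. An illustrative Python sketch:

```python
from itertools import product

# A Boolean function of n variables assigns one output bit to each of
# its 2**n input rows, giving 2**(2**n) distinct functions in total.
n = 2
rows = list(product((0, 1), repeat=n))               # the 2**n input rows
functions = list(product((0, 1), repeat=len(rows)))  # every output column
print(len(functions))  # 16 distinct 2-variable Boolean functions
```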
 
 
== Properties of functions ==
 
{{main|/function properties|l1=Boolean Function Properties}}
 
The properties of Boolean functions have been a subject of extensive research especially in conjunction with the [[switching theory]]. Understanding the properties of the Boolean functions has proven to help in various stages of logic design (e.g. [[logic synthesis]]). Below are some of the more important properties of Boolean functions.
 
 
=== Duality Principle ===
 
{{main|/duality principle|l1=Duality Principle}}
 
Earlier on it was pointed out that every axiom and every law has an OR form and an AND form. The '''Duality Principle''' simply states that when you take a valid Boolean statement and interchange all <math>\land</math> with <math>\lor</math> and all <math>1</math>s with <math>0</math>s and vice versa, you obtain its '''dual''', which is also a valid Boolean statement.
 
 
Consider the following statement <math>\overline{(a \land b)} = \bar a \lor \bar b</math> which happens to be [[DeMorgan's Law]]. Then by the duality principle we also know that <math>\overline{(a \lor b)} = \bar a \land \bar b</math> must also be true. Indeed that is the second form of the law.
 
==== Self-dual function ====
 
{{main|/self-dual function|l1=Self-dual Boolean function}}
 
A Boolean function is said to be a '''self-dual function''' if it is [[logically equivalent|equivalent]] to the same function with all inputs and outputs inverted. For example consider the [[majority function]]. It is defined as:
 
::<math>\text{MAJ}(x,y,z) = (x \land y) \lor (x \land z) \lor (y \land z)</math>
 
 
We can derive its dual function by inverting the inputs and outputs:
 
::<math>
 
\begin{align}
 
      \text{MAJ}^d(x,y,z) &= \overline{\text{MAJ}(\bar x,\bar y, \bar z)} & \mbox{By Duality Principle} \\
 
                          &= \overline{(\bar x \land \bar y) \lor (\bar x \land \bar z) \lor (\bar y \land \bar z)} \\
 
                          &= \overline{(\bar x \land \bar y)} \land \overline{(\bar x \land \bar z)} \land \overline{(\bar y \land \bar z)} & \mbox{By DeMorgan's Law} \\
 
                          &= (x \lor y) \land (x \lor z) \land (y \lor z) & \mbox{By DeMorgan's Law} \\
 
                          &= \text{MAJ}(x,y,z) \\
 
\end{align}
 
</math>
 
 
Since <math>\text{MAJ}(x,y,z) = \text{MAJ}^d(x,y,z)</math>, the [[majority function]] is said to be a self-dual function.
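This derivation can also be confirmed by brute force. An illustrative Python check that the majority function equals its dual on every input:

```python
from itertools import product

# MAJ(x,y,z) = xy + xz + yz
def MAJ(x, y, z):
    return (x & y) | (x & z) | (y & z)

# Self-dual: MAJ(x,y,z) == NOT(MAJ(NOT x, NOT y, NOT z)) on every input.
for x, y, z in product((0, 1), repeat=3):
    assert MAJ(x, y, z) == 1 - MAJ(1 - x, 1 - y, 1 - z)
print("MAJ is self-dual")
```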
 
===Monotonic functions===
 
{{main|/monotonic function|l1=Monotonic Functions}}
 
The '''monotonicity property''' of Boolean functions says that for any two Boolean vectors <math>a, b \in \{0,1\}^n</math> with <math>a_i \ge b_i</math> for all <math>i</math>: if <math>f(a) \ge f(b)</math> then <math>f</math> is a '''monotonically increasing''' function, and conversely, if <math>f(a) \le f(b)</math> then <math>f</math> is a '''monotonically decreasing''' function. In other words, a monotonically increasing (or decreasing) function never has its output change from 1 to 0 (or from 0 to 1 for decreasing) when an input changes from 0 to 1. Monotonically increasing functions can be constructed with just AND and OR operations, without negation. By extension, [[monotone circuit]]s are circuits that use only [[AND]] and [[OR]] [[logic gates|gates]].
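The definition suggests a simple exhaustive test: flip each 0 input to 1 and make sure the output never drops. An illustrative Python sketch (the `is_monotone_increasing` helper is our own naming):

```python
from itertools import product

def is_monotone_increasing(f, n):
    # Flipping any single input from 0 to 1 must never drop the output
    # from 1 to 0.
    for bits in product((0, 1), repeat=n):
        for i in range(n):
            if bits[i] == 0:
                flipped = bits[:i] + (1,) + bits[i + 1:]
                if f(*bits) > f(*flipped):
                    return False
    return True

print(is_monotone_increasing(lambda a, b: a & b, 2))  # AND is monotone
print(is_monotone_increasing(lambda a: 1 - a, 1))     # NOT is not
```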
 
 
===Unate functions===
 
{{main|/unate function|l1=Unate Boolean Function}}
 
{{empty section}}
 
 
===Linear functions===
 
{{main|/linear function|l1=Linear Boolean Functions}}
 
A '''linear function''' is either the constant <math>0</math> function or the exclusive OR of <math>n</math> variables. That is, it is one that has the form
 
 
::<math>f(x_0, x_1, \ldots, x_n) = r_0 \oplus \bigoplus^n_{i=1}r_ix_i; r_i \in \mathbb{B} = \{0, 1\}</math>
 
 
For example, consider the following function <math>f(a,b) = a \land b \lor \bar a \land \bar b</math>. Since we can write it in the polynomial form above (i.e. <math>f = 1 \oplus a \oplus b</math>), it's a linear function.
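This equivalence is easy to verify exhaustively. An illustrative Python check that the XNOR expression matches its XOR-polynomial form:

```python
# f(a,b) = ab + (NOT a)(NOT b), i.e. XNOR, should equal 1 XOR a XOR b.
for a in (0, 1):
    for b in (0, 1):
        xnor = (a & b) | ((1 - a) & (1 - b))
        linear = 1 ^ a ^ b
        assert xnor == linear
print("XNOR is linear")
```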
 
 
===0/1 Preserving functions===
 
{{main|/0-preserving function|/1-preserving function|l1=0-Preserving Boolean Function|l2=1-Preserving Boolean Function}}
 
A Boolean function is said to be '''0-preserving''' if <math>f(0, 0, \ldots, 0) = 0</math>. Likewise a Boolean function is said to be '''1-preserving''' if <math>f(1, 1, \ldots, 1) = 1</math>.
 
 
===functional completeness===
 
{{main|/functional completeness|l1=Functional Completeness}}
 
A set of Boolean functions <math>\mathcal{F}</math> is said to be universal or complete iff, for each of the five properties - [[/0-preserving function|0-preserving]], [[/1-preserving function|1-preserving]], [[/self-dual function|self-dual]], [[/monotonically increasing function|monotonically increasing]], and [[/linear function|linear]] - the set contains at least one function that does not have that property. A complete set can form any other Boolean expression by combining the functions in the set.
 
 
== Complementary Function==
 
{{empty section}}
 
 
== Additional Logical connectives==
 
{{main|Logical connective}}
 
In addition to the basic AND, OR, and NOT operators, there are a number of other logical connectives.
 
=== XOR ===
 
{{main|xor}}
 
{{empty section}}
 
 
=== conditional===
 
{{main|conditional}}
 
<div style="float: right;">{{truth table/if}}</div>
 
It's often necessary to express things in the form <math>\text{If } A\text{ then }B</math>. In that form, <math>A \rightarrow B</math> is called a '''conditional''', <math>A</math> is the '''antecedent''', and <math>B</math> is the '''consequent'''. In order for the expression to hold, when <math>A</math> is true, <math>B</math> must also be true. Conversely, when <math>A</math> is false, the statement is trivially true regardless of <math>B</math>.
 
{{clear}}
 
 
=== biconditional===
 
{{main|biconditional}}
 
<div style="float: right;">{{truth table/iff}}</div>
 
A proposition in the form <math>A \leftrightarrow B</math> is called a '''{{ba|biconditional}}''' and represents the truth function: ''A'' if and only if ''B''. Its truth table is thus identical to that of <math>(A \rightarrow B)\land (B \rightarrow A)</math>. In fact, the standard method of proving a biconditional is to prove <math>A \rightarrow B</math> and then to prove <math>B \rightarrow A</math>.
 
{{clear}}
 
== Propositional calculus==
 
{{main|propositional calculus}}
 
{{empty section}}
 
 
== Posets and Lattices==
 
{{empty section}}
 
 
== Diagrammatic representations ==
 
{{empty section}}
 
===Venn Diagram===
 
{{empty section}}
 
===Logic Diagram===
 
{{empty section}}
 
===Hasse Diagram===
 
{{empty section}}
 
 
== Additional topics ==
 
{{empty section}}
 
 
== See also ==
 
* [[Karnaugh Map]]
 
 
[[wikidata id::Q173183| ]]
 
