
INTRODUCTION TO QUANTUM COMPUTING

Mathematical Formalism

Linear Vector Space (LVS)


This is not a self-contained course on quantum
computation.
The idea of this course is to introduce the physics
behind quantum computing and some basic concepts
in quantum computing which will help students to take
higher level courses on quantum computing.
Computation

• Processing of information/data using a defined set of rules.


• Classical computing – fundamental unit of information is a bit
• Bit is a physical system which has two distinct states: 0 & 1
• For example, a transistor.
• If the transistor is ‘on’, the bit has a value of 1; if the transistor is ‘off’, the bit has a value
of 0.
• Every instruction fed to a computer and output derived from it is a pattern of such 0s
and 1s.
• Classical information is encoded in states of classical bits
Computation
• Logic gates manipulate these patterns using a set of rules called Boolean
algebra.
• All this logic flows at blazing speeds, enabling humankind to make rapid
technological advancements.
Quantum Computation
• Quantum computation – Computing harnessing the laws of quantum
physics
• In quantum computing, information is encoded in the states of quantum
mechanical systems called quantum bits – qubits
• The information encoded in qubits is manipulated using quantum gates
• Quantum computers are devices that manipulate quantum bits, or qubits.
• Instead of using a transistor to perform this function, quantum computers
directly encode information onto elementary particles like electrons and
photons, or even entire atoms.
• These particles are thus part of a quantum computer’s hardware, and
because they play by quantum rules, they execute their functions as
qubits through strange quantum mechanical effects.
Qubit
• A qubit (or quantum bit) is the quantum mechanical analogue of a
classical bit.
• Any quantum mechanical system (such as an atom, a nuclear spin, or a
polarized photon) with two distinct states can be realized as a qubit.

• In quantum computing, the qubits are manipulated using quantum gates.


• A quantum gate is a device which performs a fixed unitary operation on
selected qubits in a fixed period of time
• To understand more about qubits and quantum gates, we need to know how
to represent the states of quantum mechanical systems and what types
of operations are allowed on quantum mechanical systems.
Historical Evolution of QM
Quantum Mechanics was originally introduced by two distinct
approaches.
1. Matrix Mechanics Approach – W. Heisenberg (1925)
Its mathematical language is Linear algebra
(Born, Jordan, Dirac)
2. Wave Mechanics Approach – E. Schrödinger (1926)
Its mathematical language is partial differential
equations.
Historical Evolution of QM
• John von Neumann (1932) proved Schrödinger and Heisenberg theories
are mathematically equivalent and also put quantum theory on a firm
theoretical basis.
• Von Neumann was the first to establish a rigorous mathematical
framework for quantum mechanics, known as the Dirac–von Neumann
axioms (Mathematical Foundations of Quantum Mechanics).
• In this formalism, the state of a quantum system could be represented by
a vector in a (complex) Hilbert space that, in general, could be infinite-
dimensional even for a single particle.
• Observable quantities such as position or momentum are represented as
linear operators acting on the Hilbert space associated with the quantum
system.
• The physics of quantum mechanics was thereby reduced to the
mathematics of Hilbert spaces and linear operators acting on them.
• This new mathematical formulation included as special cases the
formulations of both Heisenberg and Schrödinger.
In quantum mechanics the state of a physical system is
represented by a vector in a Hilbert space: a complete complex
linear vector space with an inner product.
Observables are represented by Hermitian linear operators.
Linear Vector Space (LVS)
• Consider a set 𝑉 = {𝜓, 𝜙, 𝜒, … } together with a set (field) of scalars 𝑆 =
{𝑎, 𝑏, 𝑐, … } along with two algebraic rules called (vector) addition and
scalar multiplication.
• The set 𝑉 (along with the vector addition and scalar multiplication rule) is
said to be a linear vector space (LVS) on the field of scalars 𝑆 if the
following conditions are satisfied:
• Addition rule:
• Closure: ∀ 𝜓, 𝜙 ∈ 𝑉 , 𝜓 + 𝜙 ∈ 𝑉
• Commutativity: ∀ 𝜓, 𝜙 ∈ 𝑉 , 𝜓 + 𝜙 = 𝜙 + 𝜓
• Associativity: ∀ 𝜓, 𝜙, 𝜒 ∈ 𝑉 , (𝜓 + 𝜙) + 𝜒 = 𝜓 + (𝜙 + 𝜒)
• Existence of zero element (additive identity): ∀ 𝜓 ∈ 𝑉, ∃ 0 ∈ 𝑉 | 𝜓 + 0 =
0+𝜓 =𝜓
• Existence of additive inverse element: ∀ 𝜓 ∈ 𝑉, ∃ − 𝜓 ∈ 𝑉 | 𝜓 + (−𝜓) =
(−𝜓) + 𝜓 = 0
Linear Vector Space (LVS)
• Multiplication rule:
• Closure: ∀ 𝜓, 𝜙 ∈ 𝑉 and ∀ 𝑎, 𝑏 ∈ 𝑆, 𝑎𝜓 + 𝑏𝜙 ∈ 𝑉
• Distributivity: ∀ 𝜓, 𝜙 ∈ 𝑉 and ∀ 𝑎, 𝑏 ∈ 𝑆, 𝑎(𝜓 + 𝜙) = 𝑎𝜓 + 𝑎𝜙 and
(𝑎 + 𝑏)𝜓 = 𝑎𝜓 + 𝑏𝜓
• Associativity: ∀ 𝜓 ∈ 𝑉 and ∀ 𝑎, 𝑏 ∈ 𝑆, 𝑎(𝑏𝜓) = (𝑎𝑏)𝜓
• Existence of unit scalar and zero scalar: ∀ 𝜓 ∈ 𝑉, ∃ 𝐼, 𝑜 ∈ 𝑆 | 𝐼𝜓 =
𝜓, 𝑜𝜓 = 0

• If all the above conditions are satisfied, then 𝑉 is called a linear vector
space and the elements 𝜓, 𝜙, … ∈ 𝑉 are called vectors.
• If 𝑆 is a set of real numbers, then 𝑉 is a real vector space; If 𝑆 is a set of
complex numbers, then 𝑉 is a complex vector space.
LVS – Some Examples
• Ordinary ‘vectors’ in 1/2/3 D space
• Set of all real/complex numbers form a LVS
• The set of all integers, however, does not form a LVS over the reals: it is not closed under scalar multiplication (e.g. 0.5 × 1 ∉ ℤ)
• Set of all (n x 1) column matrices form a LVS
• Set of all (1 x n) row matrices form a LVS
• Set of all (n x n) matrices form a LVS
• Set of all polynomials of degree three or less (𝑃₃ = {𝑎₀ + 𝑎₁𝑥 + 𝑎₂𝑥² + 𝑎₃𝑥³ | 𝑎₀, 𝑎₁, 𝑎₂, 𝑎₃ ∈ ℝ}) forms a LVS
• Set of polynomials with real coefficients (𝑃ₙ = {𝑎₀ + 𝑎₁𝑥 + 𝑎₂𝑥² + ⋯ + 𝑎ₙ𝑥ⁿ | 𝑛 ∈ ℕ, 𝑎₀, 𝑎₁, 𝑎₂, … , 𝑎ₙ ∈ ℝ}) forms a LVS
• Set of all real-valued functions of one natural number variable {𝑓|𝑓: 𝑁 → 𝑅} is
a LVS
• Set of all real-valued functions of one real variable {𝑓|𝑓: 𝑅 → 𝑅} is a LVS
• Set {𝑓: ℝ → ℝ | d²𝑓/d𝑥² + 𝑓 = 0} is a LVS
• The set of all physical states of a quantum mechanical system forms a LVS.
Dirac Notation (Bracket Notation)

• The standard quantum mechanical notation (called the Dirac


notation) for a vector in a vector space is:
|𝜓⟩
• 𝜓 is a label for the vector (any label is valid, although we
prefer to use simple labels like 𝜓, 𝜙, 𝑎, 𝑏, …).
• The | ⋅⟩ notation is used to indicate that the object is a vector.
• The entire object |𝜓⟩ is sometimes called a ket or a ket vector.
• The zero vector is denoted as 0 and not |0⟩.
• In quantum computing |0⟩ is used to represent a state of a qubit.
Vectors used in quantum computing

• Column and row vectors


Linear Dependence and Independence of vectors
• Consider a subset of non-zero vectors |𝑣₁⟩, |𝑣₂⟩, … , |𝑣ₙ⟩ ∈ 𝑉
and consider a relation among these vectors as given below:
𝑎₁|𝑣₁⟩ + 𝑎₂|𝑣₂⟩ + ⋯ + 𝑎ₙ|𝑣ₙ⟩ = 0 … (1),
(𝑎₁, 𝑎₂, … , 𝑎ₙ are complex numbers)
• These vectors are said to be linearly independent if the only
solution to equation (1) is
𝑎1 = 0, 𝑎2 = 0, … , 𝑎𝑛 = 0
• If a non-zero solution to Eq. (1) exists, then the vectors are
linearly dependent.
Example: Lin. Dependence and Independence
• 𝑉 = 𝐶ⁿ, the set of all complex column matrices of order 𝑛 × 1. For simplicity, let us take 𝑛 = 2.
• This space 𝐶² consists of all complex column matrices of order 2 × 1, i.e., it contains vectors of the form
|𝑣⟩ = (𝑧₁, 𝑧₂)ᵀ,
where 𝑧₁ and 𝑧₂ are complex numbers.
• Consider this subset of 𝐶²:
{ (1, 0)ᵀ, (0, 1)ᵀ, (1, 1)ᵀ, (1, −1)ᵀ }
• Consider another subset of 𝐶²:
{ (1, 2)ᵀ, (2, 4)ᵀ, (1, 𝑖)ᵀ, (2+𝑖, 1)ᵀ, (5+𝑖, 2+𝑖)ᵀ }
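The independence of the subsets above can be decided numerically: stack the vectors as columns of a matrix and compare the matrix rank with the number of vectors. A minimal numpy sketch (the helper name `is_independent` is my own):

```python
import numpy as np

def is_independent(*vectors):
    # Vectors are linearly independent iff the matrix having them as
    # columns has rank equal to the number of vectors.
    m = np.column_stack(vectors)
    return np.linalg.matrix_rank(m) == len(vectors)

# Vectors from the two subsets of C^2 above:
e1, e2 = np.array([1, 0]), np.array([0, 1])
u1, u2 = np.array([1, 1]), np.array([1, -1])
w1, w2 = np.array([1, 2]), np.array([2, 4])      # w2 = 2*w1 -> dependent
c1, c2 = np.array([1, 1j]), np.array([2 + 1j, 1])

print(is_independent(e1, e2))      # independent
print(is_independent(w1, w2))      # dependent: w2 = 2*w1
print(is_independent(e1, e2, u1))  # any 3 vectors in C^2 are dependent
```

Note that any set of more than two vectors in 𝐶² is automatically dependent, which is why the four- and five-element subsets above cannot be independent as a whole.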
Basis and Dimension

• A set of linearly independent vectors |𝑣₁⟩, |𝑣₂⟩, … , |𝑣ₙ⟩ ∈ 𝑉 is said to form
a basis (basis set, spanning set) for 𝑉 if any vector |𝑣⟩ ∈ 𝑉 can be written
as a linear combination of vectors in the basis set as follows:
|𝑣⟩ = 𝑎₁|𝑣₁⟩ + 𝑎₂|𝑣₂⟩ + ⋯ + 𝑎ₙ|𝑣ₙ⟩
• The expansion coefficients 𝑎ᵢ are in general complex numbers.
• The vectors |𝑣₁⟩, |𝑣₂⟩, … , |𝑣ₙ⟩ are called the basis vectors and we say that
these vectors span the entire vector space 𝑉.
• The set {|𝑣ᵢ⟩} = {|𝑣₁⟩, |𝑣₂⟩, … , |𝑣ₙ⟩} consisting of basis vectors is called a
basis set.
Basis and Dimension

• The dimension of a vector space is the number of basis vectors in a basis


set.
• Generally, a vector space may have more than one basis set.
• It can be shown that any two basis sets of the same vector space 𝑉
contain the same number of elements.
Example: Basis and Dimension
• Let us take the vector space 𝐶²: the space of all complex column matrices
of order 2 × 1, i.e., 𝐶² contains vectors of the form
|𝑣⟩ = (𝑧₁, 𝑧₂)ᵀ,
where 𝑧₁ and 𝑧₂ are complex numbers.
• A basis set for this space is the set consisting of the vectors
|0⟩ ≡ (1, 0)ᵀ, |1⟩ ≡ (0, 1)ᵀ
• Any arbitrary vector |𝑣⟩ ≡ (𝑧₁, 𝑧₂)ᵀ can be expanded in terms of the basis
vectors as
(𝑧₁, 𝑧₂)ᵀ = 𝑧₁ (1, 0)ᵀ + 𝑧₂ (0, 1)ᵀ
• i.e.,
|𝑣⟩ = 𝑧₁|0⟩ + 𝑧₂|1⟩
• In quantum computing, the basis states |0⟩ and |1⟩ are frequently used
and are referred to as the computational basis or standard basis.
Example: Basis and Dimension
• Another basis set for 𝐶² is the set consisting of the vectors
|𝑣₁′⟩ ≡ (1/√2)(1, 1)ᵀ, |𝑣₂′⟩ ≡ (1/√2)(1, −1)ᵀ
• In quantum computing |𝑣₁′⟩ and |𝑣₂′⟩ are referred to as |+⟩ and |−⟩
respectively.
• Any vector |𝑣⟩ ≡ (𝑧₁, 𝑧₂)ᵀ can be expanded in terms of these basis vectors as
(𝑧₁, 𝑧₂)ᵀ = ((𝑧₁ + 𝑧₂)/√2) |𝑣₁′⟩ + ((𝑧₁ − 𝑧₂)/√2) |𝑣₂′⟩
• For example:
(3, 5)ᵀ = ((3 + 5)/√2) |𝑣₁′⟩ + ((3 − 5)/√2) |𝑣₂′⟩
RHS = (8/√2)·(1/√2)(1, 1)ᵀ − (2/√2)·(1/√2)(1, −1)ᵀ = ½(8 − 2, 8 + 2)ᵀ = ½(6, 10)ᵀ = (3, 5)ᵀ = LHS

• We see that 𝐶 2 has more than one basis set.


• Since the number of vectors in a basis set of 𝐶 2 is 2, the dimension of 𝐶 2
is 2.
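The worked expansion of (3, 5)ᵀ in the primed basis can be verified numerically; a small numpy sketch (variable names are my own):

```python
import numpy as np

v1p = np.array([1, 1]) / np.sqrt(2)    # |v1'> (written |+> in QC)
v2p = np.array([1, -1]) / np.sqrt(2)   # |v2'> (written |->)

z1, z2 = 3, 5                          # the example vector (3, 5)^T
v = np.array([z1, z2])

c1 = (z1 + z2) / np.sqrt(2)            # coefficient of |v1'>
c2 = (z1 - z2) / np.sqrt(2)            # coefficient of |v2'>

# The expansion reproduces the original vector
assert np.allclose(c1 * v1p + c2 * v2p, v)
```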
Inner Product (Scalar Product)
• Given a vector space, it is sometimes possible to define an
additional structure on this space called the inner product.
• An inner product is a function which takes as input two vectors |𝑣⟩
and |𝑤⟩ from a vector space and produces a complex number as
output.
• The inner product of two vectors |𝑣⟩ and |𝑤⟩ in Dirac notation is
denoted as
⟨𝑤|𝑣⟩
• The inner product has to satisfy the following requirements:
⟨𝑤|𝑣⟩ = ⟨𝑣|𝑤⟩*
⟨𝑤| (𝑎₁|𝑣₁⟩ + 𝑎₂|𝑣₂⟩) = 𝑎₁⟨𝑤|𝑣₁⟩ + 𝑎₂⟨𝑤|𝑣₂⟩
⟨𝑎₁𝑤₁ + 𝑎₂𝑤₂|𝑣⟩ = 𝑎₁*⟨𝑤₁|𝑣⟩ + 𝑎₂*⟨𝑤₂|𝑣⟩
⟨𝑣|𝑣⟩ ≥ 0, with equality if and only if |𝑣⟩ = 0.
• Vector spaces equipped with an inner product are called inner
product spaces.
Example: Inner Product
• Consider the space 𝐶ⁿ.
• For any two vectors |𝑣⟩ = (𝑧₁, …, 𝑧ₙ)ᵀ and |𝑤⟩ = (𝑦₁, …, 𝑦ₙ)ᵀ in 𝐶ⁿ, the inner
product can be defined by
⟨𝑤|𝑣⟩ = (𝑦₁* ⋯ 𝑦ₙ*)(𝑧₁, …, 𝑧ₙ)ᵀ = Σᵢ 𝑦ᵢ* 𝑧ᵢ
• It is easy to show that the above rule satisfies all the requirements
for an inner product.
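This rule is exactly what numpy's `vdot` computes (it conjugates its first argument, matching ⟨𝑤|𝑣⟩ = Σᵢ 𝑦ᵢ* 𝑧ᵢ). A quick sketch with arbitrary sample vectors:

```python
import numpy as np

v = np.array([1 + 1j, 2 + 0j])   # |v> in C^2, components z_i (arbitrary)
w = np.array([1j, 3 - 1j])       # |w>, components y_i (arbitrary)

# <w|v> = sum_i y_i^* z_i ; np.vdot conjugates its FIRST argument
inner = np.vdot(w, v)
assert np.isclose(inner, np.sum(np.conj(w) * v))

# Conjugate symmetry: <w|v> = <v|w>^*
assert np.isclose(inner, np.conj(np.vdot(v, w)))

# Positivity: <v|v> is real and non-negative
assert np.vdot(v, v).real > 0 and np.isclose(np.vdot(v, v).imag, 0)
```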
Hilbert Space
• State space of quantum mechanics is Hilbert space.
• In the finite dimensional complex vector spaces that come up
in quantum computation and quantum information, a Hilbert
space is exactly the same thing as an inner product space.
• From now on, we use the two terms interchangeably.
Norm, Normal (Unit) vector

• We define the norm of a vector |𝑣⟩ (denoted by ‖𝑣‖) as
‖𝑣‖ = √⟨𝑣|𝑣⟩
• A vector |𝑣⟩ will be called a normal or unit vector if ‖𝑣‖ = 1.
• We also say that |𝑣⟩ is normalized if ‖𝑣‖ = 1.
Orthogonal vectors, Orthonormal vectors
• Two vectors |𝑣⟩ and |𝑤⟩ are said to be orthogonal if their
inner product is zero:
⟨𝑤|𝑣⟩ = 0.
• Two vectors |𝑣₁⟩ and |𝑣₂⟩ are said to be orthonormal if the
following conditions are simultaneously satisfied:
‖𝑣₁‖ = 1, ‖𝑣₂‖ = 1, ⟨𝑣₂|𝑣₁⟩ = 0.
• The above conditions can be written in a compact form as:
⟨𝑣ᵢ|𝑣ⱼ⟩ = 𝛿ᵢⱼ,
• where 𝑖, 𝑗 = 1, 2 and 𝛿ᵢⱼ is the Kronecker delta symbol.

• A set of vectors forms an orthonormal set, if each vector in


this set is a unit vector, and distinct vectors in this set are
orthogonal to each other.
Orthonormal basis set
• If the basis set {|𝑣₁⟩, |𝑣₂⟩, … , |𝑣ₙ⟩} of a vector space 𝑉 is an
orthonormal set, then this basis set will be called an
orthonormal basis set.
• For example, one orthonormal basis set for 𝐶² is
|0⟩ ≡ (1, 0)ᵀ, |1⟩ ≡ (0, 1)ᵀ
• It is easy to verify that this basis set is an orthonormal basis
set.
• Another orthonormal basis set for 𝐶² is
|+⟩ ≡ (1/√2)(1, 1)ᵀ, |−⟩ ≡ (1/√2)(1, −1)ᵀ
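Both bases can be checked against the condition ⟨𝑣ᵢ|𝑣ⱼ⟩ = 𝛿ᵢⱼ numerically; in the sketch below (the helper name `is_orthonormal` is mine) the Gram matrix of inner products must come out as the identity:

```python
import numpy as np

def is_orthonormal(basis):
    # Gram matrix of inner products <v_i|v_j>; orthonormal iff identity
    g = np.array([[np.vdot(vi, vj) for vj in basis] for vi in basis])
    return np.allclose(g, np.eye(len(basis)))

ket0, ket1 = np.array([1, 0]), np.array([0, 1])
plus  = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

assert is_orthonormal([ket0, ket1])
assert is_orthonormal([plus, minus])
assert not is_orthonormal([ket0, plus])   # |0> and |+> are not orthogonal
```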
Euclidean/Cartesian Vectors in 3D

• There is a distinct advantage in “representing” a vector in terms of its


“components”:
• It simplifies various algebraic operations applied to abstract vectors.

*image: https://www.geogebra.org/m/J2Ked4Fg
Matrix Representation of a ket vector
• Any vector |𝑣⟩ ∈ 𝑉 can be written as a linear combination of
vectors in the basis set {|𝑣ᵢ⟩} as:
|𝑣⟩ = 𝑎₁|𝑣₁⟩ + 𝑎₂|𝑣₂⟩ + ⋯ + 𝑎ₙ|𝑣ₙ⟩
• Here the expansion coefficients 𝑎₁, 𝑎₂, … , 𝑎ₙ are in general complex
numbers.
• If the basis is an orthonormal basis, then these expansion
coefficients 𝑎ᵢ are given by
𝑎ᵢ = ⟨𝑣ᵢ|𝑣⟩
• Now if we arrange these expansion coefficients in the form of a
column matrix, we get the matrix representation of the ket |𝑣⟩:
|𝑣⟩ ≡ (𝑎₁, 𝑎₂, …, 𝑎ₙ)ᵀ = (⟨𝑣₁|𝑣⟩, ⟨𝑣₂|𝑣⟩, …, ⟨𝑣ₙ|𝑣⟩)ᵀ
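The recipe 𝑎ᵢ = ⟨𝑣ᵢ|𝑣⟩ is easy to try out numerically, here for the {|+⟩, |−⟩} basis (a numpy sketch, variable names my own):

```python
import numpy as np

plus  = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)
basis = [plus, minus]

v = np.array([3, 5])

# a_i = <v_i|v> : the column-matrix representation of |v> in this basis
a = np.array([np.vdot(b, v) for b in basis])

# Reconstruct |v> = sum_i a_i |v_i>
reconstructed = sum(ai * bi for ai, bi in zip(a, basis))
assert np.allclose(reconstructed, v)
```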
Dual space/vector
• The concept of dual space and dual vectors will be introduced using
the matrix space 𝐶ⁿ as an example, where each element is an 𝑛 × 1
column matrix of the form:
|𝑣⟩ = (𝛼₁, 𝛼₂, …, 𝛼ₙ)ᵀ
• Now let us take the Hermitian conjugate (complex conjugate
transpose) of the above vector:
|𝑣⟩† ≡ ((𝛼₁, 𝛼₂, …, 𝛼ₙ)ᵀ)† = (𝛼₁* 𝛼₂* … 𝛼ₙ*)
• The row matrix on the RHS does not belong to the vector space 𝑉,
but belongs to the vector space consisting of all 1 × 𝑛 row vectors.
• We call this vector space, formed by taking the Hermitian conjugate
of column vectors, the dual vector space 𝑉*, and its elements dual
vectors.
Bra-vectors
• In the Dirac notation, dual vectors are denoted using the bra ⟨. | symbol:
⟨𝑣| ≡ 𝑣 †
• Thus if we have a vector space 𝑉 = { 𝑣 , 𝑢 , 𝑤 , … }, we will have a
corresponding dual space 𝑉 ∗ = {⟨𝑣|, ⟨𝑢|, ⟨𝑤|, … } consisting of dual
vectors.
• The dual vectors ⟨𝑣|, ⟨𝑢|, ⟨𝑤|, … are also called the bra vectors.
• The bra ⟨𝑣| is called the dual of ket |𝑣⟩.
• The concept of dual vector space can be extended to any vector space.
• If a ket |𝑣⟩ has a matrix representation, then the matrix representation
of ⟨𝑣| is obtained by taking the Hermitian conjugate of the column matrix
for |𝑣⟩:
|𝑣⟩ ≡ (𝑎₁, 𝑎₂, …, 𝑎ₙ)ᵀ,   ⟨𝑣| ≡ ((𝑎₁, 𝑎₂, …, 𝑎ₙ)ᵀ)† = (𝑎₁* 𝑎₂* … 𝑎ₙ*)

• Kets are represented by column matrices and bras by row matrices.


2. Consider the following two kets:

a) Find the corresponding bra vectors ⟨𝜓| and ⟨𝜙|.


b) Evaluate the scalar products: ⟨𝜙|𝜙⟩ and ⟨𝜓|𝜓⟩.
c) Evaluate the norm of the given kets.
d) Evaluate the scalar product ⟨𝜙|𝜓⟩.
e) Are the given kets orthonormal?
f) Normalize the kets |𝜓⟩ and |𝜙⟩.
5. Give the matrix representation of the following vectors with
respect to the orthonormal {|0⟩, |1⟩} basis:
|𝒚⟩ = (1/√2)(|𝟎⟩ + 𝒊|𝟏⟩) ;  |𝒛⟩ = (1/√2)(|𝟎⟩ − 𝒊|𝟏⟩)
Operators
• A linear operator on a vector space 𝑉 is a linear
transformation: 𝑉 → 𝑉 of the vector space to itself
• For a vector space 𝑉 = {|𝑣⟩, |𝑤⟩, …}, a linear operator 𝐴 is
defined by
𝐴|𝑣⟩ = |𝑢⟩ ∈ 𝑉
𝐴(𝑎₁|𝑣⟩ + 𝑎₂|𝑤⟩) = 𝑎₁𝐴|𝑣⟩ + 𝑎₂𝐴|𝑤⟩
• Linear operators can also act on bra vectors. If 𝑉* = {⟨𝑣|, ⟨𝑤|, …},
⟨𝑣|𝐴 = ⟨𝑡| ∈ 𝑉*
(𝑎₁⟨𝑣| + 𝑎₂⟨𝑤|)𝐴 = 𝑎₁⟨𝑣|𝐴 + 𝑎₂⟨𝑤|𝐴
• The identity operator 𝐼 is defined by the equation
𝐼|𝑣⟩ ≡ |𝑣⟩ for all vectors |𝑣⟩.
• The zero operator maps all vectors to the zero vector,
0|𝑣⟩ ≡ 0.
Operators
• If 𝐴 and 𝐵 are two operators defined on 𝑉, then their product
𝐴𝐵 is defined as
(𝐴𝐵)|𝑣⟩ = 𝐴(𝐵|𝑣⟩) .
• Similarly, the product 𝐵𝐴 is defined as
(𝐵𝐴)|𝑣⟩ = 𝐵(𝐴|𝑣⟩) .
• In general,
𝐴𝐵 ≠ 𝐵𝐴.
• I.e. operator products are not commutative.
Outer Products
• The inner product ⟨𝑤|𝑣⟩ of two vectors |𝑣⟩ and |𝑤⟩ is
obtained by multiplying |𝑣⟩ on the left by the dual vector ⟨𝑤|.
• Similarly, we can define an outer product obtained by
multiplying |𝑣⟩ on the right by ⟨𝑤|.
• The meaning of such an outer product |𝑣⟩⟨𝑤| is that it is an
operator which, when applied to another vector |𝑢⟩, acts as
follows:
(|𝑣⟩⟨𝑤|)|𝑢⟩ = |𝑣⟩⟨𝑤|𝑢⟩ = ⟨𝑤|𝑢⟩|𝑣⟩ = 𝑐|𝑣⟩,
• Here 𝑐 = ⟨𝑤|𝑢⟩ is a complex number.
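The action (|𝑣⟩⟨𝑤|)|𝑢⟩ = ⟨𝑤|𝑢⟩|𝑣⟩ can be checked directly in numpy (sample vectors arbitrary; note that `np.outer` does not conjugate, so the bra must be conjugated by hand):

```python
import numpy as np

v = np.array([1, 2 + 0j])
w = np.array([1j, 1 + 0j])
u = np.array([3, 1j])

# |v><w| : conjugate the bra explicitly, since np.outer does not
op = np.outer(v, np.conj(w))

# (|v><w|)|u> should equal <w|u> |v>
assert np.allclose(op @ u, np.vdot(w, u) * v)
```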
Completeness Relation
• Let {|𝑣ᵢ⟩} be an orthonormal basis for the vector space 𝑉, so that
|𝑣⟩ = Σᵢ₌₁ⁿ 𝑎ᵢ|𝑣ᵢ⟩
• Recall that ⟨𝑣ᵢ|𝑣⟩ = 𝑎ᵢ and therefore
(Σᵢ₌₁ⁿ |𝑣ᵢ⟩⟨𝑣ᵢ|) |𝑣⟩ = Σᵢ₌₁ⁿ |𝑣ᵢ⟩⟨𝑣ᵢ|𝑣⟩ = Σᵢ₌₁ⁿ 𝑎ᵢ|𝑣ᵢ⟩ = |𝑣⟩
• Since the last equation is true for all |𝑣⟩, it follows that
Σᵢ₌₁ⁿ |𝑣ᵢ⟩⟨𝑣ᵢ| = 𝐼 .
• This equation is known as the completeness relation.
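The completeness relation can be verified for the {|+⟩, |−⟩} basis by summing the outer products |𝑣ᵢ⟩⟨𝑣ᵢ| (a numpy sketch):

```python
import numpy as np

plus  = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

# sum_i |v_i><v_i| over an orthonormal basis must equal the identity
resolution = sum(np.outer(b, np.conj(b)) for b in (plus, minus))
assert np.allclose(resolution, np.eye(2))
```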
Products involving a bra, an operator and a ket

• In vector spaces, we also encounter products of the


form
⟨𝑢|𝐴|𝑣⟩
• The above represents an inner product between |𝑢⟩
and 𝐴|𝑣⟩.
Matrix representation of operators
• Any |𝑣⟩ ∈ 𝑉 can be expressed in terms of orthonormal basis vectors {|𝑣ᵢ⟩} as
|𝑣⟩ = 𝑎₁|𝑣₁⟩ + 𝑎₂|𝑣₂⟩ + ⋯ + 𝑎ₙ|𝑣ₙ⟩
• The action of a linear operator 𝐴 on |𝑣⟩ is
𝐴|𝑣⟩ = 𝑎₁𝐴|𝑣₁⟩ + 𝑎₂𝐴|𝑣₂⟩ + ⋯ + 𝑎ₙ𝐴|𝑣ₙ⟩
• Therefore, the action of a linear operator 𝐴 on any vector |𝑣⟩ can be
determined once the action of 𝐴 on the basis vectors |𝑣ᵢ⟩ is specified.
• Now suppose
𝐴|𝑣ᵢ⟩ = |𝑤ᵢ⟩, where the |𝑤ᵢ⟩ are kets in 𝑉.
• Since |𝑤ᵢ⟩ ∈ 𝑉, each |𝑤ᵢ⟩ can be expanded in the basis {|𝑣ᵢ⟩, 𝑖 = 1, …, 𝑛}. Hence
𝐴|𝑣₁⟩ = |𝑤₁⟩ = 𝐴₁₁|𝑣₁⟩ + 𝐴₂₁|𝑣₂⟩ + ⋯ + 𝐴ₙ₁|𝑣ₙ⟩
𝐴|𝑣₂⟩ = |𝑤₂⟩ = 𝐴₁₂|𝑣₁⟩ + 𝐴₂₂|𝑣₂⟩ + ⋯ + 𝐴ₙ₂|𝑣ₙ⟩
⋮
𝐴|𝑣ₙ⟩ = |𝑤ₙ⟩ = 𝐴₁ₙ|𝑣₁⟩ + 𝐴₂ₙ|𝑣₂⟩ + ⋯ + 𝐴ₙₙ|𝑣ₙ⟩
• Or in a compact notation,
𝐴|𝑣ⱼ⟩ = Σᵢ₌₁ⁿ 𝐴ᵢⱼ|𝑣ᵢ⟩
Matrix representation of operators
𝐴|𝑣ⱼ⟩ = Σᵢ₌₁ⁿ 𝐴ᵢⱼ|𝑣ᵢ⟩
• The matrix whose entries are the values 𝐴ᵢⱼ is said to form a matrix
representation of the operator 𝐴:
𝐴 ≡ [ 𝐴₁₁ 𝐴₁₂ … 𝐴₁ₙ ; 𝐴₂₁ 𝐴₂₂ … 𝐴₂ₙ ; ⋮ ; 𝐴ₙ₁ 𝐴ₙ₂ … 𝐴ₙₙ ]
• This matrix representation of 𝐴 is completely equivalent to the operator 𝐴,
and we will use the matrix representation and abstract operator
viewpoints interchangeably.
• If 𝐴 is an operator defined on a vector space of dimension 𝑛, then the
matrix representation of 𝐴 will be a square matrix of dimension 𝑛 × 𝑛.
• The 𝐴ᵢⱼ are called the matrix elements of the operator 𝐴 in the {|𝑣ᵢ⟩} basis.
• It can easily be proved that
𝐴ᵢⱼ = ⟨𝑣ᵢ|𝐴|𝑣ⱼ⟩
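The formula 𝐴ᵢⱼ = ⟨𝑣ᵢ|𝐴|𝑣ⱼ⟩ can be exercised numerically; the sketch below uses the Pauli-X flip purely as an illustrative operator and computes its matrix elements in the {|+⟩, |−⟩} basis:

```python
import numpy as np

# Operator A given as a matrix in the computational basis
A = np.array([[0, 1], [1, 0]])   # Pauli-X, chosen only for illustration

plus  = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)
basis = [plus, minus]

# A_ij = <v_i|A|v_j> in the {|+>, |->} basis
A_pm = np.array([[np.vdot(vi, A @ vj) for vj in basis] for vi in basis])

# X|+> = |+> and X|-> = -|->, so X is diagonal in this basis
assert np.allclose(A_pm, np.diag([1, -1]))
```

The same operator thus has different matrix representations in different bases, which is exactly the point of the formula.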
The Pauli matrices
• The Pauli - 𝑋, 𝑌 & 𝑍 are some of the important operators
(gates) used in quantum computing and quantum
information.
• These operators can be expressed in matrix form and are
called Pauli matrices.
• These are traceless 2 × 2 matrices, which go by a variety of
notations as given below:
𝜎₁ ≡ 𝜎ₓ ≡ 𝑋 ≡ [ 0 1 ; 1 0 ] ; 𝜎₂ ≡ 𝜎ᵧ ≡ 𝑌 ≡ [ 0 −𝑖 ; 𝑖 0 ] ; 𝜎₃ ≡ 𝜎_𝑧 ≡ 𝑍 ≡ [ 1 0 ; 0 −1 ]
• Sometimes 𝜎₀ ≡ 𝐼 ≡ [ 1 0 ; 0 1 ] is also included as the fourth
Pauli matrix.
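The stated properties of 𝑋, 𝑌 and 𝑍 (traceless, Hermitian, unitary) can be checked in a few lines of numpy:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

for P in (X, Y, Z):
    assert np.isclose(np.trace(P), 0)        # traceless
    assert np.allclose(P, P.conj().T)        # Hermitian
    assert np.allclose(P @ P.conj().T, I)    # unitary
    assert np.allclose(P @ P, I)             # each squares to the identity
```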
Eigenvectors and eigenvalues

• An eigenvector of a linear operator 𝐴 on a vector


space is a non-zero vector |𝑣⟩ such that
𝐴|𝑣⟩ = 𝑣|𝑣⟩,
• where 𝑣 is a complex number known as the
eigenvalue of 𝐴 corresponding to |𝑣⟩.
Adjoint operator
• Suppose 𝐴 and 𝐴† are linear operators on a Hilbert space, 𝑉.
If
⟨𝑤|𝐴†|𝑣⟩ = ⟨𝑣|𝐴|𝑤⟩*
for all vectors 𝑣 , 𝑤 ∈ 𝑉,
• Then 𝐴† is called the adjoint or Hermitian conjugate of the
operator 𝐴.
• From the definition it is easy to see that (𝐴𝐵)† = 𝐵†𝐴†.
• By convention, if |𝑣⟩ is a vector, then we define: (|𝑣⟩)† ≡ ⟨𝑣|.
• With this definition it is not difficult to see that
(𝐴 𝑣 )† ≡ ⟨𝑣|𝐴†
• The matrix for 𝐴† is the complex conjugate transpose (also
called ‘Hermitian conjugate’, or ‘adjoint’) of the matrix for 𝐴.
Hermitian Operators
• An operator 𝐴 is called Hermitian (or self-adjoint) if
𝐴† = 𝐴
• i.e. it is equal to its own adjoint.
• It can be proved that the eigenvalues of a Hermitian
operator are real and that its eigenvectors form a
complete orthonormal basis.
• In quantum mechanics, Hermitian operators represent
physical observables like position, momentum, energy,
angular momentum, etc.
• Hermitian operators are represented by Hermitian
matrices.
Unitary Operators
• An operator 𝑈 is said to be unitary if
𝑈𝑈 † = 𝑈 † 𝑈 = 𝐼
• The time-evolution of the quantum states of closed
systems is described by a unitary operator.
• The unitary operators preserve inner products between
vectors, and in particular, preserve the norm of vectors.
• In quantum computation, quantum gates have to be
unitary.
• Unitary operators are represented by unitary matrices.
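As an illustration, the Hadamard gate (a standard unitary in quantum computing, chosen here only as an example) can be checked for unitarity and norm preservation:

```python
import numpy as np

# The Hadamard gate H = (1/sqrt 2) [[1, 1], [1, -1]]
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

assert np.allclose(H @ H.conj().T, np.eye(2))   # U U^dagger = I
assert np.allclose(H.conj().T @ H, np.eye(2))   # U^dagger U = I

# Unitaries preserve inner products, and hence norms
psi = np.array([3, 4j])
assert np.isclose(np.linalg.norm(H @ psi), np.linalg.norm(psi))
```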
Normal Operators
• A linear operator 𝐴 is said to be a normal operator if
𝐴𝐴† = 𝐴† 𝐴
• Both unitary and Hermitian operators are normal.
• So most of the operators that are important for quantum
mechanics, and quantum computing are normal.
• Any normal operator 𝐴 can be written as
𝐴 = Σᵢ₌₁ⁿ 𝛼ᵢ|𝛼ᵢ⟩⟨𝛼ᵢ| ,
• where the 𝛼ᵢ are the eigenvalues corresponding to the eigenvectors |𝛼ᵢ⟩.
• We sometimes refer to 𝐴 written in its own eigenbasis as the
spectral decomposition of 𝐴.
• The set of eigenvalues of 𝐴 is called the spectrum of 𝐴.
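The spectral decomposition can be reconstructed numerically; the sketch below uses Pauli-Y (Hermitian, hence normal) and numpy's `eigh`, which returns real eigenvalues and orthonormal eigenvectors of a Hermitian matrix:

```python
import numpy as np

A = np.array([[0, -1j], [1j, 0]])   # Pauli-Y, used here as an example

# eigh: eigenvalues and orthonormal eigenvectors (as columns)
vals, vecs = np.linalg.eigh(A)

# Rebuild A = sum_i alpha_i |alpha_i><alpha_i|
rebuilt = sum(a * np.outer(vecs[:, i], np.conj(vecs[:, i]))
              for i, a in enumerate(vals))
assert np.allclose(rebuilt, A)
```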
The Postulates of Quantum Mechanics
• Postulate 1:
• The state of any physical system is specified, at each time 𝑡, by
a state vector |𝜓(𝑡)⟩ in a Hilbert space; |𝜓⟩ contains (and
serves as the basis to extract) all the needed information
about the system.
• Any superposition of state vectors is also a state vector.
• The simplest quantum mechanical system, and the system
which we will be most concerned with, is the qubit.
• A qubit has a two-dimensional state space.
• Suppose |0⟩ and |1⟩ form an orthonormal basis for that state
space. Then an arbitrary state vector in the state space can be
written
|𝜓⟩ = 𝑎|0⟩ + 𝑏|1⟩
• 𝑎 and 𝑏 are complex numbers.
Postulate 2: Observables and operators

• To every physically measurable quantity 𝐴, called an


observable or dynamical variable, there corresponds
a linear Hermitian operator 𝐴 whose eigenvectors
form a complete basis.
Postulate 3: Measurements and eigenvalues of operators

• Postulate 3, gives insights on measurements and results of


measurements on quantum systems.
• Suppose a quantum system is in a state |𝜓⟩ and one tries to
measure an observable 𝐴 on this state.
• The only possible result of such a measurement is one of
the eigenvalues 𝑎𝑖 (which are real) of the operator 𝐴.
• Here the operator 𝐴 is required to satisfy the eigenvalue
equation:
𝐴|𝑎ᵢ⟩ = 𝑎ᵢ|𝑎ᵢ⟩ (𝑖 = 1, 2, …)
• If the result of a measurement of 𝐴 on a state |𝜓⟩ is 𝑎𝑖 , the
state of the system immediately after the measurement
changes to |𝑎𝑖 ⟩.
• This is called the collapse of the state vector (wave
function).
Postulate 4: Probabilistic outcome of measurements

• When measuring an observable 𝐴 of a system in a


state |𝜓⟩, the probability 𝑃𝑖 of obtaining one of the
eigenvalues 𝑎𝑖 of the corresponding operator 𝐴 is
given by
𝑃ᵢ(𝑎ᵢ) = |⟨𝑎ᵢ|𝜓⟩|² / ⟨𝜓|𝜓⟩ ,
• where |𝑎𝑖 ⟩ is the eigenstate of 𝐴 with eigenvalue 𝑎𝑖 .
• This can also be interpreted as the probability to
obtain the state |𝑎𝑖 ⟩ under measurement.
Postulate 4: Probabilistic outcome of measurements

• The probability can also be defined as follows.


• We know that the eigenstates of a Hermitian operator form
an orthonormal basis set. Hence, we can expand the given
state |𝜓⟩ using the eigenvectors {|𝑎ᵢ⟩} of 𝐴 as:
|𝜓⟩ = Σᵢ₌₁ⁿ 𝑐ᵢ|𝑎ᵢ⟩
• Then the probability for obtaining the eigenstate |𝑎ᵢ⟩ under
measurement is
𝑃ᵢ = |𝑐ᵢ|²
• Note: Whenever we are doing a measurement, we should
always specify a basis in which we are performing the
measurement.
Postulate 4: Probabilistic outcome of measurements

• So, for example, consider a qubit in the state


|𝜓⟩ = (1/√2)|0⟩ + (1/√2)|1⟩
• Suppose we measure the qubit in the {|0⟩, |1⟩} basis.
• Then the probability of getting the state |0⟩ is
𝑃(0) = |1/√2|² = 1/2
• The probability of getting the state |1⟩ is
𝑃(1) = |1/√2|² = 1/2 .
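The Born-rule probabilities for this equal-superposition state can be computed directly (a numpy sketch; the helper name `prob` is mine):

```python
import numpy as np

ket0, ket1 = np.array([1, 0]), np.array([0, 1])
psi = (ket0 + ket1) / np.sqrt(2)

def prob(eigenstate, state):
    # Born rule: P(a_i) = |<a_i|psi>|^2 / <psi|psi>
    return abs(np.vdot(eigenstate, state)) ** 2 / np.vdot(state, state).real

p0, p1 = prob(ket0, psi), prob(ket1, psi)
assert np.isclose(p0, 0.5) and np.isclose(p1, 0.5)
assert np.isclose(p0 + p1, 1.0)   # probabilities sum to one
```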
Postulate 5: Evolution of Quantum Systems
• Evolution
• How does the state, 𝜓 , of a quantum mechanical
system change with time? The following postulate
gives a prescription for the description of such state
changes.
• The evolution of a closed quantum system is described
by a unitary transformation.
• That is, the state |𝜓⟩ of the system at time 𝑡₁ is related
to the state |𝜓′⟩ of the system at time 𝑡₂ by a unitary
operator 𝑈 which depends only on the times 𝑡₁ and 𝑡₂
(and not on the state |𝜓⟩),
|𝜓′⟩ = 𝑈(𝑡₁, 𝑡₂)|𝜓⟩ .
Postulate 5’: Evolution of Quantum Systems
• The time evolution of the state of a closed quantum
system is described by the time-dependent
Schrodinger equation,
𝑖ℏ d|𝜓⟩/d𝑡 = ℋ|𝜓⟩ .
• ℏ is a physical constant known as the (reduced) Planck
constant.
• ℋ is a Hermitian operator known as the
Hamiltonian of the closed system.

• Postulate 5 and 5’ are equivalent.


More on Measurement in Quantum Mechanics
• In classical physics it is possible to perform measurements on
a system without disturbing it significantly.
• In quantum mechanics, however, the measurement process
perturbs the system significantly.
• While carrying out measurements on classical systems, this
perturbation does exist, but it is small enough that it can be
neglected.
• In atomic and subatomic systems, however, the act of
measurement induces non-negligible or significant
disturbances.
More on Measurement in Quantum Mechanics
• Consider a system which is in a state |𝜓⟩.
• Before measuring an observable 𝐴, the state |𝜓⟩ can be
represented by a linear superposition of eigenstates |𝑎𝑖 ⟩ of
the operator 𝐴:
|𝜓⟩ = Σᵢ₌₁ⁿ 𝑐ᵢ|𝑎ᵢ⟩
• According to Postulate 4, the act of measuring 𝐴 changes the
state of the system from |𝜓⟩ to one of the eigenstates |𝑎𝑖 ⟩ of
the operator 𝐴, and the result obtained is the eigenvalue 𝑎ᵢ
with probability 𝑃ᵢ = |𝑐ᵢ|².
• The only exception to this rule is when the system is already in
one of the eigenstates of the observable being measured.
More on Measurement in Quantum Mechanics
• For instance, if the system is in the eigenstate |𝑎𝑖 ⟩ of operator
𝐴, a measurement of the observable 𝐴 yields with certainty
(i.e., with probability = 1) the value 𝑎𝑖 without changing the
state |𝑎𝑖 ⟩.
• Before a measurement, we do not know in advance with
certainty in which eigenstate, among the various states |𝑎𝑖 ⟩, a
system will be after the measurement; only a probabilistic
outcome is possible.
• The quantum wave function does not predict the results of
individual measurements.
• It instead determines the probability distribution, 𝑃 ∝ |𝜓|²,
over measurements on many identical systems in the same
state.
Global Phase of State Vectors
• Consider a qubit in a state
|𝜓⟩ = 𝑎|0⟩ + 𝑏|1⟩
• If we measure this qubit in the computational basis, the
probability of getting the result |0⟩ is |𝑎|² and the probability
of getting the result |1⟩ is |𝑏|².
• Now consider a state |𝜓′⟩, which differs from the state |𝜓⟩
only by a multiplicative factor of unit modulus:
|𝜓′⟩ = 𝑒^(𝑖𝜃)|𝜓⟩ = 𝑎𝑒^(𝑖𝜃)|0⟩ + 𝑏𝑒^(𝑖𝜃)|1⟩
• If we measure this qubit in the {|0⟩, |1⟩} basis:
• the probability of getting the result |0⟩ is |𝑒^(𝑖𝜃)𝑎|² = |𝑎|²
• the probability of getting the result |1⟩ is |𝑒^(𝑖𝜃)𝑏|² = |𝑏|².
Global Phase of State Vectors
• Thus we see that even though |𝜓⟩ and |𝜓′⟩ = 𝑒^(𝑖𝜃)|𝜓⟩ are
mathematically different vectors, no measurement can distinguish
the two vectors |𝜓⟩ and |𝜓′⟩: both represent the same quantum
(physical) state. i.e.
|𝜓⟩ ≡ |𝜓′⟩
• This is an important point so let's state it again in a different way:
• No measurement (using any basis) can distinguish between |𝜓⟩ and
|𝜓′⟩ = 𝑒^(𝑖𝜃)|𝜓⟩.
• Therefore, from an observational point of view these two states are
identical.
• The statistics of any measurements we could perform on the state
𝑒^(𝑖𝜃)|𝜓⟩ are exactly the same as they would be for the state |𝜓⟩.
Global Phase of State Vectors
• In the above example, we say |𝜓⟩ and |𝜓′⟩ = 𝑒^(𝑖𝜃)|𝜓⟩ are
globally phase-equivalent.
• We call the 𝑒^(𝑖𝜃) term the global phase factor.
• In other words, we say that the two states 𝜓 and 𝜓′ differ
only by a global phase factor.
• In summary: If any two ket vectors (states) are globally phase
equivalent (or differ only by a multiplicative factor of unit
modulus), from an observational point of view these two
states are identical and represent the same quantum state.
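The phase-equivalence claim is easy to confirm numerically: multiplying a state by 𝑒^(𝑖𝜃) leaves every computational-basis probability unchanged (a numpy sketch; the amplitudes and angle are arbitrary choices of mine):

```python
import numpy as np

a, b = 0.6, 0.8j                             # |a|^2 + |b|^2 = 1
psi = np.array([a, b])                       # |psi> = a|0> + b|1>

theta = 1.234                                # arbitrary phase angle
psi_prime = np.exp(1j * theta) * psi         # |psi'> = e^{i theta}|psi>

# Computational-basis probabilities are identical for the two states
assert np.isclose(abs(psi_prime[0])**2, abs(psi[0])**2)
assert np.isclose(abs(psi_prime[1])**2, abs(psi[1])**2)
```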
