Chapter 1: Arina’s First Lecture

034_Arkananta Rasendriya

The story before this chapter is actually the zeroth chapter: like the zeroth law of thermodynamics, it is important but was found later than the first one. Arina's journey through her seventh semester will be written on this Medium as her diary.

Monday, August 21st 2023

It was the first day of Arina's seventh semester. As a final-year student, Arina only takes a small number of credits, even fewer than her friends' because of her course-sniping project in the previous semesters.

One of the lectures that Arina took on her first day of the seventh semester was a supplementary course from outside her department. The course is held by a study program located in one of the Labtek-Labtek Kembar (Labtek V-VIII; Labtek, or Laboratorium Teknologi, means Technology Laboratory in English) of Institut Teknologi Bandung. She attended her first class cheerfully, until she realized that…

… the class was closed due to a lack of students.

Arina deliberately walked back to the office of the study program that had closed the class. She asked the staff, and the staff explained that the course would not be opened because only two students had chosen it. Arina then thanked the staff and went back to the physics building.

On her way to the physics building, she saw rows of tables and chairs along the Labtek V lobby. Arina impulsively remembered a note that she had made during her Computational Physics class; the tables and chairs reminded her a lot of matrices. Arina then took a seat and started to write in her notebook on that sunny and windy day.

Tuesday, February 14th 2023

FI3202: Fiskom

(Fiskom is an abbreviation of “Fisika Komputasi”, which means “Computational Physics” in English)

A matrix is a rectangular array of elements, which can be numbers or other mathematical expressions. An m × n matrix M has m rows and n columns. The matrix element located in the i-th row and j-th column is usually denoted aᵢⱼ.

There are several kinds of special matrices, such as the zero matrix, the identity matrix, the square matrix, the upper triangular matrix, and the lower triangular matrix; the most important are the first three.

  • Zero matrix
    A zero matrix is a matrix whose elements are all zero. Zero matrices are usually used as the starting construction of any matrix in programming languages because of their simplicity (see the numpy sketch after the construction example below).
  • Identity matrix
    The identity matrix I is a matrix such that if any matrix M is multiplied by I, the result is M. All of its diagonal components are one and the rest are zero.
  • Square matrix
    A matrix M is a square matrix if it has the same number of rows and columns.

A matrix can be constructed in Python with np.array() as in the source code written below. np is a common alias for numpy, one of Python's extension packages. Before constructing a matrix like this, numpy must be installed on your computer. Information on installing numpy can be found on this page: NumPy — Installing NumPy.

import numpy as np
A = np.array([[2,1],[1,7]])

#this is an example of a 2x2 matrix
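The zero and identity matrices described in the list above can also be constructed directly with numpy helpers. This is a minimal sketch (the 2 × 2 sizes are just for illustration):

import numpy as np

Z = np.zeros((2,2)) #2x2 zero matrix, often used to initialize a result matrix
I = np.eye(2) #2x2 identity matrix

A = np.array([[2,1],[1,7]])
print(np.allclose(A @ I, A)) #multiplying by I returns A, as expected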

Matrix Operations

Here are some basic matrix operations: addition, subtraction, and multiplication.

  • Addition and Subtraction

Consider two matrices A and B with the same dimensions. If the addition of matrix A and matrix B results in a new matrix C, then the components of matrix C are

cᵢⱼ = aᵢⱼ + bᵢⱼ   (1)

where a denotes the components of matrix A and b those of matrix B. Identical to addition but with the opposite sign, if the subtraction of matrix B from matrix A produces a new matrix D, then the components of matrix D are

dᵢⱼ = aᵢⱼ − bᵢⱼ   (2)

The Python source code for matrix addition and subtraction is written below.

#for the following codes, the matrices are assumed to have been constructed already
#m is the number of rows and n is the number of columns

for i in range(m):
    for j in range(n):
        C[i][j] = A[i][j] + B[i][j] #matrix component addition
        D[i][j] = A[i][j] - B[i][j] #matrix component subtraction
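If the matrices are stored as numpy arrays, the same elementwise addition and subtraction can be written without explicit loops. A minimal sketch, assuming A and B are numpy arrays of the same shape (the values here are only examples):

import numpy as np

A = np.array([[2,1],[1,7]])
B = np.array([[0,3],[4,5]])

C = A + B #elementwise addition
D = A - B #elementwise subtraction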
  • Multiplication

There are numerous forms of matrix multiplication, such as multiplication of a matrix by a scalar, basic matrix multiplication, the dot product, and the cross product.

The first kind of matrix multiplication is multiplication of a matrix by a scalar. Suppose a matrix M (with components m) is multiplied by a scalar λ. The result of the multiplication is a new matrix P whose components are

pᵢⱼ = λ mᵢⱼ   (3)

The Python source code for multiplying a matrix by a scalar is written below.

lam = float(input()) #scalar; note that "lambda" is a reserved keyword in Python
for i in range(m):
    for j in range(n):
        P[i][j] = lam*M[i][j]

If the value of the scalar is less than zero, then the sign of every component of the resulting matrix is flipped.
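For numpy arrays, the same scalar multiplication can be done in one line through broadcasting. A minimal sketch, with an illustrative matrix and scalar:

import numpy as np

M = np.array([[2,1],[1,7]])
lam = -3.0 #example scalar; a negative value flips the sign of every component
P = lam*M #every component of M is multiplied by lam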

Next, suppose two matrices A, with dimension m × p, and B, with dimension p × n. If the multiplication of A by B results in a new matrix C with dimension m × n, then the components of matrix C can be calculated with the following formula

cᵢⱼ = Σₖ aᵢₖ bₖⱼ, with k running from 1 to p   (4)

The Python source code for basic matrix multiplication is written below.

#C_ is assumed to be initialized as an m x n zero matrix
for i in range(m):
    for j in range(n):
        for k in range(p):
            C_[i][j] = C_[i][j] + A[i][k]*B[k][j]
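The triple loop above can be cross-checked against numpy's built-in matrix multiplication. A minimal sketch with small illustrative matrices:

import numpy as np

A = np.array([[1,2,3],[4,5,6]]) #2 x 3 matrix
B = np.array([[7,8],[9,10],[11,12]]) #3 x 2 matrix
m, p = A.shape
n = B.shape[1]

C_ = np.zeros((m,n)) #start from a zero matrix
for i in range(m):
    for j in range(n):
        for k in range(p):
            C_[i][j] = C_[i][j] + A[i][k]*B[k][j]

print(np.allclose(C_, A @ B)) #True; A @ B is the same as np.matmul(A, B)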

The following two kinds of multiplication are usually applied to vectors. An n-dimensional vector is basically a matrix with one row and n columns, or with one column and n rows. In tensorial notation, the two vector configurations mentioned above are usually called contravariant and covariant vectors. For the sake of simplicity, however, the indices written in this article are not distinguished as contravariant or covariant indices.

Suppose A and B are two n-dimensional vectors and apply the dot product to both of them. The dot product of A and B equals the length of the projection of vector A onto vector B multiplied by the length of B. The dot product of the two vectors gives

A · B = Σᵢ aᵢ bᵢ   (5)

The Python source code for the dot product is written below.

#construct the vectors as 1-dimensional arrays of length n

result = 0 #result of the dot product
for i in range(n):
    result = result + A[i]*B[i]
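numpy also provides the dot product directly. A minimal sketch with illustrative 3-dimensional vectors:

import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])

result = np.dot(A, B) #equivalently A @ B
print(result) #1*4 + 2*5 + 3*6 = 32.0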

Cross product multiplication is a product between two vectors whose result is a vector perpendicular to both multiplied vectors. If the result of the cross product of vector A and vector B is a vector C, then the components of vector C are

cᵢ = ϵᵢⱼₖ aⱼ bₖ   (6)

where ϵᵢⱼₖ is the Levi-Civita tensor, defined as

ϵᵢⱼₖ = +1 for an even permutation of (1, 2, 3), −1 for an odd permutation of (1, 2, 3), and 0 if any index is repeated   (7)

The j and k indices in equation (6) are summed over using the Einstein summation convention (Einstein notation — Wikipedia). An even permutation means that the number of swaps needed to rearrange the list is even (such as (123) to (231)), and an odd permutation means that number is odd (such as (123) to (132)).
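The original notes give no source code for the cross product, so here is a minimal sketch that builds the Levi-Civita tensor explicitly, sums over the j and k indices of equation (6), and checks the result against numpy's np.cross (the vectors are illustrative only):

import numpy as np

#Levi-Civita tensor for indices 0, 1, 2
eps = np.zeros((3,3,3))
eps[0,1,2] = eps[1,2,0] = eps[2,0,1] = 1 #even permutations
eps[0,2,1] = eps[2,1,0] = eps[1,0,2] = -1 #odd permutations

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])

C = np.zeros(3)
for i in range(3):
    for j in range(3):
        for k in range(3):
            C[i] = C[i] + eps[i,j,k]*A[j]*B[k] #c_i = eps_ijk a_j b_k

print(C) #[-3. 6. -3.]
print(np.cross(A, B)) #same result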

Hadamard Product

Suppose the Hadamard product of matrix A and matrix B, which have the same dimensions, results in a matrix C with identical dimensions. The components of C in terms of A and B are

cᵢⱼ = aᵢⱼ bᵢⱼ   (8)

Thus, the Python source code for computing the components of C is given below.

for i in range(m):
    for j in range(n):
        C[i][j] = A[i][j]*B[i][j]
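For numpy arrays, the Hadamard product is simply the elementwise * operator. A minimal sketch with illustrative matrices:

import numpy as np

A = np.array([[2,1],[1,7]])
B = np.array([[0,3],[4,5]])

C = A*B #elementwise (Hadamard) product, same as np.multiply(A, B)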

Matrix Inverse and Linear Algebra

Suppose a square matrix A. The inverse of matrix A, A⁻¹, is defined as the matrix that, when multiplied by A, results in the identity matrix. Written in mathematical notation,

A⁻¹A = AA⁻¹ = I   (9)

Before finding a matrix inverse, the matrix must be proven invertible. A square matrix A is invertible if its determinant is not zero, or, written mathematically, det(A) ≠ 0. The determinant of a matrix is generally found by cofactor expansion. Information on matrix determinants and cofactors can be read on this page: 4.2: Cofactor Expansions — Mathematics LibreTexts.

There are numerous methods for finding the inverse of a matrix. In general, a set of linear equations can be constructed and solved from equation (9) by treating all of the components of A⁻¹ as unknowns. However, the inverse of a matrix can be found more easily with the adjoint method. Both methods are described on this page: Inverse Matrix — Definition, Formulas, Steps to Find Inverse Matrix, Examples (byjus.com).
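As an illustration of the adjoint (adjugate) method mentioned above, here is a minimal sketch for the 2 × 2 case, where the cofactor matrix can be written down by hand; the matrix values are illustrative only:

import numpy as np

A = np.array([[2.0, 1.0],[1.0, 7.0]])

det = A[0,0]*A[1,1] - A[0,1]*A[1,0] #determinant, must be nonzero for A to be invertible
adj = np.array([[A[1,1], -A[0,1]],[-A[1,0], A[0,0]]]) #adjugate: transpose of the cofactor matrix

A_inv = adj/det
print(np.allclose(A_inv, np.linalg.inv(A))) #True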

The matrix inverse can be used to find the solution of a system of linear equations. Basically, all of the coefficients in the system can be represented as components of a matrix A, all of the variables can be arranged in a variable vector X, and all of the constants can be arranged in a constant vector B such that

A X = B   (10)

If both sides of equation (10) are multiplied from the left by A⁻¹, the equation transforms into

X = A⁻¹ B   (11)

Thus, the vector X in equation (11) contains all of the solutions of the system of linear equations. An example of solving a system of linear equations can be accessed through this page: How to Solve a System of Equations Using the Inverse of a Matrix — dummies.

Interestingly, Python has powerful tools for finding a matrix inverse and the solutions of a system of linear equations. Suppose there is a square matrix A. Finding the inverse of matrix A in Python can be done with the following source code.

A_inv = np.linalg.inv(A)

Solving a system of linear equations in Python can be done by defining matrices A and B as introduced in equation (10). Then, the components of matrix X, i.e. the solutions, can be obtained with the following source code

X = np.linalg.solve(A,B)

or by multiplying the inverse of matrix A by B, as displayed in equation (11), with the following source code.

X = np.dot(A_inv, B)
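Putting the pieces together, here is a minimal end-to-end sketch with an illustrative 2 × 2 system; np.linalg.solve is generally preferred over explicitly forming the inverse, but both give the same answer here:

import numpy as np

A = np.array([[2.0, 1.0],[1.0, 7.0]]) #coefficient matrix
B = np.array([5.0, 10.0]) #constant vector

X_solve = np.linalg.solve(A, B) #solves A X = B directly
X_inv = np.dot(np.linalg.inv(A), B) #equation (11): X = A^-1 B

print(np.allclose(X_solve, X_inv)) #True
print(np.allclose(A @ X_solve, B)) #the solution indeed satisfies A X = B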

Basic information on linear algebra can be studied from mathematical physics books such as Mathematical Methods in the Physical Sciences by Mary L. Boas; more practical information, especially for computing the solutions of systems of linear equations, can be found in online courses such as this one from the University of California, Berkeley: Chapter 14. Linear Algebra and Systems of Linear Equations — Python Numerical Methods (berkeley.edu).

After finishing her reading, Arina stretched her body and walked to the physics department building. On the way, she remembered that she would continue her progress on making the new Physics Experiment II module; she was working on module 7. But Arina incidentally realized that she had opened the wrong module, because this year's module arrangement is different from last year's. She then opened her report and found this interesting matrix.

Expression of conductivity tensor (source: Phys. Rev. 137, A448 (1965) — Faraday Effect in Solids (aps.org))

As Arina studied in many Photonics and Magnetics Research Group courses, conductivity is a second-order tensor which can be represented in matrix form. The conductivity tensor must be second order since electrons can move through a material in many directions. However, for simplicity, the conductivity tensor is usually reduced to zeroth rank (a scalar) in many electromagnetic field problems.
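As a toy illustration of treating conductivity as a matrix, the current density J can be computed from an electric field E through the matrix-vector product J = σE. The σ values below are made up purely for the example and are not taken from the cited paper:

import numpy as np

#hypothetical conductivity tensor (illustrative values only)
sigma = np.array([[3.0, 0.5, 0.0],[-0.5, 3.0, 0.0],[0.0, 0.0, 2.0]])

E = np.array([1.0, 0.0, 0.0]) #example electric field vector
J = sigma @ E #Ohm's law in tensor form, J = sigma * E
print(J) #[ 3. -0.5 0. ]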

Two weeks passed. Every two weeks, Arina has to make a Quantum Physics problem set for her class. This time, Arina made the third problem set, about the time-independent Schrödinger equation. After reading some books, such as those written by Griffiths and Zettili, she wrote this problem.

This problem uses several matrices, such as an operator matrix and a superposition of eigenstates. Yes, in quantum physics matrices are used everywhere: for operators, superpositions of quantum states, bra-ket notation, and so on.
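As a small illustration of operators written as matrices, here is a minimal sketch using the Pauli spin matrices (related to the spin rotation matrix shown below) acting on a spin-up state; this is only an example, not part of Arina's actual problem set:

import numpy as np

#Pauli spin matrices
sigma_x = np.array([[0, 1],[1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j],[1j, 0]])
sigma_z = np.array([[1, 0],[0, -1]], dtype=complex)

up = np.array([1, 0], dtype=complex) #spin-up eigenstate of sigma_z

print(sigma_z @ up) #returns the same state: eigenvalue +1
print(sigma_x @ up) #flips the spin to the spin-down state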

Pauli’s spin rotation matrix. Source: quantummechanics.ucsd.edu/ph130a/130_notes/node279.html

Arina continued working on the problem set, and because she wanted to focus her mind, she asked the author to continue her story in the next part. See you :D
