This website tracks my progress in self-studying physics. The contents are mostly my notes and solutions to exercises in books.
Contact: dcheng728@{google's email}.com
$$\frac{d}{dt}(\frac{\partial L}{\partial \dot{q}_j}) - \frac{\partial L}{\partial q_j} = Q_j$$
for some generalized force $Q_j$ that can be put into
$$Q_j = - \frac{\partial U}{\partial q_j} + \frac{d}{dt}\left(\frac{\partial U}{\partial \dot{q}_j}\right)$$
Notice the velocity dependence.
A systematic approach for solving (sort of) semiholonomic constraints
which implies
$$\delta \int_{t_0}^{t_1} \left[ L(q_i,\dot{q}_i,t) + \lambda_\alpha f_\alpha \right] dt = 0$$
Then apply the Lagrange equations of motion
$$\frac{d}{dt} \frac{\partial [ L+ \lambda_\alpha f_\alpha ] }{\partial \dot{q}_i} - \frac{\partial [ L + \lambda_\alpha f_\alpha ]}{\partial q_i} = 0$$
to obtain the desired equations of motion. KEEP IN MIND that the total time derivative requires the product rule, because $\lambda$ is time dependent.
$$T \propto m_1 \dot{q}_1^2 + m_2 \dot{q}_2^2$$
$$= \frac{1}{(m_1 + m_2)} \left[ m_1(m_1 + m_2) \dot{q}_1^2 + m_2(m_1 + m_2) \dot{q}_2^2 \right]$$
$$= \frac{1}{(m_1 + m_2)} \left[ m_1^2 \dot{q}_1^2 + m_2^2 \dot{q}_2^2 + m_1m_2 \dot{q}_1^2 + m_1m_2 \dot{q}_2^2 \right]$$
$$= \frac{1}{(m_1 + m_2)} \left[ (m_1 \dot{q}_1 + m_2 \dot{q}_2)^2 -2m_1m_2\dot{q}_1\dot{q}_2 + m_1m_2 \dot{q}_1^2 + m_1m_2 \dot{q}_2^2 \right]$$
$$= \frac{1}{(m_1 + m_2)} \left[ (m_1 \dot{q}_1 + m_2 \dot{q}_2)^2 + m_1m_2(\dot{q}_1 - \dot{q}_2)^2 \right]$$
$$=\frac{1}{m_1+m_2}(p_1 + p_2)^2 + \frac{m_1 m_2}{m_1 + m_2}(\dot{q}_1-\dot{q}_2)^2$$
This identifies a center-of-mass term and a reduced-mass term.
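A minimal symbolic check of the rearrangement above (sympy assumed available; `v1`, `v2` stand for $\dot{q}_1, \dot{q}_2$):

```python
# Verify that m1*v1^2 + m2*v2^2 equals the center-of-mass term plus the
# reduced-mass term, all divided by (m1 + m2).
import sympy as sp

m1, m2, v1, v2 = sp.symbols('m1 m2 v1 v2', positive=True)

lhs = m1*v1**2 + m2*v2**2
rhs = ((m1*v1 + m2*v2)**2 + m1*m2*(v1 - v2)**2) / (m1 + m2)

print(sp.simplify(lhs - rhs))  # -> 0, confirming the identity
```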
$f \rightarrow f' = f + \frac{l^2}{mr^3} \Leftrightarrow V \rightarrow V' = V + \frac{l^2}{2mr^2}$
$$\sigma(\Omega)\, d\Omega = \frac{\text{\# particles scattered into solid angle } d\Omega \text{ per unit time}}{\text{incident intensity}}$$ with $d\Omega = 2\pi \sin \theta\, d \theta$
In general, the Hamiltonian formulation is not any simpler; it can be simplified for special cases that overlap with physical interest. $$H = \dot{q}_i p_i - L$$ $$p_i = \frac{\partial L}{\partial \dot{q}_i}$$
A cyclic coordinate $q_j$ does not appear in the Lagrangian as a non-derivative term; it gives a conserved momentum:
$$p_j = \frac{\partial L }{\partial \dot{q}_j}$$
$$\dot{p}_j = \frac{\partial L }{\partial q_j} = 0$$
Apply a canonical transformation defined by $p \rightarrow P$, $q \rightarrow Q$, $H \rightarrow K$. Then Hamilton's principle takes the form
$$\delta \int [P_i \dot{Q}_i - K(Q,P,t)]dt = 0$$
$$\delta \int [p_i \dot{q}_i - H(q,p,t)]dt = 0$$
The two integrands are then related by
$$\lambda (p_i \dot{q}_i - H) = P_i \dot{Q}_i - K + \frac{d F}{dt}$$
for $F$ a function of a mix of the old and new phase space variables, called the generating function.
If $\lambda = 1$, the transformation is called a canonical transformation. If $\lambda \neq 1$, it is called an extended canonical transformation.
Table 9.1 gives the relations between the canonical transformations and generating functions.
Let $Q_i = Q_i(q,p)$, then
$$\dot{Q}_i = \frac{\partial Q_i}{\partial q_j} \dot{q}_j + \frac{\partial Q_i}{\partial p_j} \dot{p}_j$$
when combined with Hamilton's equations of motion, this becomes
$$\dot{Q}_i = \frac{\partial Q_i}{\partial q_j} \frac{\partial H}{\partial p_j} - \frac{\partial Q_i}{\partial p_j} \frac{\partial H}{\partial q_j}$$
This is written compactly in matrix form:
$$\dot{\vec{\eta}} = \vec{J} \frac{\partial H}{\partial \vec{\eta}}$$
where $\vec{J}$ is the $2n \times 2n$ matrix
$$\left[\begin{array}{rr} 0 & I \\ -I & 0 \end{array}\right]$$
and $\vec{\eta} = (q_1, \dots, q_n, p_1, \dots, p_n)$
As an illustration
$$ \left[\begin{array}{c} \dot{q}_1 \\ \dot{q}_2 \\ \dot{p}_1 \\ \dot{p}_2 \end{array}\right]
=
\left[\begin{array}{rrrr} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{array}\right]
\left[\begin{array}{c} \partial_{q_1} H \\ \partial_{q_2} H \\ \partial_{p_1} H \\ \partial_{p_2} H \end{array}\right] $$
Now take a restricted transformation $\vec{\zeta} = \vec{\zeta}(\vec{\eta})$. Then $\dot{\vec{\zeta}} = \vec{M} \dot{\vec{\eta}}$ where $\vec{M}$ is the Jacobian. Together with Hamilton's equations of motion, we have
$$\dot{\vec{\zeta}} = \vec{M} \vec{J} \frac{\partial{H}}{\partial \vec{\eta}}$$
But $\frac{\partial H}{\partial \vec{\eta}} = \tilde{M} \frac{\partial H}{\partial \vec{\zeta}}$, so Hamilton's equations for $\zeta$ read
$$\dot{\zeta} = M J \tilde{M} \frac{\partial H}{\partial \zeta}$$
as opposed to $$\dot{\eta} = J \frac{\partial H}{\partial \eta}$$
But we also must have
$$\dot{\zeta} = J \frac{\partial H}{\partial \zeta}$$
This implies $$MJ\tilde{M} = J$$
note: for an extended canonical transformation, it is $MJ\tilde{M} = \lambda J$
This is called the symplectic condition; $M$ is called a symplectic matrix.
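A minimal numerical illustration of the condition (my own sketch), using the exchange transformation $Q = p$, $P = -q$ as a standard example of a canonical transformation:

```python
import numpy as np

# J for one degree of freedom, ordering (q, p)
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# Jacobian M of (Q, P) = (p, -q) with respect to (q, p)
M = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

print(np.allclose(M @ J @ M.T, J))  # -> True: the symplectic condition holds
```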
define
$$[u,v]_{q,p} = \frac{\partial u}{\partial q_i} \frac{\partial v}{\partial p_i} - \frac{\partial u}{\partial p_i}\frac{\partial v}{\partial q_i}$$
similarily, the definition can be written as
$$[u,v]_{\eta} = \left( \frac{\partial u}{\partial \vec{\eta}} \right)^T J \left( \frac{\partial v}{\partial \vec{\eta}} \right)$$
The above two relations are written neatly as $[\eta, \eta]_{\eta} = J$
The relation $[\zeta, \zeta]_\eta = J = [\zeta, \zeta]_\zeta$ is called the fundamental Poisson bracket. It is invariant under canonical transformations.
Canonical transformations form a group. The subgroup that depends analytically on continuous parameters forms a Lie group.
A Lie group of parameters $\theta_i$ lies on a flat vector space whose basis vectors constitute a Lie algebra satisfying the Poisson bracket $$[u_i, u_j] = \sum_k c_{ij}^k u_k$$ The elements of the Lie group are generated by $$Q(\theta_i) = \exp \left[ \frac{i}{2} \sum \theta_i u_i \right]$$
The Pauli matrices form a representation of the rotation group; the vectors add, in a sense.
Demand that the canonical transformation gives $K(Q,P) = 0$; assuming an $F_2$-type generating function, we then have
$$H(q_i; \frac{\partial F_2}{\partial q_i}; t) + \frac{\partial F_2}{\partial t} = 0$$
The equation is the Hamilton-Jacobi eq.
Write $S = S(q_i; \alpha_i; t)$ as the solution for $F_2$, where the $\alpha_i$ are constants of integration. Then, taking the $\alpha_i$ as the new momenta $P_i$, we have from the generating function properties
$$p_i = \frac{\partial S}{\partial q_i}, \quad Q_i = \frac{\partial S}{\partial P_i} = \frac{\partial S}{\partial \alpha_i} \equiv \beta_i$$
Then we have parameterization
$$q_j = q_j(\alpha, \beta, t)$$
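As a standard worked example (not from the notes above, but following the same recipe): the 1D harmonic oscillator, $H = \frac{p^2}{2m} + \frac{1}{2}m\omega^2 q^2$.
$$\frac{1}{2m}\left(\frac{\partial S}{\partial q}\right)^2 + \frac{1}{2}m\omega^2 q^2 + \frac{\partial S}{\partial t} = 0, \qquad S = W(q,\alpha) - \alpha t$$
$$\beta = \frac{\partial S}{\partial \alpha} = \int \frac{m\, dq}{\sqrt{2m\alpha - m^2\omega^2 q^2}} - t = \frac{1}{\omega}\arcsin\left(q\sqrt{\frac{m\omega^2}{2\alpha}}\right) - t$$
$$\Rightarrow \quad q = \sqrt{\frac{2\alpha}{m\omega^2}}\,\sin\big(\omega(t+\beta)\big)$$
with $\alpha$ the conserved energy: exactly the parameterization $q_j = q_j(\alpha, \beta, t)$.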
Hamilton's principal function is the generator of a canonical trans. to constant coordinates and momenta?
In charge-free space, the value of the electrostatic potential at any point is the average of the potential over the surface of any sphere centered on that point. electrostatics = coulomb's law at various extents and contexts - $\rho \rightarrow$ potential $\rightarrow \vec{E} \rightarrow$ force
| ESU | SI |
|--|--|
| $k=1$ | $k=(4\pi\epsilon_0)^{-1}$ |
| charge in units of statcoulomb | charge in coulombs |
| field in statvolts/meter | field in volts/meter |
$\delta$ has units of inverse length
$\vec{E}$'s gauss law depends on 1. inverse square law, 2. central force nature, 3. linear superposition
differential form $\approx$ differential equation
neumann's problem: normal derivatives specified
The Green's function satisfies $$\nabla^2 \left(\frac{1}{|\vec{x}- \vec{x}'|}\right) = - 4 \pi \delta(\vec{x}- \vec{x}')$$
Green Functions for Poisson's Equation
Obtaining Green Functions from the Method of Images
LN 3.47 discussion: pg 135 star
the component $F_D$ of the full Dirichlet Green function $G_D$ can be determined by the method of images in some cases.
I wish I knew these well before trying exercises in chapter 3 of Jackson!
- $P_l$ is $l$-th order in $x$
- $P_l$ has only even/odd powers if $l$ is even/odd
- $P_l(1) = 1, \quad P_l(-1) = (-1)^l$
- $P_l(0)=\frac{(-1)^n(2n-1)!!}{2^n n!}$ for even $l = 2n$; $P_l(0)=0$ for odd $l$
- $\nabla^2 P_l(\cos \theta) = - \frac{l(l+1)}{r^2}P_l(\cos \theta)$
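A quick numerical spot-check of these properties (assuming scipy is available; the double factorial is computed by hand so the $l = 0$ case works):

```python
import numpy as np
from math import factorial
from scipy.special import eval_legendre

for l in range(6):
    # P_l(1) = 1 and P_l(-1) = (-1)^l
    assert np.isclose(eval_legendre(l, 1.0), 1.0)
    assert np.isclose(eval_legendre(l, -1.0), (-1.0)**l)
    if l % 2 == 1:
        # P_l(0) = 0 for odd l
        assert np.isclose(eval_legendre(l, 0.0), 0.0)
    else:
        # P_l(0) = (-1)^n (2n-1)!! / (2^n n!) for even l = 2n
        n = l // 2
        double_fact = np.prod(np.arange(1, 2 * n, 2), dtype=float)  # (2n-1)!!, equals 1 for n = 0
        assert np.isclose(eval_legendre(l, 0.0), (-1)**n * double_fact / (2**n * factorial(n)))

print("all Legendre checks passed")
```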
$\langle\cdot|\cdot\rangle = \int_0^{2\pi}d\phi \int_0^{\pi} \sin\theta\, d\theta\, [\dots]$ - to have finite solutions on $[-1,1]$ the parameter $l$ must be a nonnegative integer, and the integer $m$ takes on $2l+1$ values between $\pm l$. These solutions are called the associated Legendre functions $P^m_l (x)$,
$$P^m_l (x) = (-1)^m (1-x^2)^{m/2} \frac{d^m}{dx^m} P_l(x)$$ or, in Rodrigues form: $$P^m_l (x) = \frac{(-1)^m}{2^l l!} (1-x^2)^{m/2} \frac{d^{l+m}}{dx^{l+m}}[(x^2-1)^l]$$
the addition theorem expresses a Legendre polynomial of order $l$ in the angle $\gamma$ in terms of products of the spherical harmonics of the angles $\theta,\phi$ and $\theta',\phi'$: $$P_l(\cos \gamma) = \frac{4\pi}{2l+1} \sum_{m=-l}^{l} Y^*_{lm}(\theta',\phi') Y_{lm}(\theta,\phi)$$ - gives the expansion of $\frac{1}{|\vec{r} - \vec{r}'|}$
When complex notation is used, keep in mind that we are only talking about the real part.
dispersive: whenever the speed of wave depends on its frequency, the medium is dispersive
Microcanonical ensemble: macrostates defined by (N,V,E)
The ensemble average of any physical quantity is identical to the value one expects to obtain on measurement
The volume $\omega = \int d^{3N}p\, d^{3N}q$ integrated over the allowed region is the volume of the allowed region in phase space; it gives a direct measure of the multiplicity of states
=> we are led to define $\omega_0$ as the volume of a single state in phase space => $\omega_0 \sim h^{3N}$
Transition from quantum mechanical to classical
CM partition function:
$$Q_N = \frac{1}{N! h^{3N}} \int d\omega\, e^{-\beta H(p,q)}$$
where $d\omega = d^{3N}p d^{3N}q$
QM partition function: $$Q_N = \int_0^\infty e^{-\beta E} g(E)\, dE$$
Microcanonical Ensemble: $\Omega = \Omega(N,V,E)$ Canonical Ensemble: $\Omega = \Omega(N,V,T)$
In canonical ensemble, $\textbf{Prob}(r) = \frac{e^{-\beta E_r}}{\sum_i e^{-\beta E_i}}$. This result can be obtained in two ways: 1. system in thermal equilibrium with large heat reservoir, 2. ensemble approach, then find most probable macrostate via lagrange multipliers.
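A tiny numerical example of the canonical probability formula above, for an assumed three-level system:

```python
import numpy as np

beta = 1.0
E = np.array([0.0, 1.0, 2.0])      # assumed energy levels

weights = np.exp(-beta * E)
P = weights / weights.sum()        # Prob(r) = exp(-beta E_r) / sum_i exp(-beta E_i)

print(P, P.sum())                  # probabilities, which sum to 1
```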
fundamental assumption: a closed system is equally likely to be in any of the quantum states accessible to it. By accessible, we mean 'compatible' with the physical description (temperature, volume, energy, pressure, etc.). This has nothing to do with which state the system will collapse to, but with the number of states compatible with that description (the degeneracy). Usually this number is large.
an ensemble is a collection of many systems, all constructed alike. Each system in the ensemble is a replica of the actual system in one of the quantum states accessible to the system. If there are g accessible states, then there will be g systems in the ensemble, one system for each state. The concept of ensemble facilitates turning the fundamental assumption into mathematical language.
an ensemble of systems is composed of many systems, all constructed alike, each in one of the accessible states
assuming lattice vibrational excitations: $\epsilon = s\hbar \omega$. Consider only mode $\omega$.
$$Z_\omega = \sum_s \exp(-s\hbar \omega / \tau) = \frac{1}{1- e^{-\hbar \omega/ \tau}}$$
$$\left< s \right>_\omega = \frac{\left< \epsilon \right>_\omega}{\hbar \omega} = \frac{\sum_s s \exp(-s\hbar \omega / \tau)}{\sum_s \exp(-s\hbar \omega / \tau)} = \frac{1}{e^{\hbar \omega / \tau} - 1}$$
for bosons, particle # need not be conserved, and $\left< s \right>_\omega$ can be interpreted as the thermal equilibrium expec number of $\omega$-energy photons/phonons in the system. This formalism is completely general to all bosons
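A quick numerical check of $Z_\omega$ and $\left< s \right>_\omega$ above, truncating the geometric sum (the value $\hbar\omega/\tau = 0.7$ is an arbitrary choice):

```python
import numpy as np

x = 0.7                            # hbar*omega/tau, arbitrary
s = np.arange(0, 400)              # truncation is fine: exp(-s*x) decays fast

weights = np.exp(-s * x)
Z = weights.sum()
s_mean = (s * weights).sum() / Z   # thermal average <s>

print(np.isclose(Z, 1.0 / (1.0 - np.exp(-x))))      # -> True
print(np.isclose(s_mean, 1.0 / (np.exp(x) - 1.0)))  # -> True
```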
In the realm of thermal physics, $\left< \epsilon \right>_\omega$ "IS" $\epsilon_\omega$. So lifting our discussion to all $\omega$:
$$U = \sum_\omega D(\omega) \epsilon_\omega$$
Now transition to phase space with a standard treatment of degeneracy gives
$$\frac{U}{V} = \frac{\pi^2}{15 \hbar^3 c^3} \tau^4$$
$$u_\omega = \frac{\hbar}{\pi^2 c^3} \frac{\omega^3}{ e^{\hbar \omega / \tau} - 1}$$
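Integrating $u_\omega$ over all $\omega$ reproduces the $\tau^4$ law above; the numerical constant traces back to $\int_0^\infty x^3/(e^x-1)\,dx = \pi^4/15$ with $x = \hbar\omega/\tau$. A quick check (scipy assumed available):

```python
import numpy as np
from scipy.integrate import quad

# Dimensionless integral behind the tau^4 law; expm1(x) = e^x - 1
val, err = quad(lambda x: x**3 / np.expm1(x), 0, np.inf)
print(val, np.pi**4 / 15)   # both ~ 6.4939
```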
Kittel claims this is where all quantum theory began
$$\left< s(\omega) \right> = \frac{1}{\exp (\hbar \omega / \tau) - 1}$$
the form $x = A e^{i\omega t}$ for harmonic oscillators leads us to assume amplitude and frequency are independent for phonons
N coupled oscillators have 3N modes (reference: Goldstein 6.4), so phonons have 3N modes, in contrast to $\infty$ for photons. A bound is thus introduced for the integration in phase space, given by the Debye frequency.
Heat flows from higher temperature to lower temperature. Particles flow from higher chemical potential to lower chemical potential.
ex: battery with electrons
diffusive contact: can exchange heat and particles
(reminder) Helmholtz free energy: $F = U - \tau \sigma$ will be a minimum for a system in thermal equilibrium
Then consider two systems in diffusive contact at thermal equilibrium
$$F = F_1 + F_2 = U_1 + U_2 - \tau (\sigma_1 + \sigma_2)$$
demand conservation of particle number $N = N_1 + N_2$, $dN_1 + dN_2 = 0$, and that helmholtz free energy at minimum $dF = 0$
The infinitesimal change of $F$ is
$$dF = \left(\frac{\partial F_1}{\partial N_1}\right)_\tau dN_1 + \left(\frac{\partial F_2}{\partial N_2}\right)_\tau dN_2 = 0$$
when combined with $dN_1 + dN_2 = 0$, it gives
$$\left(\frac{\partial F_1}{\partial N_1}\right)_\tau = \left(\frac{\partial F_2}{\partial N_2}\right)_\tau$$
which facilitates the definition $\mu(\tau, V, N) = \left( \frac{\partial F}{\partial N} \right)_{\tau, V}$
This facilitates formulating diffusive equilibrium in terms of chemical potential, in analogy with thermal equilibrium in terms of temperature.
THE CHEMICAL POTENTIAL IS EQUIVALENT TO A TRUE POTENTIAL ENERGY!
Because Schutz does such a good job with clarity, the notes here only highlight key points and equations.
$$l = \int_{\lambda_0}^{\lambda_1}|\vec{V} \cdot \vec{V}|^{1/2} d\lambda$$
$$\Gamma^\gamma_{\beta\mu} = \frac{1}{2}g^{\alpha \gamma}(g_{\alpha\beta,\mu}+g_{\alpha\mu,\beta}-g_{\beta\mu,\alpha})$$
$$V^\alpha_{,\beta} = \frac{\partial V^\alpha}{\partial x^\beta}$$
$$V^\alpha_{;\beta} = \frac{\partial V^\alpha}{\partial x^\beta} + V^\mu \Gamma^\alpha_{\mu \beta}$$
$$P_{\alpha;\beta} = P_{\alpha,\beta} - \Gamma^\mu_{\alpha \beta} P_\mu$$
$$T^{\alpha \beta}{}_{;\gamma} = T^{\alpha\beta}{}_{,\gamma} + \Gamma^\alpha_{\mu \gamma}T^{\mu \beta} + \Gamma^\beta_{\mu \gamma}T^{\alpha \mu}$$
$$g_{\alpha \beta;\gamma} = 0 $$
in any basis
$$V^\alpha_{;\alpha} = V^\alpha_{,\alpha} + \Gamma^\alpha_{\mu\alpha} V^\mu$$
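A small sympy sketch of the $\Gamma^\gamma_{\beta\mu}$ formula above, applied to the unit 2-sphere metric $\mathrm{diag}(1, \sin^2\theta)$ (an example assumed here, not from the notes):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])
ginv = g.inv()

def Gamma(a, b, c):
    # Gamma^a_{bc} = (1/2) g^{ad} (g_{db,c} + g_{dc,b} - g_{bc,d})
    return sp.simplify(sum(sp.Rational(1, 2) * ginv[a, d] *
                           (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
                           for d in range(2)))

print(Gamma(0, 1, 1))   # Gamma^theta_{phi phi} = -sin(theta)*cos(theta)
print(Gamma(1, 0, 1))   # Gamma^phi_{theta phi} = cot(theta)
```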
Let a vector $V$ be carried along a curve parameterized by $\lambda$ with tangent $U$; parallel transport means
$$U^\beta V^\alpha{}_{;\beta} = 0,$$
which in a locally inertial frame reduces to $\frac{dV^\alpha}{d\lambda} = 0$.
a geodesic is a curve that parallel-transports its own tangent vector, a generalization of the idea of a "straight line".
In curved space, parallel lines when extended do not remain parallel.
riemann curvature tensor: quantifies the change of a vector under parallel transport around an infinitesimally small rectangular loop.
$$R_{\alpha \beta \mu \nu} = -R_{\beta \alpha \mu \nu} = -R_{\alpha \beta \nu \mu} = R_{\mu \nu \alpha \beta}$$ $$R_{\alpha \beta \mu \nu} + R_{\alpha \nu \beta \mu} + R_{\alpha \mu \nu \beta} = 0$$
it has a relationship to commutator:
$$[\nabla_{\alpha}, \nabla_{\beta}] V^{\mu} = R^{\mu}{}_{\nu \alpha\beta} V^{\nu} $$
$$R_{\alpha\beta\mu\nu;\lambda} + R_{\alpha\beta\lambda\mu;\nu} + R_{\alpha\beta\nu\lambda;\mu} = 0 $$
$$R_{\alpha \beta} \equiv R^\mu{}_{\alpha \mu \beta} = R_{\beta\alpha}$$
$$G^{\alpha \beta} \equiv R^{\alpha \beta} - \frac{1}{2}g^{\alpha \beta} R = G^{\beta\alpha}$$
$$G^{\alpha \beta}_{;\beta} = 0$$
I began with the 1st edition, doing exercises from that, then switched to the 3rd edition, on 4/25/2024, and relabeled exercise numbers. Interestingly, within exercises I have done, the exercise numbers match the newer edition.
$$(\phi,\psi) = (\psi,\phi)^*, \quad (\phi,a\psi_1+b\psi_2) = a(\phi,\psi_1) + b(\phi,\psi_2)$$ $$(a\phi_1+b\phi_2,\psi) = a^*(\phi_1,\psi) + b^*(\phi_2,\psi)$$ - Wigner proved that for any symmetry transformation, there is a corresponding operator $U$ in Hilbert space that is either unitary and linear or antiunitary and antilinear - differential probability $\Leftrightarrow$ amplitude squared
Unitary and Linear means
$$(U\psi,U\phi) = (\psi,\phi) \quad U(a\psi_1 + b\psi_2) = aU\psi_1 + bU\psi_2$$
antiunitary and antilinear means
$$\braket{U\psi}{U\phi} = \braket{\psi}{\phi}^* \quad U(a\psi_1 + b\psi_2) = a^*U\psi_1 + b^*U\psi_2$$
the adjoint for unitary is defined as
$$(\phi,A\psi) = (A^\dagger \phi,\psi)$$
while the adjoint for an antiunitary operator is defined as
$$(\phi,A^\dagger \psi) = (A\phi,\psi)^* = (\psi, A\phi)$$
$$\textbf{Rep}(\Lambda_1)\textbf{Rep}(\Lambda_2)= N(\Lambda_1, \Lambda_2)\textbf{Rep}(\Lambda_1 \Lambda_2) \simeq \textbf{Rep}(\Lambda_1 \Lambda_2)$$
in other words, representations of symmetry transformations tell us how the states transform in response to this symmetry transformation. - [❤️] we are looking for Poincare-invariant theories thus we are interested in unitary representations of the Poincare group.
The proper orthochronous Lorentz group SO(1,3) consists of all Lorentz transformations that preserve the orientation and direction of time and are connected to the identity transformation.
any Lorentz transformation is either proper and orthochronous, or may be written as the product of an element of the proper orthochronous Lorentz group with one of the discrete transformations: parity, time reversal, or their product.
or equivalently with $P=\{P^1,P^2,P^3\}$, $J=\{J^{23},J^{31},J^{12}\}$, $K=\{J^{01},J^{02},J^{03}\}$, we have
$$ [J_i, J_j] = i \epsilon_{ijk} J_k, \quad [J_i, K_j] = i \epsilon_{ijk} K_k, \quad [K_i, K_j] = -i \epsilon_{ijk} J_k, \quad [J_i, P_j] = i \epsilon_{ijk} P_k$$
$$ [K_i, P_j] = - i H \delta_{ij}, \quad [K_i, H] = -i P_i, \quad [J_i, H] = [P_i, H] = 0 $$
In this section, we consider the representations of the Lorentz group and classify the states by their little group; we find that this classification corresponds to that of massive and massless particles.
- We begin with a general statement of how the representation of a Lorentz transformation should act on a state (2.5.3).
- No matter how we Lorentz transform, we can never turn a time-like (massive) momentum into a light-like (massless) one, so we can classify particles by their momenta into 'cliques', where momenta from two different cliques are never related by a Lorentz transformation.
- We pick a standard momentum in each 'clique', and define a standard transformation that takes the standard momentum to any other momentum within the clique.
- The standard momentum and standard transformation above allow us to factor a general operator induced by a Lorentz transformation into an operator induced by the standard transformation and some other operator induced by a transformation that fixes the momentum; we call this 'other transformation' the 'little group'.
- On the little group: $p^\mu = L^{\mu}{}_{\nu}(p)\, k^{\nu}$ with $k^\nu$ the standard momentum, along with $\psi_{p,\sigma} \equiv N(p)\, U(L(p))\,\psi_{k, \sigma}$, gives $U(\Lambda) \psi_{p,\sigma} = N(p)\, U(\Lambda L(p))\, \psi_{k,\sigma}$. Observe that $W(\Lambda,p) = L^{-1}(\Lambda p)\,\Lambda\, L(p)$ belongs to the subgroup of the homogeneous Lorentz group that fixes $k$; this is a more rigorous definition of Wigner's little group.
- Some simple further derivation leads quickly to (2.5.11), which turns the problem of finding the representations of the Lorentz group into finding representations of the little group. At a more fundamental level, what (2.5.11) says is that the representation of a general $\Lambda$ can be written as scaling by some number $\frac{N(p)}{N(\Lambda p)}$ together with a representation of the little group $D(W, p)$.
- We fix the normalization factor in (2.5.11), shown in (2.5.18).
- [❤️] the little group is SO(3) for massive particles and ISO(2) for massless particles
$$\textbf{P} J \textbf{P}^{-1} = J \quad \textbf{P} K \textbf{P}^{-1} = -K \quad \textbf{P} P \textbf{P}^{-1} = -P$$
$$\textbf{T} J \textbf{T}^{-1} = -J \quad \textbf{T} K \textbf{T}^{-1} = K \quad \textbf{T} P \textbf{T}^{-1} = -P$$
$$T \Psi_{\vec{p}, \sigma} = \xi_{\sigma} \exp{\pm i \pi \sigma} \Psi_{P\vec{p}, \sigma}$$
again the $\pm$ depends on whether the second component of $\vec{p}$ is positive or negative
further derivations lead to the Lippmann-Schwinger formula $$\Psi_\alpha^{\pm} = \Phi_\alpha + \int d\beta\, \frac{T_{\beta \alpha}^{\pm} \Phi_\beta}{E_\alpha - E_\beta \pm i\epsilon}, \qquad T_{\beta \alpha}^{\pm} \equiv \braket{\Phi_\beta}{V \Psi_\alpha^{\pm}}$$
as a convenient representation of the factor $\frac{1}{E_\alpha - E_\beta \pm i \epsilon}$, we have $$\frac{1}{E \pm i \epsilon} = \frac{\mathscr{P}}{E} \mp i \pi \delta(E)$$ where $\mathscr{P}$ denotes the principal value; concretely, $\frac{\mathscr{P}}{E}$ stands for $\frac{E}{E^2 + \epsilon^2}$ in the limit $\epsilon \rightarrow 0$
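A numerical illustration of this identity (my own sketch, with an assumed Gaussian test function $f(E) = e^{-E^2}$): as $\epsilon \to 0$, $\int f(E)/(E+i\epsilon)\,dE \to \mathscr{P}\!\int f(E)/E\,dE - i\pi f(0)$; here the principal value part vanishes by symmetry, so the limit is just $-i\pi$.

```python
import numpy as np

E = np.linspace(-10, 10, 400001)
dE = E[1] - E[0]
f = np.exp(-E**2)

for eps in (1e-1, 1e-2, 1e-3):
    integral = np.sum(f / (E + 1j * eps)) * dE   # simple Riemann sum
    print(eps, integral)   # imaginary part -> -pi ~ -3.14159, real part -> 0
```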
If there were no interaction then the in and out states would be the same, meaning $$S_{\beta \alpha} = \delta(\alpha-\beta),$$ thus the rate of the reaction $\Psi_\alpha \rightarrow \Psi_\beta$ is $|S_{\beta \alpha} - \delta(\alpha - \beta)|^2$. (The in and out states are not differentiated; they are only labeled differently here, and they only differ in how they are created, i.e. in their asymptotic time evolution; in other words $\braket{\Psi_\beta^-}{\Phi_\alpha^+} = \braket{\Psi_\beta}{\Phi_\alpha}$.)
S matrix connects two complete sets of orthonormal states, it must be unitary $$\int d\beta S_{\beta \gamma}^* S_{\beta \alpha} = \braket{\Psi_\gamma}{\Psi_\alpha} = \delta(\gamma - \alpha), \quad S^\dag S = 1$$
If we define the operator analog of the S matrix by $$\braket{\Phi_\beta}{S \Phi_\alpha} = S_{\beta \alpha}$$ we will find $$S = \Omega(\infty)^\dag \Omega(-\infty) = U(\infty, -\infty)$$ $$U(\tau, \tau_0) = \exp{(iH_0 \tau)}\exp{(-iH(\tau - \tau_0))}\exp{(-iH_0 \tau_0)}$$
As another caveat, how do we know $U(\Lambda)$ acts the same on the in and out states? We don't, but we can apply the transformation rules in Hilbert space and we will find that the inner product will be invariant if the theory satisfies
$$ [S, U_0(\Lambda)] = 0,$$ which, when expressed in terms of the infinitesimal Lorentz transformations, is
$$[H_0,S]=[P_0,S]=[J_0, S]=[K_0,S]=0$$
Weinberg also goes on to show that an alternative formulation of the condition also makes the S-matrix Lorentz invariant: $$[V,P_0] = [V, J_0]=0$$
The naive definition of the parity operator is $$U(P) \Psi_{p, \sigma} = \eta \Psi_{P p, \sigma} $$where $\eta$ is the intrinsic parity (a phase factor that arises as an eigenvalue of the parity operation) of the state. In a subtle way, this gives us the freedom to redefine $P$ if $P$ were to be conserved.
Parity being conserved means $$[P, H_0] = [P, V] = 0$$ Having a conserved internal symmetry $T$ means
$$[U_0(T), H_0] = [U_0(T), V] = 0$$
thus
$$[P U_0(T), H_0] = [P U_0(T), V] = 0$$ $$P' = PU_0(T) \text{ for internal symmetry } T$$
- Must the intrinsic parity always take on $\pm 1$, ignoring normalization?
- Yes: suppose it doesn't, so that $$P^2 \Psi = e^{i\theta} \Psi $$ If the theory we are considering has a continuous internal symmetry, then we can redefine $\textbf{P} \rightarrow \textbf{P} I$ for some internal symmetry $I$ that cancels the $e^{i\theta}$ exactly, and this new $\textbf{P}'$ will give intrinsic parity $\pm 1$; we then redefine it to be the parity operator. $\Box$
- No, if we don't have a continuous internal symmetry.
- $P^2 \Psi = \eta \Psi$ means $P^2$ acts like an internal symmetry
$$S_{\beta \alpha} \rightarrow -e^{2i(\delta_\alpha + \delta_\beta)} S^*_{\textbf{P} \textbf{T} \beta \textbf{P} \textbf{T} \alpha}$$
$\textbf{P} \textbf{T}$ would preserve the momenta while swapping spin, so invariance of the S-matrix under $\textbf{P} \textbf{T}$ implies there wouldn't be any preference for the electron in the decay $\text{Co}^{60} \rightarrow \text{Ni}^{60} + e^- + \bar{\nu}$ to be emitted in the same or opposite direction to the $\text{Co}^{60}$ spin - the 1957 experiment did not rule out time-reversal symmetry immediately, but ruled out PT by demonstrating that the above statement is false
It is understood today that C is not conserved in the weak interaction, just like how P is not conserved
It is thought that CP is "more conserved" than C and P in the weak force, but nevertheless CP is still not an exactly conserved quantity
There is good reason to believe that CPT is conserved exactly, which would be really nice as it 1. gives a good interpretation of antiparticles, and 2. the fact that CPT commutes with the Hamiltonian tells us that a particle and its antiparticle have the same mass
Weinberg: this section is more like a mnemonic, because it seems like no interesting open problems in physics hinge on getting the fine points right regarding these matters
$$\Im M_{\alpha \alpha } = - \pi \int d\beta \delta^{4}(p_\beta - p_\alpha ) |M_{\alpha \beta }|^2$$
which implies
$$\int d\beta \delta^{4}(p_\beta - p_\alpha ) |M_{\alpha \beta }|^2 = \int d\beta \delta^{4}(p_\beta - p_\alpha ) |M_{\beta \alpha }|^2$$
Cluster Decomposition Principle - We can express the Hamiltonian by giving its matrix elements between states with arbitrary numbers of particles. We will do so via creation and annihilation operators; the benefit of this approach is that if a certain condition on the $a$, $a^\dag$'s in the Hamiltonian is satisfied, the S-matrix will satisfy the cluster decomposition principle, which implies locality is obeyed.
local: events separated by spacelike vectors can not influence each other
Quantum Fields and Antiparticles
In this chapter, quantum fields are introduced; during their construction, as a result of the union between special relativity and quantum mechanics, we will encounter antiparticles.
See (5.1.6, 5.1.7); the $D_{\bar{l} l}$ on the RHS is the matrix that Weinberg is referring to
(4.2.12) gives us the transformation rule of $a_{\textbf{p}}^\dag$; we take its adjoint and find the transformation rules of both $a_{\textbf{p}}$ and $a_{\textbf{p}}^\dag$. Then, putting the transformation rules of $a_{\textbf{p}}$, $a_{\textbf{p}}^\dag$ together with those of $\Psi^\pm$, we find the transformation rules of the coefficients $u,v$ of $a_{\textbf{p}}$, $a_{\textbf{p}}^\dag$ in $\Psi^\pm$ under a general Lorentz transformation (5.1.13, 5.1.14); these are the fundamental requirements that will allow us to calculate the $u,v$
[❤️] We then take the rule above, and plug in pure translations, boosts, and rotations, and find
It would be fair to say $\mathscr{H}$ is a polynomial in $\psi^+$, $\psi^-$
$$\mathscr{H} = \sum_{N,M} \sum_{l'_1 \dots l'_N} \sum_{l_1 \dots l_M} g_{l'_1 \dots l'_N,\, l_1 \dots l_M}\, \psi^-_{l'_1}\dots\psi^-_{l'_N}\, \psi^+_{l_1}\dots\psi^+_{l_M}$$
Weinberg claims that in order for the Hamiltonian to commute with $Q$, it is necessary that the Hamiltonian be formed out of fields that have simple commutation relations with $Q$, meaning $[Q, \psi_l] = -q_l \psi_l$. (I don't understand why this HAS to be the case right now; I see why it would be nice, but I'll take his word for it.) - But this is clearly not the case for $\psi = a_{\textbf{p}} + a_{\textbf{p}}^\dag$, unless the charge $q$ vanishes, since $[Q,\psi] = [Q, a_{\textbf{p}} + a_{\textbf{p}}^\dag] = -q(a_{\textbf{p}}- a_{\textbf{p}}^\dag)$. - A possible solution would be to introduce another particle that carries the opposite charge. This is the idea behind antiparticles.
causal: fields commute at spacelike separations.
$$\Psi^+_{l}(x) = \sum_{\sigma, n} \int d^3p\; u_{l} (x;\textbf{p}, \sigma, n)\, a(\textbf{p}, \sigma, n) $$
having a scalar field means we can remove this $l$ index; again, keep in mind that the field only takes on a scalar value at each point in spacetime! - [👉] the field being just a scalar means the matrix $D_{\bar{l} l}(\Lambda)$ that we were talking about is simply a scalar $D(\Lambda)$! - [👉] In his book Weinberg considers the simplest representation, where $D(\Lambda)=1$
- Recall that the $D^{(j)}$ representation of the rotation group has dimension $2j + 1$, so for this representation to be a scalar representation we necessarily have spin $= 0$. A scalar field necessarily describes a particle with 0 spin.
- [👉] we can drop the $\bar{\sigma},\sigma$ labels because they only take on the value of 0; assuming we are working with one species of particles, we can drop the $n$ as well, so we write $$u = u(p), \quad v = v(p)$$
- it is conventional to normalize the annihilation and creation fields so that $u(0) = v(0) = \sqrt{\frac{1}{2m}}$, so we have $$u(\textbf{p}) = \sqrt{\frac{1}{2p^0}}, \quad v(\textbf{p}) = \sqrt{\frac{1}{2p^0}}$$
$$ \phi = \phi^+ + \phi^{+c\dag}$$ $$ = \int \frac{d^3 p}{(2\pi)^{3/2} (2 p^0)^{1/2}} \left[ a_{\textbf{p}}e^{ip\cdot x} + a_{\textbf{p}}^{c\dag}e^{-ip \cdot x}\right] $$
We first follow the same procedure as we did for the scalar field (5.3.1-5.3.5); we encounter a difference when it comes to the spin of the particle: a vector field can have nontrivial spin (5.3.6, 5.3.7). Thus we need to examine spin by looking at the rotation transformation properties of the vector field, i.e. at the rotation generators $\mathscr{J}^\mu_\nu$
- The rotation generators are $$\left(\begin{array}{rrr} 0 & 0 & 0 \\ 0 & 0 & -i \\ 0 & i & 0 \end{array}\right), \left(\begin{array}{rrr} 0 & 0 & i \\ 0 & 0 & 0 \\ -i & 0 & 0 \end{array}\right), \left(\begin{array}{rrr} 0 & -i & 0 \\ i & 0 & 0 \\ 0 & 0 & 0 \end{array}\right)$$ squaring them and adding them up yields $2\delta^i_j$.
- We know that in the four-vector representation of the rotation group, the time components of the generators vanish; plugging this into (5.3.6, 5.3.7) we obtain (5.3.12, 5.3.14). Plugging our last result of $2\delta$ into (5.3.6, 5.3.7), we find (5.3.13, 5.3.15). (Recall that $(J^{(s)})^2 = s(s+1)$, so it may seem like for the $2\delta$ to be true we must have $s = 1$, but that would be the case only if the rotation generators were the entire story of this representation. They aren't, because there is also the time component. If the spatial components are nonzero and nontrivial, then we necessarily have $s=1$. However, there is another solution of setting the spatial components trivially to 0, which allows us to set $s=0$.)
- Possibility 1: at $\textbf{p}=0$, only $u^0, v^0$ are nonzero and $s$ (or $j$) equals 0.
- Possibility 2: at $\textbf{p} =0$, only $u^i, v^i$ are nonzero and $s = 1$.
- In 4 spacetime dimensions, Weinberg considers an example set of $\gamma$ matrices (5.4.17), built from the Pauli matrices.
- recall that the Pauli spin matrices give the projection of the electron's spin along a direction.
- using the relation between $\mathscr{J}$ and $\gamma$ given in (5.4.6), we find the Lorentz group generators (5.4.19, 5.4.20), and we find that these generators are block-diagonal, meaning we have found a reducible representation that is simply a direct sum of two irreducible ones.
$$\sum_{\bar{\sigma}}u_{\bar{l}}(0, \bar{\sigma}) J^{(j)}_{\bar{\sigma} \sigma} = \sum_{l} \mathscr{J}_{\bar{l}l} u_l(0, \sigma)$$
we find the two equations above (5.5.3). - By a theorem from group theory, we find that the spin of the representation must be $\frac{1}{2}$, and we further find the basis vectors (equations above 5.5.6 )
The Feynman Rules
Feynman was led to these diagrammatic rules through his development of the path-integral approach, which will be the subject of section 9.
- The Dyson series expresses the matrix element as (6.1.1)
Recall the expression of $S$ in terms of Dyson series (3.5.10), using our newest creation and annihilation operator formalism, we put the free fields on both sides of (3.5.10), and putting the corresponding creation and annihilation operators between the free fields and $S$ yields (6.1.1)
This can be understood as the 'wave equation' of the particle, which contains everything we can possibly know about such a particle. The integral over those coefficients implies that this is a localized particle, or 'wave packet'.
The rules for calculating the S-matrix are conveniently summarized in terms of Feynman diagrams. Each vertex in the diagrams represents one of the $\mathscr{H}_i$, and each of the lines represents a pairing described above.
the canonical formalism
the real point of the Lagrangian formalism is that it provides a natural framework for the quantum mechanical implementation of symmetry principles
there must be an even number of second class constraints
qed
the requirement above can be restated as a principle of invariance: the matter action is invariant under the gauge transformations $$\delta A_\mu(x) = \partial_\mu \epsilon(x), \quad \delta \psi_l(x) = i \epsilon(x) q_l \psi_l(x)$$
Taking the radiation action to be that of (8.1.14), and imposing the equation of motion yields $0 = \frac{\delta}{\delta A_\nu} [I_\gamma + I_{M}] = \partial_\mu F^{\mu \nu} + J^\nu$, which is the inhomogeneous maxwell equations.
choosing the Coulomb gauge, and then applying Dirac's method of removing second class constraints, we are done.
path integral methods
Weinberg considers the transition into a state an infinitesimal increment of time later: $\langle q'; \tau + d\tau | q; \tau\rangle$
The idea being that instead of integrating over all paths from point to point, we are integrating over all field configurations that satisfy the initial and final conditions
in space and over a spin and species index $m$, and replacing $Q_a(t)$ with $Q_a(\vec{x},t)$, same for $P$.
for the 4-momentum reason
external field methods
effective action from external field
- $Z[J]$ is the vacuum-vacuum amplitude under an external current $J$ coupled to $\phi$; it is given by $$Z[J] = \sum_{N=0}^{\infty}\frac{(iW[J])^N}{N!} = \exp{(iW[J])}$$ where $iW[J]$ is the sum of all connected vacuum-vacuum amplitudes, counting permutations as different diagrams. This means finding $Z$ is finding $W$.
- define $\phi_J$ to be the vacuum expectation value of the operator $\Phi(x)$ in the presence of the current $J$, or equivalently $$\phi_J = \frac{\delta}{\delta J(x)}W[J]$$
- the quantum effective action is defined as $$\Gamma[\phi] \cong - \int d^4x\, \phi^r(x) J_{\phi r}(x) + W[J_\phi]$$ one finds $\frac{\delta \Gamma[\phi]}{\delta \phi^s(y)} = -J_{\phi s}(y)$; it is shown that $\Gamma[\phi]$ is the sum of all connected one-particle-irreducible graphs in the presence of $J_\phi$
- $W_\Gamma [J,g]$ stands for $W[J]$ when $I[\phi] \rightarrow g^{-1}\Gamma[\phi]$
- [❤️] we find $W$ via the effective action: $$iW[J] = \int_{\text{conn. tree}}\left[ \prod_{r,x}d\phi^r(x) \right] \exp{ \left[ i\Gamma[\phi] + i \int \phi^r(x)J_r(x) d^4x \right] }$$
application on scalar theory
- take the scalar action and add a position-independent external field $\phi_0(x) = \phi_0$, constant over all space, so that the action picks up a spacetime volume factor $\mathscr{V}_4 = \int d^4x$, and $\Gamma[\phi_0] = - \mathscr{V}_4 V(\phi_0)$
- $V(\phi_0)$ is known as the effective potential; calculating it to one-loop order gives (16.2.13, 16.2.14)
- the divergence of the effective potential can be absorbed into appropriate constants (16.2.15)
energy interpretation
most similar to helium in theories we have already studied
the 3-body difficulty is alleviated by considering $l=0$ states, where the angular momentum of the entire system comes only from the coupling of the spins. This gives a $j=3/2$ quartet (totally symmetric) and two $j=1/2$ doublets (each antisymmetric between two of the particles): $2\otimes 2 \otimes 2 = 4 \oplus 2 \oplus 2$
There are 3 experimental probes of elementary particle interactions: bound states, decays, scattering.
the Feynman rules stated for QED work with spin, but we often don't care about the spins in actual experiments. In those cases the relevant cross section is an average over all initial spin configurations and a sum over all final spin configurations.
I. Crystal structure, symmetry and types of chemical bonds. (Chapter 1) • The crystal lattice • Point symmetry • The 32 crystal classes • Types of bonding (covalent, ionic, metallic bonding; hydrogen and van der Waals).
• II. Diffraction from periodic structures (Chapter 2) • Reciprocal lattice; Brillouin zones • Laue condition and Bragg law • Structure factor; defects • Methods of structure analysis • HRXRD. Experimental demonstration in the Physics Lab using Bruker D8 Discover XRD
• III. Lattice vibrations and thermal properties (Chapter 3) • Elastic properties of crystals; elastic waves • Models of lattice vibrations • Phonons • Theories of phonon specific heat; thermal conduction. • Anharmonicity; thermal expansion • Raman Scattering by phonons. Experimental demonstration in the Physics Lab using Ar-laser/SPEX 500M, CCD –based Raman Scattering setup
• IV. Electrons in metals (Chapters 4–5) • Free electron theory of metals • Fermi Statistics • Band theory of solids
• V. Semiconductors (Chapters 6–7) • Band structure. • Electron statistics; carrier concentration and transport; conductivity; mobility • Impurities and defects • Magnetic field effects: cyclotron resonance and Hall effect • Optical properties; absorption, photoconductivity and luminescence • Basic semiconductor devices • Photoluminescence. Experimental demonstration in the Physics Lab using Nd:YAG laser/SPEX –based Photoluminescence setup
• VI. Dielectric properties of solids (Chapters 8) • Dielectric constant and polarizability (susceptibility) • Dipolar polarizability, ionic and electronic polarizability • Piezoelectricity; pyro- and ferroelectricity • Light propagation in solids
• VII. Magnetism (Chapters 9) • Magnetic susceptibility • Classification of materials; diamagnetism, paramagnetism • Ferromagnetism and antiferromagnetism • Magnetic resonance • Multiferroic Materials • VIII. Superconductivity (Chapter 10)
ordinary optical refraction happens at 500A in crystals
Bragg diffraction (crystal diffraction) can be used to select a particular wavelength component of a beam (monochromatization)
bragg law: $2d \sin{\theta} = n \lambda$, this is the recurring theme of this chapter and will be echoed in the next chapter
in the Fourier expansion of a function on the lattice, periodicity only allows terms with the same periodicity as the lattice; the allowed wavevectors define the reciprocal lattice. Thus we can say the function is expanded on the reciprocal lattice in Fourier space: $$n(x) = \sum_p n_p \exp\left(i \frac{2 \pi p}{a} x\right)$$ where $\frac{2 \pi p}{a}$ is a point in the reciprocal lattice, with dimension of inverse distance.
periodicity is the realm of fourier analysis
If $\vec{a}_1, \vec{a}_2, \vec{a}_3$ are primitive vectors of the crystal lattice, the primitive vectors of the reciprocal lattice $\vec{b}_1, \vec{b}_2, \vec{b}_3$ are given by $$\vec{b}_i = 2 \pi \frac{\vec{a}_j \times \vec{a}_k }{\vec{a}_i \cdot \vec{a}_j \times \vec{a}_k}$$
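A minimal numpy check of this formula against the defining property $\vec{b}_i \cdot \vec{a}_j = 2\pi\delta_{ij}$, using FCC primitive vectors as an assumed example (cubic lattice constant set to 1):

```python
import numpy as np

a1 = 0.5 * np.array([0.0, 1.0, 1.0])
a2 = 0.5 * np.array([1.0, 0.0, 1.0])
a3 = 0.5 * np.array([1.0, 1.0, 0.0])

vol = np.dot(a1, np.cross(a2, a3))              # cell volume a1 . (a2 x a3)
b1 = 2 * np.pi * np.cross(a2, a3) / vol
b2 = 2 * np.pi * np.cross(a3, a1) / vol
b3 = 2 * np.pi * np.cross(a1, a2) / vol

A = np.array([a1, a2, a3])
B = np.array([b1, b2, b3])
print(np.allclose(B @ A.T, 2 * np.pi * np.eye(3)))  # -> True
```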
the diffraction pattern of the crystal is a map of the reciprocal lattice
only waves whose wavevector $\vec{k}$ drawn from the origin terminates on the surface of the brillouin zone can be diffracted by the crystal
the simple cubic (sc) lattice has an sc reciprocal lattice
Lennard-Jones potential: a combination of the induced-dipole (or van der Waals, or London) attraction and Pauli-exclusion repulsion $$U(R) = 4 \epsilon \left[\left(\frac{\sigma}{R}\right)^{12} - \left(\frac{\sigma}{R}\right)^6\right]$$
if we neglect the kinetic energy of the inert gas atoms, the cohesive energy of an inert gas crystal is given by summing the lennard-jones potential over all pairs of atoms in the crystal.
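A quick numerical check on the pair potential itself (not the lattice sum), with $\epsilon = \sigma = 1$ as assumed units: the minimum sits at $R = 2^{1/6}\sigma$ with depth $-\epsilon$.

```python
import numpy as np

eps, sigma = 1.0, 1.0

def U(R):
    # Lennard-Jones pair potential
    return 4 * eps * ((sigma / R)**12 - (sigma / R)**6)

R = np.linspace(0.9, 3.0, 100001)
i = np.argmin(U(R))
print(R[i], 2**(1/6) * sigma)   # both ~ 1.1225
print(U(R[i]), -eps)            # both ~ -1.0
```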
in metals, bonds are formed due to lowering of valence electron energy as compared with free atoms without bonding
hydrogen bonds: an atom of H is attracted to 2 other atoms. The H-bond is important for the interaction between H2O molecules and, together with the electrostatic attraction of electric dipoles, is responsible for the properties of water and ice
the elastic properties of a crystal are described by treating it as a continuous homogeneous medium rather than a lattice. This is valid for elastic waves with $\lambda$ longer than $10^{-6}$ cm, i.e. frequencies below $10^{12}$ Hz.
when a wave propagates in a crystal, we model it with entire planes of atoms moving in phase with displacements either parallel or perpendicular to wave vector (longitudinal/transverse)
the author builds a model using Hooke's law, assuming contributions from planes beyond the two nearest planes vanish, then solves it as coupled oscillators. The dispersion relation is then obtained
only elastic waves in the first Brillouin zone are physically significant; those outside it can be translated by a reciprocal lattice vector into the first Brillouin zone.
at the boundaries of the first Brillouin zone, the solution is not a traveling wave but a standing wave. Recall that for diffraction, the boundaries of the first Brillouin zone give the only wavevectors that can be diffracted.
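A short sketch of the nearest-neighbor monatomic-chain dispersion referred to above, $\omega(K) = \sqrt{4C/M}\,|\sin(Ka/2)|$, with $C = M = a = 1$ as assumed units; the vanishing group velocity at $K = \pi/a$ is the standing-wave statement.

```python
import numpy as np

C = M = a = 1.0
K = np.linspace(0, np.pi / a, 2001)
omega = np.sqrt(4 * C / M) * np.abs(np.sin(K * a / 2))   # dispersion relation

v_group = np.gradient(omega, K)                          # d(omega)/dK
print(omega[-1], 2 * np.sqrt(C / M))   # maximum frequency, reached at the zone boundary
print(v_group[-1])                     # ~ 0: standing wave at K = pi/a
```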
zero point energy $\approx$ ground state energy
wave-like solutions do not exist for certain frequencies in polyatomic lattices. for diatomic, this is between $\sqrt{2C/M_1}$ and $\sqrt{2C/M_2}$. There is a frequency gap at the boundary $K_{max} = \pm \frac{\pi}{a}$ of first brillouin zone.
TODO: - [x] understand the connection between elastic waves, refraction, bragg law, first brillouin zone. When bragg condition is satisfied, traveling waves dont form, so we obtain standing waves that oscillate back and forth
$$\psi_{\vec{k}} (\vec{r}) = u_{\vec{k}} (\vec{r}) \exp[ i \vec{k} \cdot \vec{r}]$$
griffiths has a section on bloch's theorem in his qm book - bloch functions can be decomposed into a sum of traveling waves
- van der Waals potential ~ London potential ~ induced dipole-dipole potential. It is the principal attractive interaction in inert gases. The other major contribution to inert gas interactions is the repulsion from the Pauli exclusion principle. These two add to give the Lennard-Jones potential. - inert gas ~ noble gas ~ rare gas