
Pedagogical introduction to equilibrium Green's functions: condensed-matter examples with numerical implementations


Abstract

The Green's function method has applications in several fields in Physics, from classical differential equations to quantum many-body problems. In the quantum context, Green's functions are correlation functions, from which it is possible to extract information from the system under study, such as the density of states, relaxation times and response functions. Despite its power and versatility, it is known as a laborious and sometimes cumbersome method. Here we introduce the equilibrium Green's functions and the equation-of-motion technique, exemplifying the method in discrete lattices of non-interacting electrons. We start with simple models, such as the two-site molecule, the infinite and semi-infinite one-dimensional chains, and the two-dimensional ladder. Numerical implementations are developed via the recursive Green's function, implemented in Julia, an open-source, efficient and easy-to-learn scientific language. We also present a new variation of the surface recursive Green's function method, which can be of interest when simulating simultaneously the properties of surface and bulk.

Keywords
Green's functions; Quantum transport; Low-dimensional physics; Tight-binding; Density of states





1. Introduction

The Green's functions method is a powerful mathematical tool to solve linear differential equations. These functions were named after the English miller, physicist and mathematician George Green (1793–1841) [1-3]. His seminal work "An essay on the application of mathematical analysis to the theories of electricity and magnetism" (1828) [4] developed a theory of partial differential equations with general boundary conditions, introducing the so-called Green's theorem (also known today as Green's second identity, see Footnote 1) and the Green's functions [5-7]. This essay was self-published by Green for private distribution among family and friends, and was later rediscovered by Lord Kelvin, being examined by Sturm, Liouville, Dirichlet, Riemann, Neumann, Maxwell, and others [8]. The Green's functions were born as auxiliary functions for solving boundary-value problems. The latter are differential equations with constraining boundary conditions, which specify the values that the solution or its normal derivative take on the boundary of the domain. Boundary-value problems arise in several problems in physics: heat conduction in solid bodies, described by the diffusion (heat conduction) equation; vibrations in strings and membranes and wave propagation along special geometries, described by the wave equation; and charge distributions on surfaces, described by the Poisson equation ∇²φ = −ρ/ε0 [9-11]. From the Green's functions, a whole theory of partial differential equations arose, paving the way for the development of functional analysis, the branch of mathematics dedicated to infinite-dimensional vector spaces and operators. By the end of the XIX century many boundary-value problems were approached in acoustics, hydrodynamics, thermodynamics and electromagnetism. Before we examine the development of the Green's functions in quantum mechanics, we shall review some of the general properties of a Green's function.

Footnote 1: Today, Green's identities are a set of three vector equations relating the bulk with the boundary of a region on which differential operators act, closely related to Gauss' divergence and Stokes' curl theorems. Green's second identity allows the conversion of a triple integral of laplacians within a volume into a double integral of gradients over its surface boundary:

(1)   ∫_Ω (u ∇²v − v ∇²u) dV = ∮_S (u ∇v − v ∇u) · n̂ dS,

where Ω is a volume bounded by the closed surface S, and n̂ is the outward unit normal to the boundary S. This formula holds for regular functions u and v defined in Ω.

1.1. Classical Green's functions

Formally, a Green's function is a solution of a linear differential equation with a Dirac delta inhomogeneous source (sometimes referred to as a delta or unit pulse) with homogeneous boundary conditions. Let us clarify the emphasized concepts. A differential equation is said to be linear if the function f(x) and all its derivatives f^(n)(x), n = 1, 2, ⋯, appear linearly. There is no product of the function and its derivatives, such as f(x)·f″(x), and no powers of the function or of its derivatives beyond the first power. For example, an ordinary linear differential equation should read:

(2)   a_n(x) f^(n)(x) + ⋯ + a_1(x) f′(x) + a_0(x) f(x) = g(x)

On the other hand, the coefficients a_n(x) are arbitrary differentiable functions. Linearity of the operators is essential for the validity of the superposition principle, which allows the linear combination of solutions.

If a differential equation has a term on the right-hand side (r.h.s.) that does not depend on the function f(x), we classify it as an inhomogeneous differential equation. For example, in Eq. (2), a linear homogeneous differential equation would have g(x) = 0, and an inhomogeneous one would have a non-zero function g(x) on the r.h.s., or a non-zero constant c.

The differential equation we will be concerned with has a special inhomogeneity function, the Dirac delta δ(x − x′). Put simply, this object is defined to be zero when x ≠ x′, and infinite at x = x′:

(3)   δ(x − x′) = 0 for x ≠ x′,   δ(x − x′) → ∞ for x = x′

Rigorously, the Dirac delta is not a function, since that would require it to have a definite value at each point of its domain; it is instead classified as a distribution. Its most important property is

(4)   ∫ f(x) δ(x − x′) dx = f(x′),

where f(x) is any continuous function of x. For other interesting properties of the Dirac delta, please check Refs. [9,10,12].

Lastly, one often needs to impose boundary conditions on the solutions, meaning, conditions on the function or on its derivative at the boundary of the domain. If their values are zero, we call them homogeneous boundary conditions. For example, for a function f(x) with boundary at x = L, homogeneous boundary conditions would correspond to f(x = L) = 0 or f′(x = L) = 0.

Let us now return to the classical Green's functions. To put the mathematical problem in perspective, imagine one would like to solve a linear inhomogeneous partial differential equation, say,

(5)   D f(x) = g(x),

where D is a linear differential operator, f(x) is the desired solution, and g(x) the inhomogeneity source.

The particular solution f(x) can be formally found with the aid of a function G(x, x′):

(6)   f(x) = ∫ G(x, x′) g(x′) dx′,

where the Green's function G(x, x′) is defined as the solution of a differential equation with a delta inhomogeneity:

(7)   D G(x, x′) = δ(x − x′).

To verify this, act with D on both sides of Eq. (6) and make use of the Dirac delta fundamental property, Eq. (4). Note that D acts on the x coordinate, keeping x′ fixed.

One can interpret Eq. (6) by considering the Green's functions as a "building block" for finding the particular solution f(x), since they are solutions to delta-impulse equations. In signal processing fields, the Green's function is often referred to as a response function, connecting a perturbation or "input signal" g(x) to the "output" f(x).

Before turning to applications, we should remark that if one wishes to find the complete general solution to f(x), the solution of the homogeneous equation Dh(x) = 0 must be added to Eq. (6), which is the particular solution. The solution of the homogeneous equation is found by satisfaction of inhomogeneous boundary conditions [9].
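A minimal numerical sketch of Eqs. (5)-(7) may help fix the idea (this example is ours and not part of the article's source codes; the operator, grid and source are arbitrary illustrative choices). We discretize D = d²/dx² on a grid with homogeneous Dirichlet boundary conditions, build the Green's function as the inverse of the matrix operator, and recover the particular solution through the discrete analogue of Eq. (6):

using LinearAlgebra

# Discrete analogue of D f = g for D = d^2/dx^2 on (0, L) with f(0) = f(L) = 0.
N  = 200                      # interior grid points
L  = 1.0
dx = L / (N + 1)
x  = [i * dx for i in 1:N]

# Second-difference matrix: (f[i+1] - 2 f[i] + f[i-1]) / dx^2
D = SymTridiagonal(fill(-2.0 / dx^2, N), fill(1.0 / dx^2, N - 1))

# Discrete Green's function: D * G = I / dx (the delta becomes 1/dx on the grid)
G = inv(Matrix(D)) ./ dx

# Source g(x) and the particular solution f(x) = "integral" of G(x, x') g(x') dx'
g = sin.(pi .* x ./ L)
f = (G * g) .* dx

# Compare with the exact solution of f'' = sin(pi x / L), f(0) = f(L) = 0
f_exact = -(L / pi)^2 .* sin.(pi .* x ./ L)
println("max deviation: ", maximum(abs.(f .- f_exact)))

The same three steps (define the operator, invert it against a delta, convolve with the source) are what the analytical examples below perform in closed form.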

It might be interesting to have an example of how Eq. (6) can work in practice. The equation relating the electric potential φ(r) to a given charge density distribution ρ(r) is Poisson's equation:

(8)   ∇²φ(r) = −ρ(r)/ε0.
The most common boundary condition is requiring that φ(r) goes to zero at infinity.

From Eq. (6), the potential φ(r) can be obtained with the help of a Green's function

(9)   φ(r) = −(1/ε0) ∫ G(r, r′) ρ(r′) d³r′,

where G(r, r′) satisfies the inhomogeneous equation

(10)   ∇²G(r, r′) = δ(r − r′).

The solution to Eq. (10) can be identified physically. By associating the Dirac delta δ(rr′) with a point charge at r′, we can find a corresponding potential. Considering a point charge q = 1 at r′, the electric potential is simply

(11)   φ(r) = 1/(4πε0 |r − r′|).

Removing from (11) the prefactor −1/ε0 that appears in Eq. (9), the appropriate Green's function for the localized charge problem of Eq. (10) is

(12)   G(r, r′) = −1/(4π|r − r′|),

which satisfies homogeneous boundary conditions, since in the limit of |rr′| → ∞, G goes to zero.

Substituting (12) into Eq. (9), we have that for an arbitrary charge density distribution, the solution of Poisson's equation is given by the following integral over space:

(13)   φ(r) = (1/4πε0) ∫ ρ(r′)/|r − r′| d³r′,

which verifies to be a correct result in electrostatics [11].

Although this is a quite simple example, the Green's function technique as presented can be applied to other physical problems described by linear differential equations. During the late half of the XIX century, it became a central tool for solving boundary-value problems. Further examples in vibrations and diffusion phenomena, as well as other ways of constructing Green's functions, can be found in the references [7,9,10,13].

1.2. Quantum Green's functions

By the beginning of the 20th century, Green's functions were generalized to the theory of linear operators; in particular, they were applied to the class of Sturm-Liouville operators [14]. These are second-order linear differential equations that depend linearly on a parameter λ as an eigenvalue problem, d/dx[p(x) dφ/dx] + [q(x) + λ w(x)] φ(x) = 0. The study of the existence of the eigenvalues λ, and of the complete set of eigenfunctions φ, became known as Sturm-Liouville theory. From this set, the Green's functions could now be built as a Fourier-like, or spectral, expansion. As a generalized technique, the Green's functions allowed the conversion of differential problems into integral operator problems [8].

With the emergence of quantum mechanics, functional analysis and the theory of linear operators gained new significance. They are present at the very foundations of quantum mechanics, from Hilbert's vector space to Heisenberg's matrix formulation, and in Schrödinger's continuous wave mechanics (the one-dimensional Schrödinger's equation is one example of a Sturm-Liouville problem).

Schrödinger's equation is a celebrated piece of the quantum puzzle that tormented early twentieth century physicists. One should remark that the Schrödinger equation cannot be rigorously derived from any physical principle; it was postulated from Hamilton-Jacobi analogues [15] describing the propagation of a scalar field, the wave function, using a diffusion equation. As a historical note, Schrödinger even referenced the Green's function in a footnote of one of his 1926 papers [15], citing Cornelius Lanczos' work. Lanczos had tried to develop an integral representation of Born and Jordan's matrix equations [16], finding the Green's function along his formulation.

Shortly after Schrödinger's first papers, Max Born proposed a wave-mechanical model of atomic collisions [17,18], developing the probabilistic interpretation of this wave function. His study was based on the free-particle wave function, a plane wave. In order to find the new scattered wavefunction, Born built a perturbation expansion in the first power of the potential, starting from the free solution. This first order is known today as the first Born approximation, generalized in the Lippmann-Schwinger equation for scattering [19], presented in Quantum Mechanics courses [20]. In its time-dependent form, Schrödinger's equation reads

(14)   iℏ ∂Ψ(r, t)/∂t = [ −(ℏ²/2m) ∇² + V(r, t) ] Ψ(r, t).

Eq. (14) is the quantum nonrelativistic equivalent of Newton's second law, governing instead the time evolution of the wavefunction Ψ(r, t). The right-hand side of Eq. (14) has the Hamiltonian operator, whose expectation value is the total energy. The first term is the kinetic energy, rewritten using the momentum operator p = −iℏ∇, and the second is the external potential. In this formulation, the eigenstates of the Hamiltonian play an important role, since their time evolution is simple to calculate (i.e. they are stationary).

The time-dependent Schrödinger equation is a linear partial differential equation. Also, it is of first order in time, so an initial condition must be specified. Although it is a homogeneous equation, we can rearrange the terms as

(15)   [ iℏ ∂/∂t + (ℏ²/2m) ∇² ] Ψ(r, t) = V(r, t) Ψ(r, t),
in order to treat the potential as a source of inhomogeneity. But note that it is not, since the right-hand side also depends on the function Ψ(r, t). Ultimately, we will need a recursive solution to find Ψ(r, t), or an iterative procedure. This kind of self-consistent solution is achieved by the Lippmann-Schwinger equation for the wave function or a Dyson's equation [19-22] for the Green's function. The basic idea would be to use the free solutions, those in the absence of an external potential, to solve the more general problem, with an external potential.

Therefore Eq.(15) is where Green's functions come into play. Instead of solving Schrödinger's equation for wave functions, one can equivalently look for the Green's function that solves the inhomogeneous problem

(16)   [ iℏ ∂/∂t + (ℏ²/2m) ∇² ] G(r, t; r′, t′) = δ(r − r′) δ(t − t′).

From the theory of Green's functions we already know that a solution similar to Eq. (6) may be written as

(17)   Ψ(r, t) = ∫ d³r′ G(r, t; r′, t′) Ψ(r′, t′).

Note the difference with respect to Eq. (6): here the solution itself enters the integral. The equation above describes the time evolution of the wave function from a given time and position (r′, t′), evolving it to another time and position (r, t). This is why the Green's function is known as the propagator.

In order to give a broader picture of the propagating character of the Green's function, let us rewrite the wave function in terms of the time evolution operator (Footnote 2).

Footnote 2: For a time-independent Hamiltonian and in the Schrödinger picture, the solution to Eq. (14) is |Ψ(t)⟩ = U(t, t′)|Ψ(t′)⟩, with U(t, t′) = e^{−iH(t−t′)/ℏ}. Here we see the time-evolution operator that evolves the wave function Ψ(r, t′) to Ψ(r, t) in infinitesimal time intervals. It has important properties such as unitarity, U†U = 1, which preserves the norm of the wavefunction.

For simplicity, we can represent the wavefunctions as state vectors in the position representation (in Dirac notation, the bra ⟨r| is associated with the spatial basis) as Ψ(r, t) = ⟨r|Ψ(t)⟩. Writing Ψ(t) as the evolution from Ψ(t′), and using the closure relation ∫ d³r′ |r′⟩⟨r′| = 1, we have

(18)   Ψ(r, t) = ∫ d³r′ ⟨r|U(t, t′)|r′⟩ ⟨r′|Ψ(t′)⟩,

which reproduces Eq. (17) if we define (this is not yet our final definition; we develop it here only for pedagogical purposes)

(19)   G(r, t; r′, t′) = ⟨r|U(t, t′)|r′⟩ = ⟨r, t|r′, t′⟩,

where ⟨r, t| ≡ ⟨r|U(t) and |r′, t′⟩ ≡ U†(t′)|r′⟩. Thus, we have associated the Green's function to the probability amplitude of finding the particle in a state ⟨r, t| given that it started at |r′, t′⟩. It is interesting to note that Paul Dirac [23], while attempting to develop a Lagrangian or path-integral formulation of quantum mechanics in the 1930's, found the propagator as the overlap of two functions at different positions and times.

The eigenstates of the Hamiltonian form a complete set, which we write as |n⟩ in the vector notation. Inserting again a completeness relation, Σ_n |n⟩⟨n| = 1,

(20)   G(r, t; r′, t′) = Σ_n ⟨r|e^{−iH(t−t′)/ℏ}|n⟩ ⟨n|r′⟩.

Since H acts on the eigenstate |n⟩, and the projection ⟨r|n⟩ is the eigenfunction φn(r),

(21)   G(r, t; r′, t′) = Σ_n e^{−iε_n(t−t′)/ℏ} ⟨r|n⟩ ⟨n|r′⟩
(22)   G(r, t; r′, t′) = Σ_n e^{−iε_n(t−t′)/ℏ} φ_n(r) φ_n*(r′).

This Green's function satisfies Eq. (16). By Fourier transforming Eq. (22) to the energy or frequency domain, one obtains a spectral form of the Green's function (again, not yet in our final convention):

(23)   G(r, r′; E) = i Σ_n φ_n(r) φ_n*(r′)/(E − ε_n),

which has poles at the eigenenergies. Please note that so far we have inspected the quantum Green's functions as propagators, but we have not constrained the particle to propagate in a certain direction of time, which will be done shortly (Footnote 4).

Footnote 4: To ensure that particles propagate from times t′ < t, we must correct all equations above with a Heaviside function θ(t − t′) and change the analyticity domain, by adding an infinitesimal shift iη in the denominator of Eq. (23). Later we will return to this point.

In the 1950's and 60's the quantum Green's functions were introduced as propagators in quantum field theory by Feynman and Schwinger. Feynman [24,25] transformed Dirac's observations on the quantum propagators into a more rigorous formalism. He developed the path-integral formalism, interpreting Eq. (17) as the sum of the probabilities of the particle taking different individual paths. In addition, Feynman invented a graphical form of representing terms of a perturbation expansion of a scattering formalism, the Feynman diagrams.

At this point we need to switch from the so-called first quantization, from Schrödinger's wave mechanics, to quantum fields, using the technique of "second quantization". Stated very briefly, the Schrödinger equation describes the undulatory behavior of matter, such as electrons, by means of wave functions [26]. But other wave phenomena were shown to behave as particles, e.g., phonons (quantized lattice vibrations) or photons (excitations of the electromagnetic field). The second quantization language treats particles and waves as a quantum field. It has several advantages over "first quantization", being more adequate for many-particle physics.

To clarify the definition of a field propagator, let us consider a thought experiment. Imagine that a particle is created in the ground-state of an interacting system (Footnote 5). That particle probes the system, which has its own complex interactions, possibly even causing excitations, but at the end it is annihilated and the system returns to the ground state. In quantum field theory one does not deal with wave functions, but instead with one special state, the vacuum |0⟩, and the creation (c_i†) and annihilation (c_i) operators, where we already assume fermionic fields. The so-called occupation number representation specifies the number of identical particles in each quantum state.

Footnote 5: Theoreticians often make this distinction between interacting and noninteracting systems. This means that the potential V in the Hamiltonian will be present (e.g. due to particle scattering), or not, so that we return to the simple free-particle system (V = 0). In practice, the solvable system will be a building block for the more complex ones.

Feynman introduced a new quantum field propagator. He accounted for the propagation of virtual particles and antiparticles, which propagate forward and backward in space-time, inserting a Wick's time-ordering symbol T, which guarantees causal time orderings (we will detail the properties of this operator in the following section). The Feynman propagator definition then reads [26]:

(24)   G(r, t; r′, t′) = ⟨T ψ(r, t) ψ†(r′, t′)⟩,

where the expectation values are evaluated over the interacting ground state of the system (later we will generalize to a quantum ensemble of states). The propagator consists of two parts. In the first, a particle is created by ψ†(r′, t′) at position r′ and time t′ < t and later it is destroyed at the position r and time t. In the second part, an antiparticle is created at r and time t < t′ and propagates to the position r′, where it is annihilated at time t′. At this point, we have almost arrived at the many-particle Green's function definition that we will adopt. The differences are that expectation values can be evaluated in the ground-state or in an ensemble, and we insert a −i factor in our Green's function, in order to avoid the imaginary factor that appeared in Eq. (23).

Julian Schwinger realized the power of Green's functions in quantum field theory. In his very interesting lecture "The Greening of Quantum Field Theory - George and I" [27], Schwinger reviewed Green's idea and how it came to post-war developments of quantum field theory, finally reaching condensed-matter physicists. One can find several seminal works on Green's functions in the condensed-matter literature. To give some examples, Martin and Schwinger applied quantum field theory in many-particle physics [28], introducing the "lesser" Green's functions to evaluate particle currents and spectral amplitudes, and exploiting the equation-of-motion technique with approximate two-particle Green's functions. Kadanoff and Baym developed the thermodynamic many-particle Green's function using a grand-canonical ensemble average, with periodic boundary conditions along an imaginary time axis [29,30], presenting conserving approximations and their diagrammatics. Within perturbation theory, the Green's functions can be expanded in series and acquire a recursive form, known as Dyson's equations [21,22].

Due to its versatility, the Green's function method is quite popular in many-particle physics. It has also been generalized to particle scattering, far from equilibrium physics, finite temperatures, statistical mechanics, and other fields. These propagators are naturally correlation functions, connecting different positions and times, e.g. G(r1, t1; r2, t2).

Nevertheless, due to the arid formalism presented in most of the textbooks, the method still scares young students. In view of this, here we aim to provide a pedagogical introduction to the Green's functions with practical examples. We will focus on an introductory level of noninteracting condensed-matter models, i.e., without electron-electron Coulomb interaction. We will apply the Green's functions to quantum equilibrium properties of atomic lattices, described by Hamiltonians in a localized ("tight-binding") basis or in an occupation (Fock) basis, as usually formulated in many-particle physics. The fundamentals and definitions can be found, for instance, in Refs. [21,31-33]. For fermions, the operator ordering is of utmost importance and their algebra should be revised. Here we will only add some remarks throughout the text.

1.3. Electron Green's function

We will start with formal definitions of the electron Green's function, our object of study. The single particle electron Green's function is defined as the statistical expectation value of the product of fermion operators at different positions i and j and different times t and t′. For instance, the so-called "causal" Green's function reads

(25)   G^c_{ij}(t, t′) = −i ⟨T c_i(t) c_j†(t′)⟩,

where c_j† creates an electron at the j-th site at time t′ and c_i annihilates an electron at the i-th site at time t. We have already introduced this causal Green's function in Eq. (24). The differences are the imaginary factor −i, which Mattuck describes as "decorative" [34], and the fermionic creation and annihilation operators, expressed in a discrete basis. In this paper we consider atomic units in which we set ℏ = e = m = 1, such that the usual prefactor −i/ℏ is simplified. In Eq. (25) we have the time-ordering operator,

(26)   T c_i(t) c_j†(t′) = θ(t − t′) c_i(t) c_j†(t′) − θ(t′ − t) c_j†(t′) c_i(t),

which guarantees causal orderings. This is due to the properties of the Heaviside function θ(x) (Footnote 6). Please verify that, in each term of Eq. (26), the fermionic operator that appears on the left always acts at a time later than the one on the right. This rule is known as "later to the left". Since we are dealing with electron operators, we should recall that they satisfy the anti-commutation relations {c_i, c_j} = 0, {c_i†, c_j†} = 0 and {c_i, c_j†} = δ_ij, where the anti-commutator is defined as {A, B} = AB + BA, and the Kronecker delta δ_ij assumes the values 0 if i ≠ j, and 1 if i = j.

Footnote 6: The Heaviside step function θ(x) is defined by

(27)   θ(x) = 1 for x > 0,   θ(x) = 0 for x < 0.

It has a jump discontinuity at x = 0, for which the value usually taken is 1/2. The derivative of θ(x) is the Dirac delta δ(x).
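For readers who like to see the operator algebra at work, the anti-commutation relations can be checked numerically with an explicit matrix representation of the fermionic operators (a Jordan-Wigner-type construction). This sketch is ours and is not used anywhere in the article; it only verifies the relations stated above:

using LinearAlgebra

# Matrix representation of fermionic operators for N sites.
# Single-site basis ordered as (|0>, |1>): a annihilates, a' creates.
a  = [0.0 1.0; 0.0 0.0]
Z  = [1.0 0.0; 0.0 -1.0]    # sign string that keeps track of fermionic ordering
I2 = Matrix(1.0I, 2, 2)

# Annihilation operator c_i acting on the N-site Fock space
c_op(i, N) = kron([k < i ? Z : (k == i ? a : I2) for k in 1:N]...)

anticomm(A, B) = A * B + B * A

N = 3
c = [c_op(i, N) for i in 1:N]

# Check {c_i, c_j} = 0 and {c_i, c_j^dagger} = delta_ij
for i in 1:N, j in 1:N
    @assert norm(anticomm(c[i], c[j])) < 1e-12
    @assert norm(anticomm(c[i], c[j]') - (i == j) * I) < 1e-12
end
println("anti-commutation relations verified for N = ", N, " sites")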

Besides the causal Green's function defined above, we introduce two other Green's functions from which many important physical quantities are more easily extracted. For example, for times t > t′ and t < t′, the retarded and advanced Green's functions are defined as

(28)   G^r_{ij}(t, t′) = −i θ(t − t′) ⟨{c_i(t), c_j†(t′)}⟩,
(29)   G^a_{ij}(t, t′) = +i θ(t′ − t) ⟨{c_i(t), c_j†(t′)}⟩,

where G^r is non-zero only for t > t′, such that we can calculate the response of the system after it has been perturbed. This is why it is called the retarded Green's function. The advanced Green's function is the adjoint of the retarded one, [G^r]† = G^a. This means that, having determined one of them, we can immediately calculate the other.

It is important to note that the Green's functions carry information about the system excitations, since their time evolution is ruled by the Hamiltonian of the system. In the Heisenberg picture, operators evolve in time via Heisenberg equation, where the Hamiltonian is present. For an arbitrary operator Ô, it reads

(30)   dÔ(t)/dt = i [H, Ô(t)] + ∂Ô(t)/∂t,

where the last term accounts for possible explicit time dependence of the operator.

1.4. Spectral representation

So far we have presented the Green's function in the time domain. But very often it is convenient to represent it in the energy domain, for example, when our system is at equilibrium or when the Hamiltonian is time-independent (Footnote 7). For such cases the Green's function will depend only on the time difference t − t′ and we can perform a Fourier transform.

Footnote 7: If there is time translational symmetry, it is possible to describe the system via time differences t − t′ and perform a Fourier transform to represent the Green's function in the energy domain. Similarly, in the presence of spatial translational symmetry, the representation in momentum space is also convenient.

To illustrate this, let us first consider the spectral representation in the special case of a free-particle Hamiltonian, which can be written as

(31)   H = Σ_m ε_m c_m† c_m,

where c_m† (c_m) creates (annihilates) an electron in the m-th single-particle eigenstate of the system, with energy ε_m.

In the Heisenberg picture, using Eq. (30), the equations of motion of our operators are

(32)   dc_n(t)/dt = i [H, c_n(t)] = −i ε_n c_n(t),
(33)   dc_n†(t)/dt = i [H, c_n†(t)] = +i ε_n c_n†(t).

Therefore, the creation and the annihilation operators evolve as c_n(t) = e^{−iε_n t} c_n and c_n†(t) = e^{iε_n t} c_n†. From these expressions, the retarded and advanced Green's functions of Eqs. (28) and (29) for the free-particle case are simple functions of the time difference t − t′:

(34)   G^r_{nn′}(t − t′) = −i θ(t − t′) e^{−iε_n(t−t′)} δ_{nn′},
(35)   G^a_{nn′}(t − t′) = +i θ(t′ − t) e^{−iε_n(t−t′)} δ_{nn′},

where we have used ⟨{c_n, c_{n′}†}⟩ = δ_{nn′}. Note that the Green's function is diagonal in the energy basis, which does not happen in the general interacting case, where the time evolution of the single-particle operator involves different states. Here we assumed that the particle is in an eigenstate of a noninteracting Hamiltonian.

To write the spectral representations of (34) and (35), let us consider the integral representation of the Heaviside step function:

(36)   θ(t − t′) = (i/2π) ∫_{−∞}^{+∞} dω e^{−iω(t−t′)}/(ω + iη),

where η → 0+ is a positive infinitesimal real number. Inserting this expression in (34), we obtain

(37)   G^r_{nn′}(t − t′) = (δ_{nn′}/2π) ∫_{−∞}^{+∞} dω e^{−i(ω+ε_n)(t−t′)}/(ω + iη).

By performing a change of variables ω + εnω, we have

(38)   G^r_{nn′}(t − t′) = (δ_{nn′}/2π) ∫_{−∞}^{+∞} dω e^{−iω(t−t′)}/(ω − ε_n + iη).

Here we define the Fourier transform of the retarded Green's function as

(39)   G^r(t − t′) = (1/2π) ∫_{−∞}^{+∞} dω e^{−iω(t−t′)} G^r(ω),
(40)   G^r(ω) = ∫_{−∞}^{+∞} d(t − t′) e^{iω(t−t′)} G^r(t − t′).

Since Eq. (38) has exactly the form of the Fourier transform of G^r(ω), we can identify the latter in the integrand of Eq. (38),

(41)   G^r_{nn′}(ω) = δ_{nn′}/(ω − ε_n + iη).

Analogously, we obtain for the noninteracting advanced Green's function,

(42)   G^a_{nn′}(ω) = δ_{nn′}/(ω − ε_n − iη).

The Fourier transforms of the retarded/advanced Green's functions have different analyticity properties. This is a consequence of causality, expressed in the step functions of Eq. (28) and (29). The retarded(advanced) Green's function is analytic in the upper(lower) half of the complex ω plane and has poles in the lower(upper) half plane, corresponding to the eigenenergies in this simplified example, and single-particle excitations in the more general case.

Converting to a site basis, G_{ij} = Σ_n ⟨i|n⟩⟨n|j⟩ G_{nn}, we obtain

(43)   G^{r/a}_{ij}(ω) = Σ_n ⟨i|n⟩⟨n|j⟩ / (ω − ε_n ± iη).

There are many physical properties hidden in the Green's function. At this point we can extract at least two important properties of the retarded and advanced Green's functions:

  1. For the noninteracting Hamiltonian, the poles of the Green's function correspond exactly to the eigenenergies. This can be immediately noticed since εn was assumed to be the eigenenergy of the free particle system, governing the time evolution of the creation and annihilation operators. This property refers only to the simplified case of a noninteracting Hamiltonian.

  2. The imaginary part of the diagonal (j = i) retarded or advanced Green's function (one should have in mind that the imaginary part of a matrix A is Im(A) = (A − A†)/(2i); we thank K. Pototzky for this remark) provides the local density of states of the system:

    (44)   ρ_i(ω) = ∓(1/π) Im[ G^{r/a}_{ii}(ω) ].

    Here we used the Cauchy relation: limits of improper integrals, such as those of G^{r/a}, e.g. Eq. (38), with poles in different half-planes, can be obtained by the principal value of

    (45)   lim_{η→0⁺} 1/(x ± iη) = P(1/x) ∓ iπ δ(x).

    The imaginary part of the diagonal retarded/advanced Green's function then recovers the local density of states of a discrete spectrum, ρ_i(ω) = Σ_n |⟨i|n⟩|² δ(ω − ε_n); a numerical sketch of this recipe is given below.
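The sketch below (ours, not one of the article's source codes) turns Eqs. (43)-(44) into a few lines of Julia: for any finite noninteracting Hamiltonian, G^r(ω) = [(ω + iη)𝟙 − H]^{-1} by direct matrix inversion, and the local density of states follows from the imaginary part of its diagonal. The example Hamiltonian, a 5-site open chain with ε0 = 0 and t = 1, and the value of η are illustrative assumptions:

using LinearAlgebra

# Local density of states from the retarded Green's function, Eqs. (43)-(44):
# G^r(w) = [(w + i*eta) I - H]^{-1},  rho_i(w) = -Im G^r_ii(w) / pi.
N, eps0, t = 5, 0.0, 1.0
H = SymTridiagonal(fill(eps0, N), fill(t, N - 1))   # 5-site open tight-binding chain

eta = 1e-2
ws  = range(-3, 3; length = 601)

# rho[k, i] = local DOS at site i for energy ws[k]
rho = zeros(length(ws), N)
for (k, w) in enumerate(ws)
    Gr = inv((w + im * eta) * I - Matrix(H))
    rho[k, :] = -imag.(diag(Gr)) ./ pi
end

# The peaks of rho sit at the eigenvalues of H (here eps0 + 2t*cos(m*pi/(N+1)))
println("eigenvalues: ", round.(eigvals(Matrix(H)); digits = 3))

With a small η each eigenvalue appears as a narrow Lorentzian peak of the local density of states, exactly as property 1 above anticipates.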

To generalize property 1, let us consider the expansion of the operators in the complete basis of a generic Hamiltonian. It is possible to show that the poles of the retarded/advanced Green's function contain information about the spectrum of the single-particle excitations (i.e., a single electron excitation) of the system. To show this, let H be the Hamiltonian of the interacting many-body system. The Schrödinger equation is H|n⟩ = εn|n⟩, where |n⟩ and εn are the many-body eigenstates and eigenenergies, respectively. Note that |n⟩ forms a complete basis with closure relation

(46)   Σ_n |n⟩⟨n| = 1.

Within the Heisenberg picture, a given operator A(t) evolves from t′ to t as A(t) = e^{iH(t−t′)} A(t′) e^{−iH(t−t′)}. If H is time-independent, the evolution depends only on the difference t − t′. The Green's function (28) becomes

(47)   G^r_{ij}(t − t′) = −i θ(t − t′) ⟨{c_i(t), c_j†(t′)}⟩
       = −i θ(t − t′) (1/Z) Σ_{n,m} e^{−βε_n} [ e^{i(ε_n−ε_m)(t−t′)} ⟨n|c_i|m⟩⟨m|c_j†|n⟩ + e^{−i(ε_n−ε_m)(t−t′)} ⟨n|c_j†|m⟩⟨m|c_i|n⟩ ].

In the lines above we have performed the quantum statistical average ⟨A⟩ = Z−1Tr[e−βH A], where Z is the partition function and β is proportional to the inverse of the temperature. For the diagonal Green's function j = i we obtain,

(48)   G^r_{ii}(t − t′) = −i θ(t − t′) (1/Z) Σ_{n,m} e^{−βε_n} [ e^{i(ε_n−ε_m)(t−t′)} |⟨m|c_i†|n⟩|² + e^{−i(ε_n−ε_m)(t−t′)} |⟨m|c_i|n⟩|² ].

We can now set t′ = 0 and take the Fourier transform, as we did for the noninteracting case:

(49)   G^r_{ii}(ω) = (1/Z) Σ_{n,m} ( e^{−βε_n} + e^{−βε_m} ) |⟨m|c_i†|n⟩|² / (ω − (ε_m − ε_n) + iη).

This expression is known as the Lehmann or spectral representation of the Green's functions [21]. Following property number 2 of the retarded/advanced Green's functions shown above, from the diagonal Green's function we can calculate the local density of states:

(50)   ρ_i(ω) = −(1/π) Im[ G^r_{ii}(ω) ] = (1/Z) Σ_{n,m} ( e^{−βε_n} + e^{−βε_m} ) |⟨m|c_i†|n⟩|² δ(ω − (ε_m − ε_n)).

It is possible to show that Eq. (44) is recovered when considering a noninteracting Hamiltonian. In this case the Hamiltonian is separable, and the many-particle eigenstates are an antisymmetrized product of single-particle states. The expectation value in (50) will connect states m that have one additional electron in the site i compared to state n, thus ε_m = ε_n + ε_i, where ε_i is the energy of an additional bare electron at site i. Careful manipulation of (50) and the partition function Z results in a local density of states independent of the temperature, with poles at the single-particle energies ε_i.
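This statement can be checked directly with a small many-body calculation. The sketch below is ours (not part of the article): it builds the 4-state Fock space of a noninteracting two-site Hamiltonian with illustrative parameters ε0 = 0 and a real hopping t = 1, evaluates the Lehmann sum in the combined form of Eqs. (49)-(50), and shows that the spectral weight sits at ε0 ± |t| with the same height for two different temperatures:

using LinearAlgebra

# Lehmann representation for a noninteracting two-site Hamiltonian,
# H = eps0 (n1 + n2) + t c1' c2 + t c2' c1 (t real), in the 4-state Fock space.
a  = [0.0 1.0; 0.0 0.0]          # single-site annihilation, basis (|0>, |1>)
Z  = [1.0 0.0; 0.0 -1.0]         # fermionic sign string
I2 = Matrix(1.0I, 2, 2)
c1 = kron(a, I2)
c2 = kron(Z, a)

eps0, t = 0.0, 1.0
H = eps0 * (c1' * c1 + c2' * c2) + t * (c1' * c2 + c2' * c1)

E, V = eigen(Symmetric(H))       # many-body eigenenergies and eigenstates
M = V' * c1' * V                 # matrix elements <m| c1^dagger |n>

# Retarded G_11(w) in the combined Lehmann form, and the local DOS of site 1
function ldos(w, beta; eta = 1e-2)
    Zpart = sum(exp.(-beta .* E))
    G = sum((exp(-beta * E[n]) + exp(-beta * E[m])) * abs2(M[m, n]) /
            (w - (E[m] - E[n]) + im * eta) for n in 1:4, m in 1:4) / Zpart
    return -imag(G) / pi
end

for beta in (1.0, 10.0)          # two temperatures: identical, temperature-independent peaks
    println("beta = ", beta,
            ":  rho(eps0 + |t|) = ", round(ldos(eps0 + abs(t), beta); digits = 3),
            ",  rho(eps0 - |t|) = ", round(ldos(eps0 - abs(t), beta); digits = 3))
end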

Among the many interesting properties of the interacting Green's function (48) we can also emphasize that:

  1. The poles of the interacting Green's function are exactly at the many-body excitations εmεn of the system;

  2. In contrast with the noninteracting case, both the Green's function (48) and the local density of states depend on the temperature. This is characteristic of interacting systems.

Although we have presented a more robust formalism, in the examples treated in this article, we will deal only with noninteracting Hamiltonians, neglecting Coulomb interactions, and our local density of states will map the spectra of each Hamiltonian.

2. The equation of motion technique

One way of obtaining the Green's function is to determine its time evolution via the equation of motion (EOM) technique. Using the Heaviside function θ(t − t′) and the Heisenberg equation of motion for the operator c_i(t), we differentiate the retarded Green's function (28) with respect to time:

(51)   i ∂G^r_{ij}(t, t′)/∂t = δ(t − t′) ⟨{c_i(t), c_j†(t′)}⟩ − i θ(t − t′) ⟨{[c_i(t), H], c_j†(t′)}⟩
       = δ(t − t′) δ_{ij} − i θ(t − t′) ⟨{[c_i(t), H], c_j†(t′)}⟩.

In the last line, on the right-hand side (rhs) of Eq. (51), there is one propagator that still needs to be determined, which depends on the commutator of the operator c_i with the Hamiltonian. We first note that this result is not restricted to G^r but rather is general: the equation of motion will couple the original Green's function to a new one. In addition, its dependence on the Hamiltonian will influence the dynamics.

From now on, we shall use more frequently the spectral representation for the Green's functions. Therefore, we present a simplified notation for the retarded Green's function in the energy domain, adapted from Zubarev [35[35] D.N. Zubarev, Soviet Physics Uspekhi 3, 320 (1960).],

(52)   G^r_{ij}(ω) = ⟪c_i; c_j†⟫_ω.

Performing the Fourier transform defined in Eq. (40) on Eq. (51), we obtain an ω factor on the left coming from the time derivative. Since the Fourier transform of the δ-function is unity,

(53)   ∫ d(t − t′) e^{iω(t−t′)} δ(t − t′) = 1,

the spectral representation of the equation of motion (51) acquires the form

(54)   ω ⟪c_i; c_j†⟫_ω = δ_{ij} + ⟪[c_i, H]; c_j†⟫_ω.

We stress that the presence of the commutator on the rhs of Eqs. (51) and (54) tells us that the dynamics of the Green's function is fully determined by the Hamiltonian of the system.

2.1. Simple example: the non-interacting linear chain

Let us consider a linear chain described by the non-interacting Hamiltonian containing a single orbital (energy) per site and a kinetic term that connects all nearest-neighbor sites via a hopping parameter t

(55)   H = Σ_l ε_l c_l† c_l + Σ_l ( t_{l+1,l} c_l† c_{l+1} + t_{l,l+1} c_{l+1}† c_l ).

The first sum in Eq. (55) corresponds to a local external potential that is diagonal in the basis of sites. The second term corresponds to the kinetic energy, describing the destruction of a particle at site l + 1 and the creation of another particle at site l with probability amplitude t_{l+1,l}. The third term describes the reverse process. The Hamiltonian is hermitian, as it represents an observable, namely the total energy of the system. To assure hermiticity, t_{l,l+1} = t*_{l+1,l}.

To calculate the commutator [c_i, H] we simply use the (anti)commutation rules listed in Sec. 1.3 (one may find it useful to apply [AB, C] = A{B, C} − {A, C}B and [A, B] = −[B, A]), from which we obtain

(56)   [ c_i, Σ_l ε_l c_l† c_l ] = ε_i c_i,
(57)   [ c_i, Σ_l ( t_{l+1,l} c_l† c_{l+1} + t_{l,l+1} c_{l+1}† c_l ) ] = t_{i+1,i} c_{i+1} + t_{i−1,i} c_{i−1}.

We now introduce these commutators into the equations of motion (EOMs) (51) or (54). In the time domain the EOM has the form

(58)   i ∂G_{ij}(t, t′)/∂t = δ(t − t′) δ_{ij} + ε_i G_{ij}(t, t′) + t_{i+1,i} G_{i+1,j}(t, t′) + t_{i−1,i} G_{i−1,j}(t, t′),

while in the energy domain, see Eq. (54), we have

(59)   (ω − ε_i) G_{ij}(ω) = δ_{ij} + t_{i+1,i} G_{i+1,j}(ω) + t_{i−1,i} G_{i−1,j}(ω),

where the propagator G_{ij} couples to other propagators through first-neighbor hopping. In this work we will consider only Hamiltonians that couple nearest neighbors, in different geometries. As the reader becomes familiar with the technique, its operation and usage become clearer.

It is important to emphasize that the local potential and the kinetic energy are single-particle operators and do not produce many-particle Green's functions. In a more general case, where the Hamiltonian has two-particle operators, i.e., a product of four operators, it will generate multi-particle Green's functions. The resulting system of coupled Green's functions is, a priori, infinite, but for practical purposes it is truncated at some level. Despite their importance in condensed matter physics, many-particle Hamiltonians are outside the scope of this work, but can be found elsewhere, e.g. Refs. [21] and [36] and references therein. In the examples treated here, Hamiltonians are noninteracting and we can find exact solutions (at least numerically) for the Green's functions. Even for noninteracting systems, few examples grant an analytical expression for the Green's function. For the others we can at least obtain exact numerical solutions. Indeed, numerical solutions are the main motivation of this work.

2.2. Two-site chain: the hydrogen molecule

The simplest finite lattice has only two sites, see Fig. 1(a). Before deriving an exact expression for the Green's functions of this system, let us review its relevance in quantum chemistry as a prototype of the molecular bond between two hydrogen nuclei. In this model, each atom has its s-type orbital localized around its H nucleus with energy ε0, shown in Fig. 1(b). The proximity of the two atoms allows for the hybridization of their individual orbitals with overlap matrix element (hopping) t. This coupled system has two solutions, two molecular orbitals with even and odd symmetry with respect to spatial inversion (Footnote 14), known as bonding and anti-bonding states. They have energies ε0 ∓ |t|, illustrated in the energy diagram of Fig. 1(c).

Footnote 14: We should notice that we fully neglect spin-orbit contributions in the Hamiltonian. Thus, in this problem, spatial degrees of freedom are decoupled from spin, since neither the kinetic energy nor the local potential couples to the spin of the particles.

Figure 1
(a) Finite chain with two sites and overlap matrix elements t and t*. (b) The two-site system is a prototype model in chemistry, where each site is pictured as a hydrogen nucleus with a single s-orbital localized around it with energy ε0. (c) Energy level diagram, where we see the formation of two molecular orbitals. The presence of hybridization generates an even ground-state known as the bonding state, and the excited anti-bonding state, which has a node in the spatial wavefunction. Figure adapted from Ref. [37].

For the present case, with N = 2, the Hamiltonian (55) reads

(60)   H = ε0 (n_1 + n_2) + t c_1† c_2 + t* c_2† c_1,

where we define the local energy ε0, the number operator n_l = c_l† c_l, and the hopping matrix element t_{21} = t. In this problem we can distinguish the Hamiltonian for the two isolated sites, h0, and a perturbation (inter-site coupling) V. This perturbative perspective allows us to write a Dyson equation for the Green's function of the system, as we will develop below. The matrix representing the Hamiltonian (60) in the local orbital basis {1, 2} acquires the form

(61)   H = h0 + V = ⎡ ε0   0 ⎤ + ⎡ 0   t ⎤
                    ⎣ 0   ε0 ⎦   ⎣ t*  0 ⎦.

The energies of the molecular orbitals ε0 ∓|t| are easily obtained by diagonalizing the Hamiltonian above.

Returning to the explicit calculation of the Green's functions, we see that the local Green's function for the first site, G^r_{11}, is coupled to the non-local Green's function (propagator) G^r_{21}, introduced by the commutators indicated in Eqs. (51) and (59). In the time domain we obtain the following equations of motion (EOMs),

(62)   i ∂G^r_{11}(t, t′)/∂t = δ(t − t′) + ε0 G^r_{11}(t, t′) + t G^r_{21}(t, t′),
(63)   i ∂G^r_{21}(t, t′)/∂t = ε0 G^r_{21}(t, t′) + t* G^r_{11}(t, t′),

while in energy domain we have,

(64)   (ω − ε0) G^r_{11}(ω) = 1 + t G^r_{21}(ω),
(65)   (ω − ε0) G^r_{21}(ω) = t* G^r_{11}(ω).

From the equations above we see that it is useful to introduce the undressed local Green's function of the isolated sites (which can be obtained by setting t = 0 in the equations above),

(66)   g^r(ω) = 1/(ω − ε0 + iη),

where we use the lowercase g to refer to the Green's function of an isolated site. This function, which we name the undressed Green's function, is diagonal in the isolated-site basis, similarly to the unperturbed Hamiltonian. For the hydrogen molecule [Fig. 1(a)], the dressed Green's function exhibits non-diagonal terms due to the couplings. In matrix form, the undressed and dressed Green's functions read

(67)   g^r = ⎡ g^r   0  ⎤ ,    G^r = ⎡ G^r_{11}  G^r_{12} ⎤
             ⎣ 0    g^r ⎦            ⎣ G^r_{21}  G^r_{22} ⎦,

where, by inversion symmetry around the center of mass of the molecule, we can write G^r_{22} = G^r_{11}.

In terms of the undressed Green's function (66), we obtain the coupled system of equations

(68)   G^r_{11} = g^r + g^r t G^r_{21},
(69)   G^r_{21} = g^r t* G^r_{11}.

These linear equations are rewritten more compactly in a matrix notation, i.e., in terms of Eq. (67),

(70)   G^r = g^r + g^r V G^r,

where the coupling potential V was defined in Eq. (61). In this form, the dressed Green's function G^r is obtained by isolating it as

(71)   G^r = [ (g^r)^{−1} − V ]^{−1}.

To find the explicit expression for the local site Green's function we can eliminate the non-diagonal propagator by replacing Eq. (69) into Eq. (68), or equivalently, (65) into (64):

(72)   G^r_{11}(ω) = 1/( ω − ε0 + iη − |t|² g^r(ω) ).

In the last term of (72), g^r(ω) can contribute with a real and an imaginary part in the denominator. This means that there can be a change of the position of the resonance energy ε0 and a broadening of the corresponding peak. Since g^r(ω) is the function of an isolated site, its imaginary part is just a δ-like function, resulting in no effective broadening. In Fig. 2 we plot the density of states, which is proportional to Im[G^r_{11}(ω)] via Eq. (44). The broadening of the peaks was artificially increased with η = 0.01 for visualization. Thus the final effect of the tunneling between the two sites on site 1 is a change of the local energy ε0 to ε0 ± |t|. More generally, the coupling of a site to another structure causes a shift of the resonance to a new energy and a broadening Γ of the peak.

Figure 2
Density of states of the first site in the hydrogen molecule. We have set a large η = 0.01 for visualization of the broadening of the peaks.

In addition, Eq. (72) can be rewritten as a sum of partial fractions,

(73)   G^r_{11}(ω) = (1/2) [ 1/(ω − ε0 − |t| + iη) + 1/(ω − ε0 + |t| + iη) ],

where we identify the two eigenvalues of the molecule, shown in Fig. 1(c). As discussed in Sec. 1.4, the poles of the noninteracting Green's function correspond exactly to the eigenenergies, and the imaginary part leads to the density of states, shown in Fig. 2.
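A short sketch (ours, not one of the article's source codes) evaluates Eq. (72) on an energy grid and locates the two resonances; the parameters ε0 = 0, t = 1 and the η = 0.01 of Fig. 2 are illustrative assumptions:

# Density of states of site 1 of the two-site molecule, Eqs. (66), (72) and (44):
# g(w) = 1/(w - eps0 + i*eta),  G11(w) = 1/(w - eps0 + i*eta - |t|^2 g(w)).
eps0, t, eta = 0.0, 1.0, 1e-2
ws = range(-2, 2; length = 2001)

g    = 1 ./ (ws .- eps0 .+ im * eta)                 # undressed (isolated-site) Green's function
G11  = 1 ./ (ws .- eps0 .+ im * eta .- abs2(t) .* g)
rho1 = -imag.(G11) ./ pi                             # local DOS of site 1, Eq. (44)

# The two peaks sit at the bonding/anti-bonding energies eps0 - |t| and eps0 + |t|
ineg = argmax(rho1 .* (ws .< eps0))
ipos = argmax(rho1 .* (ws .> eps0))
println("peaks near w = ", ws[ineg], " and ", ws[ipos])

Plotting rho1 against ws with any plotting package reproduces the two Lorentzian peaks of Fig. 2.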

It is important to mention that, within the perturbative approach, the Green's function of the system can be obtained by a recursive relation called Dyson equation:

(74)   G(ω) = g(ω) + g(ω) Σ(ω) G(ω),

where G and g are the dressed and undressed (or bare) Green's functions. In writing (74) we assumed that our problem allows a perturbative approach and that we can encapsulate the irreducible diagrams due to many-particle interactions in an operator called the self-energy Σ(ω). The self-energy is an energy-dependent operator that accounts for the effects of self-consistent interactions, the dynamic, i.e., energy-dependent, renormalization of the single-particle states. This renormalization will change the position of the level and its width. The broadening is frequently related to the inverse of the lifetime of the dressed particle, the quasiparticle. For interacting problems and more complex structures, the determination of a consistent self-energy is a challenging problem [33,38]. In our example, see Eq. (70), V has a simple structure and the coupling t is a constant, thus interactions and additional complications in the Hamiltonian are not yet present.
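To make Eq. (74) concrete in the simplest possible setting, note that for site 1 of the molecule the effect of site 2 can be read off from Eq. (72) as a self-energy Σ(ω) = |t|² g^r(ω). The sketch below (ours, with illustrative parameters) iterates Dyson's equation from the bare g and compares with the closed form; the geometric series only converges for |ω − ε0| > |t|, while the closed form of Eq. (72) holds everywhere:

# Dyson's equation (74) for site 1 of the two-site molecule, with the self-energy
# Sigma(w) = |t|^2 g(w) read off from Eq. (72).
eps0, t, eta = 0.0, 1.0, 1e-2
w = 2.5 + im * eta                      # a single energy point, outside the "band" so the series converges

g     = 1 / (w - eps0)                  # undressed Green's function, Eq. (66)
Sigma = abs2(t) * g                     # self-energy of site 1 due to the coupling to site 2

function dyson_iterate(g, Sigma; n = 200)
    G = g                               # start the iteration from the bare propagator
    for _ in 1:n
        G = g + g * Sigma * G           # Eq. (74)
    end
    return G
end

G_iter  = dyson_iterate(g, Sigma)
G_exact = 1 / (w - eps0 - Sigma)        # Eq. (72)
println("iterated: ", G_iter, "   closed form: ", G_exact)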

In the next examples we will practice the equations of motion analytically and later numerically, for extended linear lattices.

2.3. Semi-infinite linear chain

An interesting example that provides an analytical closed solution of the equations of motion is the semi-infinite linear chain, shown in Fig. 3. This extended lattice can be considered a simple model of a crystalline solid or a semi-infinite electrode in a junction.

Figure 3
One-dimensional semi-infinite chain of atomic sites.

Note that the infinite number of sites prohibits direct diagonalization of the Hamiltonian or the resolvent operator, and the application of Eq. (59) leads to an infinite hierarchy of propagators, with an infinite continued-fraction structure. Already in the early days of computational physics, recursive techniques in tight-binding lattices were recognized as an efficient tool for the study of solids [39]. For instance, the workhorse of quantum transport, the "surface Green's function" method approached in Sec. 3.1, plays an essential role in the simulation of dynamic properties of materials.

The decimation technique is a very useful tool for the recursive procedure. Basically, it is a strategy to approximate the solution of an infinite system starting from a finite one. This technique relies on finding a change of variables that will bring the coupled equations of motion into the same form as a well-known result. For instance, suppose we could add many sites to the hydrogen molecule, always renormalizing the Green's functions in a way to recover an effective site. Then one would have an effective hydrogen-like molecule, as illustrated in Fig. 4 (note that the isolated sites are not identical). Here we assume that we have already encapsulated a large number of sites into this effective site. In the asymptotic limit, this effective site gives the same answer as a semi-infinite lattice.

Figure 4
Effective hydrogen molecule to evaluate the Green's function of the semi-infinite chain.
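The effective-site idea of Fig. 4 can be illustrated with a naive fixed-point iteration (this is our own sketch with illustrative parameters; it is not the decimation algorithm of Sec. 3, implemented in the article's source code 2): each step absorbs one more site into the effective surface Green's function, and the value stabilizes as the chain grows towards the semi-infinite limit derived analytically below.

# Naive illustration of the effective site of Fig. 4: repeatedly absorb one more
# site into the surface Green's function, gtilde <- 1/(w - eps0 + i*eta - |t|^2 gtilde).
eps0, t, eta = 0.0, 1.0, 1e-2
w = 0.5                                   # a single energy inside the band

function effective_site(w; nsites = 5000)
    gt = 1 / (w - eps0 + im * eta)        # start from one isolated site, Eq. (66)
    for _ in 1:nsites
        gt = 1 / (w - eps0 + im * eta - abs2(t) * gt)
    end
    return gt
end

for n in (1, 10, 100, 5000)               # convergence as more sites are absorbed
    println(n, " sites:  ", effective_site(w; nsites = n))
end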

Let us then consider the effective two-site model, where one undressed surface site is coupled to an effective one. We have already developed the equations of motion of the two-site system, Eq. (68) and (69). For simplicity we will drop the frequency dependence and the retarded index in our notation. The equations of the effective two-site chain read

(75)   G_{11} = g_1 + g_1 t G_{21},
(76)   G_{21} = g̃ t* G_{11},

where g_1 and g̃ are the undressed and the dressed effective Green's functions, respectively.

In the limit of an infinite number of sites encapsulated in the effective site, the effective propagator g̃ describes the semi-infinite chain itself, i.e., g̃ = G_{11}. With this observation, we solve the system of Eqs. (75) and (76), finding a second-order equation for G_{11}:

(77)   |t|² g_1 G_{11}² − G_{11} + g_1 = 0.

The two retarded solutions of Eq. (77) are given by

(78)   G_{11} = [ 1 ± √(1 − 4|t|² g_1²) ] / (2|t|² g_1),

or, replacing the undressed function, Eq. (66),

(79)   G_{11} = [ (ω − ε0 + iη) ± √( (ω − ε0 + iη)² − 4|t|² ) ] / (2|t|²).

We can determine the physical solution by examining the analyticity properties of the Green's function [39]. In the asymptotic limit |ω| → ∞ we must have a vanishing solution, therefore we choose

(80)   G_{11} = [ (ω − ε0 + iη) − √( (ω − ε0 + iη)² − 4|t|² ) ] / (2|t|²).

One can verify that G_{11} decays as 1/ω in the asymptotic limit. Since the real and imaginary parts of the Green's functions are related by a Hilbert transform (Footnote 15), this decay assures a bounded density of states [37].

Footnote 15: The Hilbert transform is an improper integral, defined by the principal value

(81)   g(x) = (1/π) P ∫_{−∞}^{+∞} dx′ f(x′)/(x′ − x).

For an analytic function in the upper half plane, the Hilbert transform describes the relationship between the real and imaginary parts of its boundary values; these functions are conjugate pairs. Given a real-valued function f(x), the Hilbert transform provides the imaginary part, a companion function g(x), so that F(x) = f(x) + i g(x) can be analytically extended to the upper half of the complex plane.

Note that, by factoring out −1 from the square root of (79), we obtain the imaginary contribution, which is non-zero only in the region |ω − ε0| < 2|t|, i.e., within the bandwidth. This gives the density of states of the edge, or "surface", site:

(82)   ρ_1(ω) = −(1/π) Im[G_{11}(ω)] = √( 4|t|² − (ω − ε0)² ) / (2π|t|²),   |ω − ε0| < 2|t|,

which forms a semi-circle, as illustrated in Fig. 5. In this graph we plotted the imaginary part to scale with the real part.

Figure 5
Real and imaginary parts of the surface Green's function of a linear chain. The imaginary part relates to the density of states, which is a semicircle bounded by the bandwidth 2|t|. In this example, η = 0.0001.
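A compact sketch of Fig. 5 follows (ours; the article's own implementation is the recursive source code 2 of Appendix A). It evaluates Eq. (80) on an energy grid and checks the semicircle of Eq. (82); as a practical numerical criterion, not spelled out in the text, we select the retarded root as the one with non-positive imaginary part, which guarantees a non-negative density of states. Parameters ε0 = 0, t = 1 and η = 10⁻⁴ (as in Fig. 5) are illustrative:

# Surface Green's function of the semi-infinite chain, Eq. (80), and its DOS, Eq. (82).
eps0, t, eta = 0.0, 1.0, 1e-4

function surface_gf(w)
    A = w - eps0 + im * eta
    s = sqrt(A^2 - 4 * abs2(t))
    g1, g2 = (A - s) / (2 * abs2(t)), (A + s) / (2 * abs2(t))
    return imag(g1) <= 0 ? g1 : g2          # retarded root: non-positive imaginary part
end

ws  = range(-3, 3; length = 1201)
G   = surface_gf.(ws)
rho = -imag.(G) ./ pi                       # Eq. (44)

# Compare with the semicircle of Eq. (82) inside the band |w - eps0| < 2|t|
inside    = abs.(ws .- eps0) .< 2 * abs(t)
rho_exact = sqrt.(max.(4 * abs2(t) .- (ws .- eps0) .^ 2, 0)) ./ (2 * pi * abs2(t))
println("max deviation inside the band: ",
        maximum(abs.(rho[inside] .- rho_exact[inside])))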

2.4. Infinite linear chain

Another interesting model that allows an analytical solution is the infinite linear chain. The band structure and density of states can be easily obtained in the tight-binding framework by considering Bloch eigenfunctions [40]. Here we will show how to obtain the DOS from the equations of motion.

The infinite chain can be viewed as the coupling between two semi-infinite chains, as shown in Fig. 6(a). This would correspond to two effective sites in a two-site model, as in Fig. 6(b), with solution

(83)   G_{11} = g̃ / (1 − |t|² g̃²),

where G_{11} is the diagonal dressed Green's function of the infinite lattice, while the effective propagator g̃ is the previous semi-infinite answer, Eq. (79).

Figure 6
(a) Infinite linear chain pictured as the coupling of two semi-infinite lattices. (b) Effective sites that encapsulate the semi-infinite chains.

One might wonder if this solution is unique. Other couplings are possible, for example, in Fig. 7(a) we couple one undressed site with two semi-infinite lattices.

Figure 7
Infinite linear chain pictured as the coupling of two semi-infinite chains with a single site.

In this case the equations of motion go not only forward but also backward. The dressed Green's function of the central site now reads

(84)   G_{11} = g_0 / (1 − 2|t|² g_0 g̃),

where g̃ is given by Eq. (79). It can be shown that the expressions (83) and (84) are identical, as long as g̃ obeys Eq. (77) (with g_1 = g_0), which is indeed the case here. Replacing expression (66) for g_0 into Eq. (84) one obtains

(85)   G_{11}(ω) = 1/( ω − ε0 + iη − 2|t|² g̃(ω) ) = 1/√( (ω − ε0 + iη)² − 4|t|² ).

In Eq. (85) we can see that the resulting Green's function of the infinite chain has square-root singularities at ω − ε0 = ±2|t|. The infinitesimal η contributes to a softening around the singularity. For values |ω − ε0| < 2|t|, the Green's function is in essence purely imaginary, with roughly the profile of an inverse of the semicircle we have seen in Fig. 5, however with the presence of singularities at the band edges |ω − ε0| = 2|t|. These asymmetric spikes are a hallmark of low-dimensional systems (known as van Hove singularities), and indicate the presence of a flat dispersion curve with a large accumulation of states. These singularities have effects on the structural, electrical and optical properties of solids and nanostructured materials, such as carbon nanotubes. The density of states of the inner site, obtained with the imaginary part of the Green's functions Eq. (83) or Eq. (84), is plotted in Fig. 8.

Figure 8
Density of states, Eq. (44), of an infinite linear chain, obtained by merging two semi-infinite Green's functions. At the band edges we have a large accumulation of states, due to a flat band structure. These spikes are characteristic of low-dimensional periodic systems, and are known as van Hove singularities.
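The curve of Fig. 8 can be reproduced with a few lines in the spirit of the article's source code 1 (Appendix A); the condensed sketch below is ours, merging the two semi-infinite chains through Eqs. (84)-(85) with the same branch-selection trick used above and illustrative parameters ε0 = 0, t = 1, η = 10⁻³:

# Bulk (infinite-chain) Green's function: one site coupled to two semi-infinite
# chains, each entering through the surface Green's function gtilde, Eqs. (84)-(85).
eps0, t, eta = 0.0, 1.0, 1e-3

function surface_gf(w)                      # retarded root of Eq. (79)/(80)
    A = w - eps0 + im * eta
    s = sqrt(A^2 - 4 * abs2(t))
    g1, g2 = (A - s) / (2 * abs2(t)), (A + s) / (2 * abs2(t))
    return imag(g1) <= 0 ? g1 : g2
end

ws    = range(-3, 3; length = 1201)
gt    = surface_gf.(ws)
Gbulk = 1 ./ (ws .- eps0 .+ im * eta .- 2 * abs2(t) .* gt)
rho   = -imag.(Gbulk) ./ pi                 # DOS of Fig. 8

# van Hove singularities: rho grows towards the band edges w = eps0 ± 2|t|
println("rho at band center: ", rho[argmin(abs.(ws .- eps0))])
println("rho near band edge: ", rho[argmin(abs.(ws .- (eps0 + 2 * abs(t) - 0.05)))])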

In source code 1 (see Appendix A), we have illustrated how to obtain the graph of Fig. 8 using the Julia programming language. For an introductory course in Julia, please see Ref. [41].

2.5. Three-site chain: a recipe for recursion

Let us now apply the equation-of-motion technique to a linear chain composed of three sites, shown in Fig. 9. Although it may appear as just another application of Eq. (59), these equations will set our paradigm for the surface-bulk recursive Green's function method presented in Sec. 3.2. The 3-site model is only revisited briefly for the widely used surface Green's function, but it deserves special attention in the surface-bulk method that we will present.

Figure 9
Three-site chain with nearest-neighbor hoppings t and t*. Visualizing the sites may help in writing the equations of motion, making the procedure easier and more mechanical. The equations of motion of this system will play an important role for the recursive methods presented later on.

Let us assume that our three-site chain is described by the non-interacting Hamiltonian

(86)

From the local potential term of the Hamiltonian above, we see that the undressed Green's functions (66) can be written as gi = (ω − ε0 + iη)−1 for every site i, while the undressed non-diagonal propagators connecting different sites i and j vanish. We will omit the energy dependence (ω) and the index r for simplicity. We now write the EOM for the dressed Green's functions.

a. Green's function of site 1: G11 — Let us now calculate the Green's function of the first site of the three-site system, according to Eq. (59). Schematically we see in Fig. 9 that the site 1 couples to site 2 via a non-diagonal propagator G21 (where the subindex describes the propagator "from the site 2 to the site 1"),

(87)

One way of visualizing how it works is first to identify the first neighbor of the site in question (see Fig. 9), the direction of the hopping, and the corresponding propagator Gkj, keeping in mind that the last index j of the non-diagonal propagator has to be the same as the one of the Green's function Gij under consideration.

The non-diagonal propagators that point to the first site are

(88)
(89)

Inserting Eq. (89) into (88), we obtain

(90)
(91)

Using the result of Eq. (91) in (87), we can eliminate G21 to obtain the dressed Green's function for the first site as:

(92)

b. Green's function of site 2: G22 — Applying the practical scheme discussed above we can write an expression for the central Green's function as

(93)

Since there are only three sites, the expressions for propagators pointing to site 2 are

(94)
(95)

These expressions are inserted into Eq. (93) to obtain the local dressed Green's function of site 2,

(96)

c. Green's function of site 3: G33 — The equation of motion for the local dressed Green's function of site 3 gives us

(97)

To obtain a closed expression for G33 we can either work on the EOM for G23 or just make the replacements 1 → 3, 3 → 1 and t ↔ t* in Eq. (92). The resulting expression is

(98)

So far these examples not only provided us with the opportunity to exercise the method but also introduced the boxed expressions (87), (93) and (97), fundamental to the technique developed in Sec. 3.2 for infinite chains.
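Since the expressions above are easy to mistype, a quick numerical check is useful. The short sketch below (our own; it assumes the hopping convention of Fig. 9, on-site energy ε0 = 0 and t = 1, and spells out the closed forms that follow from the boxed EOMs) compares them with a brute-force inversion of the three-site Hamiltonian:

```julia
using LinearAlgebra

e0, t, eta = 0.0, 1.0, 1e-6     # assumed test parameters
w = 0.37                        # arbitrary test energy
z = w + im*eta - e0
g = 1/z                         # undressed single-site Green's function, Eq. (66)

# Three-site Hamiltonian: hops 1→2 and 2→3 with amplitude t, conjugates backwards
H = [e0       conj(t)  0.0;
     t        e0       conj(t);
     0.0      t        e0]
G = inv((w + im*eta)*I - H)     # dressed Green's functions by direct inversion

G11 = g*(1 - g^2*abs2(t))/(1 - 2*g^2*abs2(t))   # edge site, cf. Eq. (92)
G22 = g/(1 - 2*g^2*abs2(t))                     # central site, cf. Eq. (96)

@show abs(G[1,1] - G11)   # ≈ 0 (machine precision)
@show abs(G[2,2] - G22)   # ≈ 0; by symmetry G[3,3] equals G[1,1], cf. Eq. (98)
```

Both differences should vanish to machine precision, which confirms the closed forms under the stated convention.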

3. Recursive Green's function

3.1. Surface Green's functions decimation

In the early 1980s, the investigation of surface and bulk properties of metals, transition metals and semiconductors motivated the development of effective Hamiltonians and iterative techniques to obtain the density of states [42]. The recursive Green's functions (RGF) used computationally efficient decimation techniques from the numerical renormalization group, simulating materials via effective layers [43].

The success of recursive Green's functions was boosted by the simulation of transport in materials, in particular two-terminal ballistic transport. The retarded and advanced Green's functions of the central device in a junction contain the information needed to calculate transport properties such as the stationary current, the conductivity, or the transmission matrix. In essence, dividing the material into layers and modelling it as a chain is the spirit of the recursive Green's function method. We will illustrate this procedure using a linear chain of single-site orbitals and two forms of decimation: the most widely used, the surface technique, and an alternative version that stores information from the central sites.

Let us consider a three-site chain, as shown in Fig. 10(a). We will basically follow Refs. [43, 44], except that in our notation the first site is labelled 1 instead of 0; therefore, every index is shifted by one with respect to those references. Again, for the first site we have the equations of motion

(99)
(100)
Figure 10
Possible interpretation of the decimation steps in the surface recursive Green's function. (a) 3-site chain, where the even non-diagonal Green's function from site 2 is eliminated from the equations of motion, shown in a lighter color. (b) Insertion of 2^1 = 2 new sites, which will be included in a renormalization of the hoppings. (c) In the next iteration, 2^2 = 4 interstitial sites are inserted and the even non-diagonal propagators to the surface site, related to sites 2 and 4 (in lighter color), will be eliminated. The idea is to keep the three-site chain by renormalizing the hoppings and the local energy of the first site.

By replacing (100) in (99), we eliminate the non-diagonal propagator G21:

(101)

As a general rule, the non-diagonal propagator Gn1 relates first neighbors:

(102)

Writing expressions analogous to (102) for Gn−1,1 and Gn+1,1, and replacing them back into Eq. (102), we obtain a recursion that eliminates the non-diagonal first-neighbor propagators, leaving only non-diagonal second-nearest-neighbor functions:

(103)

Rewriting Eq. (103) in terms of new variables

(104)
(105)
(106)
(107)

where all undressed functions gi = g are given by (66), we arrive at a shorter recursion relation

(108)

Starting from G11, Eq. (108) generates a recursion relation involving only non-diagonal second-nearest neighbors functions of odd sites. The first iteration is Eq. (101), involving sites 1 and 3. Next, the non-diagonal G31 relates sites 1 and 5, and so on, as follows:

(109)
(110)
(111)
(112)

These equations (except for the first one) are analogous to the first-neighbor recursion, Eq. (102), since they have the same structure. However, the variables α1, β1, etc., implicitly contain the nearest neighbors of the original chain, mapping it onto a chain with twice the lattice constant, since we now connect second-nearest neighbors [42].

Starting from Eq. (112), we can now repeat the arguments described above, from Eq. (104) to (112), x times. At each repetition we obtain a larger effective system with not twice, but 2^x times the lattice constant. This process is known as decimation, where one encapsulates the numerous sites into a three-point recursion relation using renormalized parameters. This procedure ultimately provides information about the infinite lattice. After x iterations, Eqs. (109) to (112) read

for n ≥ 1. The renormalized hoppings are smaller than the original t, since they are multiplied by the undressed g, as in Eq. (104) and (105). Those read

(113)
(114)
(115)
(116)

where g = (ω − εx−1 + iη)−1. After x iterations, site 1 is coupled to a chain of 2^x sites where the effective hopping parameter is much smaller. The decimation will stop when ||αx|| and ||βx|| are sufficiently small. At this point εx ≈ εx−1, and

(117)

Thus we have an approximation to the local Green's function of the surface site 1, at the edge of the chain:

(118)

To have a picture of the decimation procedure, we illustrate the iteration steps in Fig. 10. Note that it is the reverse of the mechanism that encapsulates the infinite lattice into a finite chain, shown in Figs. 6 and 7. We start with the three-site chain, shown in Fig. 10(a), and eliminate G21, represented in the figure by site 2 in a lighter color. In the first iteration, we add two interstitial sites, growing the lattice to 5 sites, shown in Fig. 10(b). Next, we eliminate the even non-diagonal functions, storing the information of the new sites in the parameters α, β, ε. With these renormalized parameters one can simulate a chain that grows exponentially fast while keeping the three-point structure of Eq. (102).
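As a compact illustration of this procedure, the sketch below implements the scalar surface decimation in Julia. It is a minimal version of our own (variable names and parameter values are assumptions, not taken from the source codes of the Appendix), but the converged result should reproduce the surface Green's function of Eq. (118):

```julia
# Scalar surface decimation for a semi-infinite chain (minimal sketch).
function surface_gf(w; e0=0.0, t=1.0, eta=1e-4, steps=30)
    z  = w + im*eta
    es = complex(e0)              # renormalized on-site energy of the surface site
    eb = complex(e0)              # renormalized on-site energy of the bulk sites
    a, b = complex(t), complex(conj(t))   # renormalized forward/backward hoppings
    for _ in 1:steps              # each step doubles the effective chain length
        g   = 1/(z - eb)          # undressed function of a decimated bulk site
        es += a*g*b               # surface site dressed by its eliminated neighbor
        eb += a*g*b + b*g*a       # bulk sites dressed from both sides
        a, b = a*g*a, b*g*b       # effective hoppings between the surviving sites
    end
    return 1/(z - es)             # ≈ surface Green's function, cf. Eq. (118)
end

ws  = range(-3, 3; length=601)                 # linspace(-3, 3, 601) in older Julia
dos = [-imag(surface_gf(w))/pi for w in ws]    # surface local density of states
```

The loop stops after a fixed number of steps here; in practice one may instead iterate until the renormalized hoppings fall below a tolerance, as discussed above.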

The surface RGF is widely used in transport simulations, with several applications [44-46] and further sophistications [47]. In the next section we will present an alternative version, capable of accessing the Green's functions of the edge and the bulk at once, which may prove useful for topological insulators.16

3.2. Surface-bulk Recursive Green's function decimation

Another form of RGF, which we present here for the first time, is based on the 3-site local GFs already introduced in Sec. 2.5. The decimation is similar to the surface procedure: we insert interstitial sites at each iteration. The difference lies in which functions we eliminate in the hierarchy of equations of motion, and in the recursive model.

Although the equation-of-motion (EOM) procedure is quite mechanical, we will exemplify how the decimation develops in the first iteration of the surface-bulk RGF. By now the reader could probably jump directly to the effective equations, but we elaborate them here for the sake of clarity.

Let us add two sites a and b to the 3-site chain, shown in Fig. 11:

Figure 11
Illustration of the first decimation step, where we inserted interstitial sites a and b in the three-site chain.

For 5 sites the equations are more numerous and the solution for the surface will be more intricate. We will examine three sites: the two edges and the central site.

For the first site of Fig. 11 we know that

(119)
(120)

By replacing (120) in (119), we eliminate the non-diagonal function Ga1

(121)

Eq. (121) can be rewritten in the form of Eq. (87)

(122)

using the renormalized quantities

(123)

Note that the edge propagator G11 corresponds to Eq. (92),

(124)

with the undressed effective functions, which we will derive below for completeness.

The Green's function of the central site of Fig. 11 has the EOMs

(125)
(126)
(127)

Eliminating the Green's functions (126) and (127), we obtain Eq. (93),

(128)

where we used the renormalized Green's function

(129)

In Eq. (129) we have considered the undressed interstitial propagators, with ga = gb.

Finally, the Green's function for the last site of Fig. 11 obeys the following equations,

(130)
(131)

Comparing these expressions with (97), we consider g3 in the renormalization

(132)

In this five-site example we have made explicit the first step of the decimation recursion based on the three-site system. This procedure is different from the surface Green's function approach, since we keep the three local propagators, eliminating the non-diagonal ones. Figure 12 illustrates the renormalization of the interactions and the mapping of the five-site chain onto the effective three-site one.

Figure 12
(a) and (b) Illustration of the iterative process of adding interstitial sites, representing the growth from a three-site to a five-site chain. Panels (c) and (d) illustrate the recursive procedure of encapsulating the new sites to obtain the effective three-site system.

In Fig. 13, we plot the imaginary part of the retarded Green's function, associated with the density of states of the surface site 1, ρ11(ω). As the decimation procedure is carried out, the number of peaks grows with the number of sites. The corresponding source code is presented in Appendix A.

Figure 13
Density of states of the surface site at each step x of the decimation, showing the growth of the chain (as 2^x + 1) in the number of peaks. We have shifted the curves vertically and set a large η = 0.02 (i.e., a large broadening of the peaks) for better visualization. The algorithm is shown in Appendix A, source code 2, which simulates the semi-infinite chain.

3.2.1. Semi-infinite lattice

The surface-bulk RGF decimation technique detailed in Sec. 3.2 is an alternative to the widespread surface method, one that automatically delivers information about the central site as well. Both methods scale exponentially with the number of iterations and are easily extended to two dimensions via a matrix representation. Here we choose to elaborate on how the proposed surface-bulk decimation works in practice.

We implemented the surface-bulk RGF algorithm in Julia. Source code 2 (see Appendix A) uses the recursive method to evaluate the surface density of states of a semi-infinite linear chain. The results of a few steps are plotted in Fig. 13 and Fig. 14.

Figure 14
Density of states of the semi-infinite linear chain evaluated with the RGF decimation procedure of Sec. 3.2, compared with the analytical result of Eq. (82). In Fig. 13 we showed the first steps; here we plot the 11th to 14th iterations, which exhibit several peaks. For η = 10−4 the numerical RGF recovers the analytical expression at around 16 steps, ≈ 66000 sites.

3.2.2. The ladder

In order to approach two-dimensional materials, a generalization of the RGF decimation technique is usually performed by slicing a region (central device or lead) into layers, from which the surface algorithm follows [42]. In two dimensions it is convenient to adopt a matrix representation of our Green's functions and hoppings.

We will approach this generalization in the simplest 2D example of a ladder, where we couple two 3-site chains vertically, as shown in Fig. 15. We will take as a convention a hopping t to the right and upwards, and t* to the left or downwards. Each site will be indexed by its column (layer) i and row j. We need to obtain the propagators Gij,ij.

Figure 15
Generalization of the 3-site chain to a 2D design, which we refer to as a "ladder". The new site indexes ij correspond to the column i and row j.

Let us now consider displacements both in the horizontal and in the vertical direction. For example, an electron at site 11 can visit its two first neighbors, 21 and 12 (see Fig. 15). The equation of motion for G11,11 will then exhibit a local contribution from site 11 and two non-diagonal propagators, G21,11 and G12,11. The EOMs of this first column, i = 1, are

(133)
(134)
(135)
(136)

Arranging these equations in matrix form, we obtain

(137)

Notice that Eq. (137) corresponds only to the first slice (column 1). Casting the left-hand side (l.h.s.) as G1 and the undressed function as g1, we can identify two hopping matrices, one between same-column sites, V, and one between columns, W:

(138)

By isolating G1 we can write

(139)

where we have defined

(140)

which represents the Green's function of a single slice.

From Eq. (139) we see that the same 3-site structure of Eq. (87) is now recovered in matrix form. This is very convenient, since it will allow us to implement the decimation in two dimensions.

For the second slice (column i = 2), we have

which is represented as

(141)

Therefore we can also rewrite Eq. (141) in the same form as Eq. (93), from the three-site formulas:

(142)

From the two identifications above we can perform a mapping onto three effective sites, corresponding to these slices, as shown in Fig. 16. The decimation method then applies, allowing the simulation of, e.g., a stripe.
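As a small illustration, the slice objects of the ladder can be set up in a few lines of Julia (the parameter values are assumptions of ours; gV is meant to realize Eq. (140)):

```julia
using LinearAlgebra

e0, tv, tw, eta = 0.0, 1.0, 1.0, 1e-3    # assumed on-site energy and hoppings
w = 0.5                                  # a single test energy
z = w + im*eta

V = [0.0  tv;                            # hoppings inside one column (tv taken real)
     tv   0.0]
W = tw*Matrix(I, 2, 2)                   # hoppings between neighboring columns

gV = inv((z - e0)*I - V)                 # Green's function of an isolated slice, Eq. (140)
```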

Figure 16
Mapping of the slices onto 3 new effective sites.

The program in Julia to generate the results of the ladder is shown in Appendix A (source code 3).

To go beyond the ladder, we can generalize V and W to bigger slices. These matrices will be larger but retain a simple form; let us develop them.

First note that, in a given slice, the electron can hop up or down a row. By our definitions (see Fig. 15), the downward hopping is t*, i.e., the hopping between (i, j) and (i, j + 1), such as between 11 and 12. Ordering the basis according to the row j, for the first column i = 1 we have {11, 12, 13, ⋯} (the first index is i = 1 and the second is j = 1, 2, 3, ⋯). The possible hoppings V in the first slice lead to a tridiagonal matrix with null diagonal, reflecting the fact that the hopping V takes the electron of the slice to a different row, either the upper (i, j + 1) or the lower (i, j − 1) one:

(143)

For the W matrix, the hopping takes place between sites of different columns. Presently we deal with three effective sites, but as the decimation proceeds, the lattice will grow horizontally, forming a stripe. In this process, notice that, independently of the column i, all rows j of neighboring slices are automatically connected, since the slices touch each other. For a given column, i = 1 for instance, with basis ordering {11, 12, 13, ⋯}, where the second index is the row j = 1, 2, 3, ⋯, each row connects to the same row of the neighboring column, so that we have a diagonal matrix:

(144)

Therefore one can generalize the algorithm of the ladder to a stripe geometry, using the matrices (143) and (144).17 In Fig. 17 we plot the density of states of the surface Green's functions G1 and G3, and of the bulk Green's function G2 at the middle of the ladder. In Fig. 18 we illustrate the bulk density of states for different widths L = 2 (ladder), L = 6, and L = 128. As we increase the width of the stripe, the behavior tends to the limit of an infinite square lattice, given by an analytic expression in terms of an elliptic integral of the first kind [48]. It exhibits a cusp at ω = 0, a logarithmic singularity characteristic of two-dimensional lattices, associated with critical saddle points in the two-dimensional band structure [49].
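For reference, a modern-Julia equivalent of the slice-matrix construction described in footnote 17 (which uses the older commands eye and diagm(v, k)) might read as follows; Ly, tv and tw follow the names used there:

```julia
using LinearAlgebra

# Slice matrices for a stripe of width Ly; Ly = 2 recovers the ladder.
function slice_matrices(Ly; tv=1.0, tw=1.0)
    V = diagm(-1 => fill(tv, Ly-1), 1 => fill(tv, Ly-1))  # tridiagonal, null diagonal, Eq. (143)
    W = tw*Matrix(I, Ly, Ly)                              # diagonal inter-column hops, Eq. (144)
    return V, W
end

V, W = slice_matrices(2)    # the ladder case
```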

Figure 17
Density of states of the "ladder", an infinite stripe of width L = 2.
Figure 18
Local density of states of the bulk Green's function G2 evaluated inside a stripe of width L. We plot the matrix element (L/2, L/2), using η = 10−3. The analytical result for the infinite square lattice [48] is shown as a reference for the asymptotic limit.

This last example illustrates the power of this technique in simulating finite lattices, which can go beyond the present regular chains to real nanoscopic or mesoscopic systems, such as electrodes, cavities, quantum dots and molecular junctions.

4. Conclusions

To conclude, we have presented a pedagogical introduction to the Green's functions in the many-body formalism. Starting with a general view of Green's functions, from their classical mathematical origin, and going through the many-body definitions, we finally reached a practical application with the recursive Green's function technique. For a young researcher, it is not easy to grasp the whole power and, at the same time, the fine details of the numerical methods available. Therefore we prepared this introduction based on simple condensed-matter models, with additional implementations in Julia, an open-source high-level language for scientific computing.

The surface-bulk recursive Green's function is, to the best of our knowledge, a new proposal in the field, which brings an advantage to the investigation of topological materials, where one is interested in both the edge and the bulk properties. Like the surface approach, our surface-bulk recursive Green's function can be generalized to other systems and geometries [44, 47]. We believe this material will also be useful for researchers unfamiliar with the Green's function method who are interested in the new challenges of nanoscience and their implementations.

Acknowledgments

This work was partially supported by the Brazilian agencies CAPES, CNPq and FAPEMIG. We would like to acknowledge Ginetom S. Diniz, Gerson J. Ferreira, and Marcel Novaes for suggestions and careful reading.

A. Source codes

Julia is a high-level, high-performance, easy-to-learn scientific language [50[50] J. Bezanson, A. Edelman, S. Karpinski and V.B. Shah,arXiv preprint arXiv:1411.1607 (2014).]. It is also an open-source project, licensed by MIT. For an introductory course, please see for instance Reference [41[41] G.J. Ferreira, Introduction to Computational Physics with examples in Julia (unpublished, 2016).].

In source code 1, we define a linearly spaced vector of energies using the command linspace and evaluate the undressed Green's function from this vector. This shortened notation avoids the traditional (and here inefficient) for loop over energies, since the whole vector can be stored in memory at once. If the amount of data to be stored fits within the available memory, vectorizing loops is a generally recommended programming practice, since matrix and vector operations are performed efficiently in Julia. When we start evaluating more complex Green's functions, stored as large matrices, we return to the conventional loop over energies.

Source code 1
Infinite chain
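The original listing is not reproduced in this copy. A minimal sketch with the same purpose, evaluating the local Green's function of the infinite chain over a vector of energies, could read as follows (range replaces the older linspace; the closed form G(ω) = [(ω − ε0 + iη)² − 4t²]−1/2 is the standard result for the infinite chain, with the square-root branch chosen so that Im G ≤ 0):

```julia
# Density of states of the infinite chain (Fig. 8), vectorized over the energies.
e0, t, eta = 0.0, 1.0, 1e-3             # assumed parameters
ws = range(-3, 3; length=1201)          # vector of energies, stored at once

function ginf(w)
    z = w + im*eta - e0
    r = sqrt(z^2 - 4*t^2)
    r = imag(r) < 0 ? -r : r            # retarded branch: Im G ≤ 0
    return 1/r                          # local Green's function of the infinite chain
end

dos = [-imag(ginf(w))/pi for w in ws]   # density of states, Eq. (44)
# A plotting package (e.g. Plots) can then be used: plot(ws, dos)
```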

Code 2 uses the recursive method to evaluate the surface density of states of a semi-infinite linear chain. We again use the vectorized loop over the energies w via the linspace command. The explicit for loop runs the recursive decimation procedure for 16 steps. Equations (123), (129) and (132) are implemented inside the loop. Next we renormalize the hoppings and the undressed Green's functions, carrying out the decimation. In the last lines we plot the local density of states of site 1; the local Green's function is given by Eq. (124), or by Eq. (92) with effective functions. The results of a few steps are plotted in Fig. 13 and Fig. 14.

Source code 2
Semi-infinite chain via surface-bulk recursive Green's function
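The original listing is likewise not reproduced here. The sketch below is our reconstruction of the scheme described in the text, with three effective sites (functions g1, g2, g3) and effective hoppings renormalized at each step; the precise form of Eqs. (123), (129) and (132) may differ from these update rules, but the converged surface density of states should match Fig. 14:

```julia
# Surface-bulk decimation for the semi-infinite chain (reconstruction sketch).
e0, t, eta, steps = 0.0, 1.0, 1e-4, 16
ws  = range(-3, 3; length=2001)
dos = zeros(length(ws))

for (n, w) in enumerate(ws)
    z  = w + im*eta
    g  = 1/(z - e0)                       # undressed single-site function, Eq. (66)
    g1 = g; g2 = g; g3 = g                # effective functions of the three kept sites
    a, b = complex(t), complex(conj(t))   # effective forward/backward hoppings
    for _ in 1:steps                      # each step doubles the simulated chain
        s  = b*g2*a                       # self-energy from one decimated interstitial site
        g1 = 1/(1/g1 - s)                 # surface site, cf. Eq. (123)
        g3 = 1/(1/g3 - s)                 # opposite edge, cf. Eq. (132)
        a, b, g2 = a*g2*a, b*g2*b, 1/(1/g2 - 2*s)   # hoppings and bulk site, cf. Eq. (129)
    end
    # Edge Green's function of the effective three-site chain: Eq. (92) with
    # effective functions, written as a continued fraction.
    G11 = 1/(1/g1 - a*b/(1/g2 - a*b*g3))
    dos[n] = -imag(G11)/pi                # surface density of states (Figs. 13 and 14)
end
```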

In source code 3, we have implemented the decimation using the matrix forms in Julia. We define vertical and horizontal hopping parameters, tv and tw, along with the hopping matrices V and W. We now perform explicit energy and decimation loops, iterating over 1000 energy points and 18 decimation steps. Before decimating, we construct a pair of sites, described by the dressed function gV, Eq. (140), coupling two undressed sites. As shown in Fig. 16, we have three effective sites, each one a vertical pair, and we perform the decimation horizontally, as in the 3-site chain. The decimation loop is the same as in source code 2, except for the fact that we now have a hopping matrix W. After the loop, we evaluate the three local functions (as in Eqs. (92), (96) and (98), but now with effective functions).

Source code 3
Ladder via surface-bulk recursive Green's function
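Finally, a matrix-form reconstruction of the ladder decimation along the lines described above; the same caveats apply (our own update rules and assumed parameter values), with tv, tw, V, W and gV named as in the text:

```julia
# Surface-bulk decimation for the ladder, in matrix form (reconstruction sketch).
using LinearAlgebra

e0, tv, tw, eta, steps = 0.0, 1.0, 1.0, 1e-3, 18
Ly = 2                                                  # two rows: the "ladder"
V  = diagm(-1 => fill(tv, Ly-1), 1 => fill(tv, Ly-1))   # intra-slice hoppings, Eq. (143)
W  = tw*Matrix(I, Ly, Ly)                               # inter-slice hoppings, Eq. (144)

ws   = range(-3, 3; length=1000)
dos1 = zeros(length(ws))                 # surface slice, G1
dos2 = zeros(length(ws))                 # bulk (central) slice, G2

for (n, w) in enumerate(ws)
    z  = w + im*eta
    gV = inv((z - e0)*I - V)             # single-slice Green's function, Eq. (140)
    g1 = gV; g2 = gV; g3 = gV            # effective functions of the three kept slices
    A  = complex.(W)                     # effective rightward inter-slice hopping
    B  = complex.(Matrix(W'))            # effective leftward hopping (here W' == W)
    for _ in 1:steps
        g1 = inv(inv(g1) - B*g2*A)       # surface slice dressed from the right
        g3 = inv(inv(g3) - A*g2*B)       # opposite surface dressed from the left
        A, B, g2 = A*g2*A, B*g2*B, inv(inv(g2) - A*g2*B - B*g2*A)
    end
    G1 = inv(inv(g1) - B*inv(inv(g2) - B*g3*A)*A)   # as Eq. (92), with effective functions
    G2 = inv(inv(g2) - A*g1*B - B*g3*A)             # as Eq. (96), with effective functions
    dos1[n] = -imag(G1[1,1])/pi
    dos2[n] = -imag(G2[1,1])/pi          # compare with Fig. 17
end
```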

References

  • [1]
    D.M. Cannell, George Green, Mathematician and Physicist 1793–1841: The Background to His Life and Work (Society for Industrial and Applied Mathematics, Philadelphia, 2001).
  • [2]
    D.M. Cannell and S.G. Krantz, The Mathematical Intelligencer 26, 68 (2004).
  • [3]
    D.M. Cannell and N.T. Lord, The Mathematical Gazette 77, 26 (1993).
  • [4]
    G. Green, arXiv preprint arXiv:0807.0088 (2008).
  • [5]
    I. Grattan-Guinness, The American Mathematical Monthly 102, 387 (1995).
  • [6]
    H.N. Jahnke, A History of Analysis (History of Mathematics vol. 24) (American Mathematical Society, Providence, 2003).
  • [7]
    G.L. Trigg, Mathematical Tools for Physicists (John Wiley & Sons, Hoboken, 2006).
  • [8]
    J. Lindström, On the Origin and Early History of Functional Analysis (Uppsala Universitet, Uppsala, 2008).
  • [9]
    C.L.R. Braga, Notas de Física Matemática (Ed. Livraria da Fisica, São Paulo, 2006).
  • [10]
    E. Butkov, Física Matemática (Ed. LTC, São Paulo, 1988).
  • [11]
    D.J. Griffiths, Eletrodinâmica (Pearson, New York, 2011).
  • [12]
    P.A.M. Dirac, The Principles of Quantum Mechanics (Oxford University Press, Oxford, 1981).
  • [13]
    K.D Cole, J.V. Beck, A. Haji-Sheikh and B. Litkouhi, Heat Conduction Using Green's Functions (Taylor & Francis, Milton Park, 2010).
  • [14]
    W.O. Amrein, A.M. Hinz and D.P. Pearson, Sturm-Liouville Theory: Past and Present (Springer, New York, 2005).
  • [15]
    E. Schrödinger, Collected Papers on Wave Mechanics, vol. 302 (American Mathematical Society, Providence, 2003).
  • [16]
    M. Taketani and M. Nagasaki, The Formation and Logic of Quantum Mechanics (World Scientific, Singapore, 2001).
  • [17]
    M. Born, Quantenmechanik der stoßvorgänge, Zeitschrift für Physik 38, 803 (1926).
  • [18]
    J.A. Wheeler and W.H. Zurek, Quantum Theory and Measurement (Princeton University Press, Princeton, 2014).
  • [19]
    B.A. Lippmann and J. Schwinger, Physical Review 79, 469 (1950).
  • [20]
    J.J. Sakurai and J. Napolitano, Modern Quantum Mechanics (Addison-Wesley, Boston, 2011).
  • [21]
    H. Bruus and K. Flensberg, Many-Body Quantum Theory in Condensed Matter Physics: An Introduction (Oxford University Press, Oxford, 2004).
  • [22]
    F.J. Dyson, Physical Review 75, 1736 (1949).
  • [23]
    P.A.M. Dirac, Physikalische Zeitschrift der Sowjetunion 3, 64 (1933).
  • [24]
    R.P. Feynman, Review of Modern Physics 20, 367 (1948).
  • [25]
    R.P. Feynman, Physical Review 76, 749 (1949).
  • [26]
    T. Lancaster and S.J. Blundell, Quantum Field Theory for the Gifted Amateur (Oxford University Press, Oxford, 2014).
  • [27]
    J. Schwinger, arXiv preprint hep-ph/9310283 (1993).
  • [28]
    P.C. Martin and J. Schwinger, Physical Review 115, 1342 (1959).
  • [29]
    G. Baym and L.P. Kadanoff, Physical Review 124, 287 (1961).
  • [30]
    L.P. Kadanoff and G.A. Baym, Quantum Statistical Mechanics (Benjamin, New York, 1962).
  • [31]
    A. Altland and B.D. Simons, Condensed Matter Field Theory (Cambridge University Press, Cambridge, 2010).
  • [32]
    I.C.C. Lima, Manual Prático de Funções de Green Em Física da Matéria Condensada (EdUERJ, Rio de Janeiro, 2010).
  • [33]
    G.D. Mahan, Many-Particle Physics (Kluwer Academic, Dordrecht, 2000).
  • [34]
    R.D. Mattuck, A Guide to Feynman Diagrams in the Many-Body Problem (Dover, Mineola, 1992).
  • [35]
    D.N. Zubarev, Soviet Physics Uspekhi 3, 320 (1960).
  • [36]
    H. Haug and A.-P. Jauho, Quantum Kinetics and Optics of Semiconductors (Springer Verlag, Berlin, 2008).
  • [37]
    J.C. Cuevas and E. Scheer, Molecular Electronics: An Introduction to Theory and Experiment (World Scientific, Singapore, 2010).
  • [38]
    G. Stefanucci and R. Van Leeuwen, Nonequilibrium Many-Body Theory of Quantum Systems (Cambridge University Press, Cambridge, 2013).
  • [39]
    R. Haydock, Solid State Physics, vol. 35 (Academic Press, New York, 1980).
  • [40]
    Michele Cini, Topics and Methods in Condensed Matter Theory (Springer, New York, 2007).
  • [41]
    G.J. Ferreira, Introduction to Computational Physics with examples in Julia (unpublished, 2016).
  • [42]
    F. Guinea, C. Tejedor, F. Flores and E. Louis, Physical Review B 28, 4397 (1983).
  • [43]
    M.P. Lopez Sancho, J.M. Lopez Sancho, J.M.L. Sancho and J. Rubio, Journal of Physics F: Metal Physics 15, 851 (1985).
  • [44]
    C.H. Lewenkopf and E.R. Mucciolo, Journal of Computational Electronics 12, 203 (2013).
  • [45]
    M.B. Nardelli, Physical Review B 60, 7828 (1999).
  • [46]
    F. Pauly, J.K. Viljas, U. Huniar, M. Häfner, S. Wohlthat, M. Bürkle, J.C. Cuevas and G. Schön, New Journal of Physics 10, 125019 (2008).
  • [47]
    G. Thorgilsson, G. Viktorsson and S.I. Erlingsson, Journal of Computational Physics 261, 256 (2014).
  • [48]
    E.N. Economou, Green's Functions in Quantum Physics (Springer, New York, 2006).
  • [49]
    J. Callaway, Quantum Theory of the Solid State (Academic Press, Cambridge, 1974).
  • [50]
    J. Bezanson, A. Edelman, S. Karpinski and V.B. Shah,arXiv preprint arXiv:1411.1607 (2014).
  • 1
    Today, Green's identities are a set of three vector equations relating the bulk with the boundary of a region on which differential operators act, closely related to Gauss' divergence and Stokes' curl theorems. Green's second identity allows the conversion of a triple integral of laplacians within a volume into a double integral of gradients over its surface boundary:
    (1)
where Ω is a volume bounded by the closed surface S, and the normal vector is the outward normal to the boundary S, with unit length. This formula holds for regular functions u and v defined in Ω.
  • 2
For a time-independent Hamiltonian and in the Schrödinger picture, the solution to Eq. (14) is given by the time-evolution operator that evolves the wave function Ψ(r, t′) to Ψ(r, t) in infinitesimal time intervals. It has important properties such as unitarity, U†U = 1, which preserves the norm of the wavefunction.
  • 3
    In the position representation (and Dirac notation), the bra ⟨r| is associated to a spatial function base.
  • 4
To ensure that particles propagate from times t′ < t, we must correct all equations above with a Heaviside function θ(t − t′), and change the analyticity domain by adding an infinitesimal shift in the denominator of Eq. (23). Later we will return to this point.
  • 5
    Theoreticians often make this distinction between interacting and noninteracting systems. This means that the potential V in the Hamiltonian will be present (e.g. due to particle scattering), or not, so that we return to the simple free-particle system (V = 0). In practice, the solvable system will be a building block for the more complex ones.
  • 6
    The Heaviside step function θ(x) is defined by
    (27)
    It has a jump discontinuity at x = 0, for which the value usually taken is 1/2. The derivative of θ(x) is the Dirac delta δ(x).
  • 7
If there is time-translational symmetry, it is possible to describe the system via time differences t − t′ and perform a Fourier transform to represent the Green's function in the energy domain. Similarly, in the presence of spatial translational symmetry, the representation in momentum space is also convenient.
  • 8
    Here we define the Fourier transform of the retarded Green's function as
    (39)
    (40)
  • 9
One should have in mind that the imaginary part of a matrix A is Im(A) = (A − A†)/(2i). We thank K. Pototzky for this remark.
  • 10
    Limits of improper integrals can be obtained by the principal value of the Cauchy relation
    (45)
due to the improper nature of the integrals of Gr/a, e.g. Eq. (38), with poles in different half-planes. The imaginary part of the diagonal retarded/advanced Green's function recovers the local density of states of a discrete spectrum.
  • 11
    (53)
  • 12
One may find it useful to apply [AB, C] = A{B, C} − {A, C}B and [A, B] = −[B, A].
  • 13
    In the time domain the EOM has the form
    (58)
  • 14
We should notice that we fully neglect spin-orbit contributions in the Hamiltonian. Thus, in this problem, the spatial degrees of freedom are decoupled from the spin, since neither the kinetic energy nor the local potential couples to the spin of the particles.
  • 15
    The Hilbert transform is an improper integral, defined by the principal value
    (81)
For an analytic function in the upper half-plane, the Hilbert transform describes the relationship between the real and imaginary parts of its boundary values. This means that these functions are conjugate pairs. Given a real-valued function f(x), the Hilbert transform finds the imaginary part, a companion function g(x), such that F = f(x) + ig(x) can be analytically extended to the upper half of the complex plane.
  • 16
In fact, within the surface approach, it is possible to determine the bulk Green's function. One can consider an additional site and couple it from the left and from the right with semi-infinite chains, as we have shown in Fig. 6 in Sec. 2.4. To do this, one should first determine the surface GF from both sides, which are usually identical. However, they can differ, for instance, in topological systems, where each side has its own chirality, or for asymmetric leads in transport devices.
  • 17
To generalize the source code 3 (Appendix A) to a stripe, one should define a variable for the stripe size Ly, which in the case of the ladder is Ly=2. The matrices V and W should be defined according to this size, V = diagm(tv*ones(Ly-1),-1)+diagm(zeros(Ly))+diagm(tv*ones(Ly-1),1) and W = tw*eye(Ly), where the command eye in Julia defines an identity matrix and diagm a diagonal matrix.

Publication Dates

  • Publication in this collection
    2017

History

  • Received
    09 Apr 2016
  • Reviewed
    29 July 2016
  • Accepted
    01 Aug 2016