Dilute Magnetism in Graphene
Frederico João Ferreira de Sousa
Thesis to obtain the Master of Science Degree in
Engineering Physics
Supervisor(s): Prof. Eduardo Filipe Vieira de Castro
Examination Committee
Chairperson: Profa. Susana Isabel Pinheiro Cardoso de Freitas
Supervisor: Prof. Eduardo Filipe Vieira de Castro
Member of the Committee: Prof. Pedro Domingos Santos do Sacramento
November 2016
Acknowledgments
The author would like to thank Professor Eduardo Castro for his invaluable help and guidance.
Resumo
In this study we analyse the magnetic properties of graphene when doped with adatoms. Using Variational Mean Field Theory we determine the critical temperature, as a function of the adatom concentration, for two distinct magnetic phases: ferromagnetism with adatoms distributed over a single sublattice, and antiferromagnetism. For an impurity concentration of 10% and a coupling constant JS = 3 eV we obtained critical temperatures of 145 K in the first case and 527 K in the latter. Introducing a small amount of anisotropy in the system drastically changes the critical temperature, even generating a critical concentration in the single-sublattice ferromagnetic case. The density of states of the system in these regimes was studied using a recursive method for computing the Green function of the system. We analyse the opening of energy gaps and see how they evolve with disorder, arising either from the positions of the adatoms or from temperature. Finally, we apply these methods in an attempt to explain the experimental results obtained by Hwang et al. in their work on sulfur-doped graphene [1].
Keywords: graphene, magnetism, adatoms, disorder
Abstract
In this work we analyse the magnetic properties of graphene when doped with adatoms. Using Varia-
tional Mean Field Theory we determine the critical temperature, as a function of the adatom concentra-
tion, for two different magnetic phases: ferromagnetism with adatoms distributed over only one sublattice
and antiferromagnetism. For a concentration of 10% and JS = 3 eV we obtained critical temperatures
of 145 K in the former case and 527 K in the latter. We found that a small amount of anisotropy greatly
affects the critical temperature, even generating a critical concentration in the one lattice ferromagnetic
case. The study of the density of states of the system for these phases was made using a recursive
method to compute the Green Function of the system. We analyse the opening of gaps in the system
and see how they evolve with disorder, coming either from adatom positioning or from temperature. We
apply these methods to try to explain the experimental results obtained by Hwang et al. in their work on
sulfur decorated graphene [1].
Keywords: graphene, magnetism, adatoms, disorder
Contents
Acknowledgments
Resumo
Abstract
List of Figures
Nomenclature
Glossary
1 Introduction
1.1 On the flatland: the rise of 2D materials
1.2 Magnetism in Graphene
1.3 Motivation and Objectives
1.4 State of the Art
1.5 Thesis Outline
2 Theoretical Model
2.1 Simple Graphene
2.2 The s-d Interaction
2.3 The Variational Mean Field Theory
2.3.1 Effective Hamiltonian
2.3.2 Mean Field Description
2.4 Mean Field One-Lattice Ferromagnetism
2.5 Mean Field Antiferromagnetism
3 Implementation
3.1 Exact Diagonalization
3.2 The Recursive Method for the Green Function
3.2.1 Examples
4 Results
4.1 One-Lattice Ferromagnetism
4.1.1 Ferromagnetic Phase
4.1.2 One-Lattice Paramagnetic Phase
4.2 Antiferromagnetism
4.2.1 Full Coverage Antiferromagnetism
4.2.2 Partial Coverage Antiferromagnetism
4.2.3 Paramagnetic Phase
4.3 Sulfur Decorated Graphene
5 Conclusions
5.1 Achievements
5.2 Future Work
Bibliography
A Appendix
A.1 Canonical Transformation
A.2 The s-d Interaction with free electrons
A.3 The RKKY Interaction
A.3.1 3D Metals
A.3.2 Graphene
A.4 Bogoliubov Inequality
x
List of Figures
1.1 Band structure of a graphene monolayer
2.1 Honeycomb lattice unitary cell and structure
2.2 Density of states per site for single layer clean graphene
2.3 Free energy curves using the Variational Mean Field Method
2.4 Energy bands and DOS for one lattice full coverage ferromagnetism
2.5 Energy bands and DOS for full coverage antiferromagnetism
3.1 Comparison of the coefficients an and bn for clean graphene
3.2 Density of states per site for single layer clean graphene obtained using the recursive method
3.3 Close up of the DOS for single layer clean graphene obtained using different terminators
3.4 Coefficients an and bn for the antiferromagnetic phase at T = 0 and x = 0.7
3.5 DOS per site for the antiferromagnetic phase at T = 0 and x = 0.7
4.1 Critical temperature for one-lattice ferromagnetism
4.2 DOS at T = 0 for one lattice ferromagnetism
4.3 Evolution of the spin resolved gap with adatom concentration in the one lattice ferromagnetic phase
4.4 DOS for T < Tc in the one lattice ferromagnetic phase
4.5 Electronic magnetization for one lattice ferromagnetism
4.6 DOS in the paramagnetic regime with two different adatom concentrations
4.7 Critical temperature for antiferromagnetism
4.8 DOS for full coverage antiferromagnetism at different temperatures
4.9 DOS for partial coverage antiferromagnetism, at T = 0
4.10 DOS for the antiferromagnetic phase at T = 0 and x = 0.99
4.11 Evolution of the energy gap with adatom concentration in the antiferromagnetic phase
4.12 DOS for the paramagnetic phase
4.13 Close up of the DOS for the paramagnetic phase at x = 0.1
4.14 DOS for the antiferromagnetic phase at T = 0.95Tc and x = 0.75
4.15 Close up of the DOS around the pseudogaps for several different temperatures
4.16 Difference between the DOS at the Fermi level for the sulfur doped graphene and the clean graphene at different temperatures
Nomenclature
Greek symbols
β The inverse temperature, (kBT)^{−1}.
µ Chemical Potential.
Roman symbols
N Number of sites in a graphene layer.
Nimp Number of impurities.
d Number of unitary cells in one direction.
h Planck Constant.
kB Boltzmann Constant.
T Temperature.
Tc Critical Temperature.
Glossary
DOS Density of States is the number of states per energy interval.
ED Exact Diagonalization is a numerical method to obtain the eigenvalues of a matrix.
GF Green Function is related to the inverse of the Hamiltonian operator; it can be used to solve Schrödinger's Equation.
LDOS Local Density of States is the contribution of one state to the total density of states.
NN Nearest Neighbours are the closest neighbouring sites to a specific lattice site.
Chapter 1
Introduction
1.1 On the flatland: the rise of 2D materials
The study of two-dimensional materials was initially seen as an academic endeavour, an interesting exercise useful for the development of techniques that would be applicable to more "realistic" problems, namely three-dimensional structures. This was because it was thought that a true two-dimensional lattice would be unstable, an idea that eventually led to the important Mermin-Wagner Theorem [2]. At finite temperatures, thermal fluctuations would be large enough to allow a two-dimensional layer to reassemble into some three-dimensional structure¹. This view of 2D materials came to be revised when stable 2D materials were discovered.
The first instance of isolation of a two dimensional material in a controlled way was the experimental
discovery of graphene in 2004, in a work by Andre Geim and Konstantin Novoselov [4], which, along with
successive studies of its properties, would give them the 2010 Nobel Prize. Graphene is a monolayer
of graphite formed by carbon atoms arranged in a honeycomb lattice, which can be described as two
interpenetrating triangular sublattices (A and B). Originally, Geim and Novoselov managed to isolate
graphene by successively removing layers from a graphite bulk until they were left with a layer only one
atom thick, a process known as micromechanical cleavage. Other methods for isolating graphene
include [5]:
• Chemical exfoliation: using an acid to oxidise graphite resulting in an insulating layer of graphene
oxide, which could then be reduced using a substance like hydrazine hydrate to form graphene;
• Epitaxial growth by chemical vapor deposition of hydrocarbons on metal substrates. Hydrocarbon
gases are heated in the presence of a metallic substrate. As they decompose, flakes of mono and
few layers graphene are deposited on the substrate surface. The substrate works as a catalyst for
the reaction and affects the quality of the graphene samples obtained;
¹The theorem, however, prohibits only the existence of a perfect 2D lattice, with Dirac-delta-like mass distribution and Bragg peaks. That does not happen in materials like graphene, as these peaks are in fact smeared out [3]. (For a 2D material to exist, we need only the local normals to the surface to display long-range order (orientational order), with a mass distribution originating, say, Lorentzian-like Bragg peaks, but not perfect Dirac-delta peaks.)
• Thermal decomposition of SiC. In this method a bulk crystal of silicon carbide (SiC) is heated so
that the Si atoms are sublimated, leaving graphene sheets behind.
Graphene has several properties that contributed to its rise in popularity since its discovery, both in
terms of theoretical physics and real world applications. The carbon atoms in graphene are surrounded
by three other C atoms that belong to a different sublattice, bonding through in-plane hybridized sp² orbitals (a mixing of the 2s with two 2p orbitals), which give the lattice its robustness. The remaining p orbitals
are oriented perpendicularly to the plane and bond between themselves forming π bands. The electrons
from these bands play a major role on the unusual physical properties of graphene. The electronic bands
of monolayer graphene can be described by a simple nearest-neighbour hopping Hamiltonian, resulting
in a structure where the valence and conduction bands meet at the vertices of the hexagonal Brillouin
Zone (BZ) (as depicted in Fig. 1.1), making it a zero gap semiconductor. As the opening of a sizeable
band gap is a fundamental property of the electronic devices for modern technology, much effort has
been put into engineering a gapped form of graphene. By manipulating the geometry of a graphene
monolayer we can form nanoribbons that exhibit a band gap, but it is too small for application in current
technological devices [6]. Bilayer graphene, obtained by stacking two monolayers on top of each other, has a small gap that can be controlled by an external electric field [7].
Figure 1.1: Band structure of a graphene monolayer. The valence and conduction bands touch at the vertices of a hexagonal BZ. At those points, the bands have a conical shape, making the charge carriers effectively massless. Image taken from Ref. [8].
Around the Fermi level, the energy bands are linear, so the charge carriers in a graphene monolayer completely lose their effective mass, behaving as massless Dirac fermions. This electronic system thus provides a great framework to study fundamental physics such as QED and quantum phenomena. A
prime example is the observation of the anomalous integer quantum Hall Effect (IQHE), possible even
at room temperature [9]. The Klein Paradox (the high probability tunneling of the electron to potential-
induced classically forbidden regions) is another phenomenon that can be studied using graphene [10].
The charge carriers suffer very little scattering by lattice vibrations, which results in high mobility
(ballistic transport), making graphene a great electrical and thermal conductor. It is the strongest material
ever tested, with a tensile strength 500 times greater than that of steel. Because of this, there are studies to use graphene as a reinforcement material for biological composites [11]. Finally, one of its greatest
assets is the possibility to easily study, change and control its properties, using dopants, electromagnetic
fields or changing its geometry, making it an incredibly versatile substance. This is possible because,
being a 2D material, the whole sample is easily accessible. An extensive review of the electronic properties of graphene can be found in Refs. [5, 8].
1.2 Magnetism in Graphene
In a technological era where electronic devices permeate the majority of our daily lives, the demand for
more powerful, efficient and faster technology does not seem to be dwindling. The fact is that silicon
based electronics has experienced a tremendous growth in the past decade but that growth is coming
to a halt. Unable to keep up the pace set in the earlier years of this silicon era, the companies and
scientific community in this field have been increasingly concentrating their efforts on finding suitable alternatives. This search bore promising fruits such as the already mentioned graphene. In the midst of this
paradigm shift, other scientific/technological areas began to gain prominence, particularly spintronics.
Spintronics is the usage and control of the electron spin to build solid state devices. These devices are
then sensitive to spin polarized currents. The prototype device that is already in use in industry as a read
head and a memory-storage cell is the giant-magnetoresistive (GMR) sandwich structure which consists
of alternating ferromagnetic and nonmagnetic metal layers. The resistance of this device depends on
the relative magnetization of these layers. There has been a lot of effort to improve GMR-type devices
as well as finding new ways of generating and using spin polarized currents.
Because of the long spin-relaxation length and its high mobility charge carriers (ballistic fermions),
graphene provides a great arena to develop spin-polarized devices and study spintronics. Now, in-
trinsically magnetic materials are ideal for the development of spintronic devices, as they allow for the
generation of spin polarized currents, reading the associated signal and even the manipulation of spin
polarization. Ideally one would like to have a material displaying both long spin-relaxation length and a
magnetic response.
In the absence of d- or f-electrons, the carbon atoms do not have a magnetic moment, so graphene displays a very weak magnetic response. No ferromagnetic phase was found for temperatures above 2 K. The paramagnetic response is also much smaller than anticipated. In fact, graphene is strongly
diamagnetic, much like its 3D counterpart graphite [12].
This problem can be circumvented using the fact that we can change properties of graphene rather
easily. The question is then, how to induce some sort of controllable magnetic behaviour? Since pure
graphene does not allow for it, the solution resides in introducing defects. This can be realized in
two quite opposite manners: vacancies or doping. Vacancies are, simply put, holes in the structure
of graphene, due to the removal of carbon atoms, and can be created by irradiation with high energy
particles. Depending on the irradiating particles, the vacancies can have interstitial impurities, such as
hydrogen atoms. The absence of a carbon atom will create three dangling bonds in the vicinity of the
vacancy and a localized mid-gap state. The filling of these states can create a net magnetic moment.
Theoretical calculations using density functional theory (DFT) have shown that depending on whether
the vacancies are in the same or in different sublattices, the coupling can be either ferromagnetic or
anti-ferromagnetic [13].
Another method for magnetizing graphene is adsorption, which consists in the adhesion of atoms to the
two dimensional layer. The adherent atoms can generate magnetic moments localized around them. It
was found that low concentrations of atomic hydrogen can induce a net magnetic moment in graphene
[13].
1.3 Motivation and Objectives
The absence of intrinsic magnetism in graphene is a major hurdle to its applicability in future spintronic devices; however, this hurdle is not definitive. The main conclusion to take from the last section is that graphene can display magnetic behaviour suitable for developing and studying new spintronic devices when we include defects. With this in mind, it becomes highly important to study how these defects alter the physical properties of the original material, as well as the interaction
between the defects themselves. For example, vacancies and adatoms are mobile (the adatoms having
higher mobility), leading to effects like recombination and formation of clusters that can significantly
change the magnetic response of the material [13]. Another example would be hydrogenated graphene,
named graphane, where every carbon atom is bound to a hydrogen atom. This bonding occurs in
alternating sides of the plane, and, remarkably, results in an insulator with a band gap of more than 3
eV. The stability of such an extended two-dimensional hydrocarbon was theoretically predicted [14] and
subsequently experimentally confirmed [15].
Our work focuses on the interaction between magnetic adatoms in graphene and their effect on the
electronic properties of the host material. Our main objectives are finding the answers to the following
questions:
• Is there a magnetic phase with magnetic adatom ordering in graphene?
• If so, how does it depend on the adatom concentration?
• How does the Density of States (DOS) of the underlying electronic system depend on the presence
of the adatoms (magnetic and non-magnetic phase, adatom concentration, positional disorder of
adatoms)?
• Is there a gap opening in graphene?
• If so, how does it depend on the underlying magnetic phase?
In order to answer these questions we shall undertake a numerical approach using the following tools: the variational mean field method; diagonalization of finite size systems with finite-size analysis; and the recursion method to determine the local Green function in disordered systems.
1.4 State of the Art
The study of impurities in materials has always been a hot topic in condensed matter physics due to the
fact that the purification of materials is usually an expensive process and due to the effect said impurities
have on the host, as mentioned in previous sections. One of the most used models for the description
of a magnetic impurity in a host is the Anderson Model, developed by P. Anderson in 1961 to study
the localization of magnetic moments in metals [16]. This work had far reaching implications and the
Anderson Model is still used to treat these kinds of problems. More recently, it was used to show the
formation of local magnetic moments in doped graphene and the broadening of the energy levels of
the adatoms, as well as the possibility of controlling these magnetic moments via the application of an
external electric field [17]. The broadening of the adatoms energy levels allows for the existence of a
magnetic phase even when the energy of the unfilled orbital of the adatom lies above the Fermi energy.
The Anderson Model originates an interaction between the spins of the conduction electrons and the
impurity spin: the s–d interaction. As the conduction electrons are mobile, they act as mediators and
couple the different adatoms. This process was first studied in a series of seminal papers by Ruderman,
Kittel, Kasuya and Yosida [18, 19, 20], and is known as the RKKY interaction. In unpolarized graphene
this coupling can be either ferromagnetic (if the impurities are located in the same sublattice) or anti-
ferromagnetic (if they belong to different sublattices), decaying as ∼ 1/R³ in the undoped case [21] and as ∼ 1/R² in the doped case [22], R being the distance between impurities. In studies where an
RKKY-like interaction is used for the coupling between two spins in the Heisenberg model, the fact that the coupling should also depend on the impurity concentration is never taken into account [23, 24]. On
one hand, with fewer impurities, the average distance between impurities increases. On the other, fewer impurities mean weaker disorder, so the electrons couple the magnetic impurities more effectively.
Adatom order/disorder should play an important role in the magnetic properties of graphene, as
Monte Carlo simulations [25] have shown that, while a perfectly covered graphene sheet has a strong
tendency towards antiferromagnetism, accounting for the presence of ripples (which affect the adsorption
probability) leads to different magnetic behaviours, namely ferromagnetism and even glassiness (frozen
random orientation spin configuration). For non-magnetic adatoms, recent studies indicate that the inter-
action between randomly positioned adatoms can be strong enough to establish long range correlations,
forming a Kekulé-type lattice, and opening a gap in the electronic spectrum [26]. It remains to be seen
whether the same could be established for magnetic adatoms.
There is still a lack of experimental evidence for adatom-induced magnetism in graphene. Recent
experiments have tried to verify the aforementioned results. There was an observation of an anomalous
saturation length in dilute fluorinated graphene [27], evidence of spin-flip scattering due to local magnetic
moments induced by adatoms. The rate of this scattering is controllable by adatom concentration and
carrier density, suggesting that fluorinated graphene could be used as a spin FET. More recently there
was evidence for a gap opening near the Fermi energy in sulfur-decorated graphene, accompanied by magnetic hysteresis behaviour [1]. This result was obtained for relatively large sulfur concentrations
(≈ 10%). There have also been signs of magnetic behavior reported in graphene-BN heterostructures
intercalated with gold [28].
1.5 Thesis Outline
This work is structured in the following manner:
• Chapter 2 introduces the Tight Binding model for single layer clean graphene. The energy spec-
trum and density of states will serve as terms of comparison for the results to come. Then the full
Hamiltonian that is used to describe the problem of adatom-doped graphene is explained, with the analysis of the Anderson Model and the s-d interaction. To end the chapter, we outline how we can determine the critical temperature for magnetic phases using Variational Mean Field Theory;
• In Chapter 3 we discuss two different numerical methods used to extract the eigenvalues, or the
density of states, for the system: exact diagonalization and a recursive method to compute the
Green Function;
• Chapter 4 presents the results obtained for two magnetic phases: antiferromagnetism and one
lattice ferromagnetism. We discuss the dependence of the critical temperature with the adatom
concentration and analyse the effect of adatom disorder and temperature on the underlying elec-
tronic system, such as possible polarization of charge carriers;
• Chapter 5 summarizes the results, with some final remarks, and gives insight into possible future work.
Besides these main chapters, there is also Appendix A, where we present the Canonical Transformation and use it to obtain the s-d interaction for the free electron model. From the s-d interaction we show how we can arrive at the RKKY interaction in graphene. A proof of the Bogoliubov Inequality is also presented.
Chapter 2
Theoretical Model
In this chapter we lay out the theoretical framework that is the basis for our approach to this problem. It starts with a brief overview of the tight-binding model for single layer graphene, continues with an analysis of the s-d Interaction and concludes with the Variational Mean Field Method.
2.1 Simple Graphene
Figure 2.1: Graphene structure with the two-atom unitary cell highlighted in dashed green. The two different sublattices are illustrated by the colors red and blue; the nearest-neighbour distance is d = 1.42 Å.
The graphene lattice can be described by a two-atom unitary cell formed by the basis vectors:

$$\mathbf{a}_1 = \frac{a}{2}\left(\sqrt{3}\,\mathbf{e}_x + \mathbf{e}_y\right), \qquad \mathbf{a}_2 = \frac{a}{2}\left(\sqrt{3}\,\mathbf{e}_x - \mathbf{e}_y\right),$$

where $a$ is related to the distance $d$ between nearest neighbours (NN) by $a = \sqrt{3}\,d$. This lattice can be generated by two interpenetrating triangular sublattices A and B, as shown in Figure 2.1. Associated
with these basis vectors are the following reciprocal vectors:

$$\mathbf{b}_1 = \frac{2\pi}{a}\left(\frac{\sqrt{3}}{3}\,\mathbf{e}_x + \mathbf{e}_y\right), \qquad \mathbf{b}_2 = \frac{2\pi}{a}\left(\frac{\sqrt{3}}{3}\,\mathbf{e}_x - \mathbf{e}_y\right).$$
These vectors form a unitary cell just like the one for the direct space, but rotated. In this hexagonal cell we define two inequivalent vertices, $\mathbf{K} = \frac{2\pi}{a}\left(\frac{\sqrt{3}}{3}, \frac{1}{3}\right)$ and $\mathbf{K}' = \frac{2\pi}{a}\left(\frac{\sqrt{3}}{3}, -\frac{1}{3}\right)$, called the Dirac points (whose meaning will be clarified below). Other points worth mentioning are $\Gamma = (0, 0)$ and the point halfway between $\mathbf{K}$ and $\mathbf{K}'$, $\mathbf{M} = \frac{2\pi}{a}\left(\frac{\sqrt{3}}{3}, 0\right)$.
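As a quick numerical sanity check, the basis and reciprocal vectors can be verified to satisfy the duality relation $\mathbf{a}_i \cdot \mathbf{b}_j = 2\pi\delta_{ij}$. A minimal Python sketch (the lattice constant $a$ is set to 1 purely for illustration):

```python
import numpy as np

a = 1.0  # lattice constant (arbitrary units, for illustration)

# Basis vectors of the honeycomb Bravais lattice
a1 = (a / 2) * np.array([np.sqrt(3), 1.0])
a2 = (a / 2) * np.array([np.sqrt(3), -1.0])

# Reciprocal vectors as given in the text
b1 = (2 * np.pi / a) * np.array([np.sqrt(3) / 3, 1.0])
b2 = (2 * np.pi / a) * np.array([np.sqrt(3) / 3, -1.0])

# Duality relation: a_i . b_j = 2*pi*delta_ij
for ai, bi in [(a1, b1), (a2, b2)]:
    assert np.isclose(ai @ bi, 2 * np.pi)
assert np.isclose(a1 @ b2, 0.0) and np.isclose(a2 @ b1, 0.0)
```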
Each atom is surrounded by three others that belong to a different sublattice. They are connected by the vectors $\boldsymbol{\delta}_1 = d\,\mathbf{e}_x$, $\boldsymbol{\delta}_2 = d\left(-\frac{1}{2}\mathbf{e}_x + \frac{\sqrt{3}}{2}\mathbf{e}_y\right)$ and $\boldsymbol{\delta}_3 = -d\left(\frac{1}{2}\mathbf{e}_x + \frac{\sqrt{3}}{2}\mathbf{e}_y\right)$. Considering only hopping between these NN we can write the following Hamiltonian for the single layer graphene:

$$H = -t\sum_{\mathbf{R},\boldsymbol{\delta},\sigma} a^\dagger_{\mathbf{R},\sigma}\, b_{\mathbf{R}+\boldsymbol{\delta},\sigma} + \text{h.c.} \tag{2.1}$$
where $t \approx 3\,\text{eV}$ is the hopping coefficient and $\sigma$ is the spin label. The operators $a_{\mathbf{R}}$ ($a^\dagger_{\mathbf{R}}$) and $b_{\mathbf{R}}$ ($b^\dagger_{\mathbf{R}}$) are the electronic annihilation (creation) operators for the two different sublattices, acting on the unit cell with position label $\mathbf{R}$.
Making use of the translational symmetry and going to k-space we can write:

$$a_{\mathbf{k}} = \frac{1}{\sqrt{N_R}}\sum_{\mathbf{R}} e^{-i\mathbf{k}\cdot\mathbf{R}}\, a_{\mathbf{R}}, \tag{2.2}$$

and similarly for $b_{\mathbf{R}}$. $N_R$ is the number of unitary cells. Inserting this relation into equation (2.1) yields:

$$H_0 = -t\sum_{\mathbf{k},\sigma} f_{\mathbf{k}}\, a^\dagger_{\mathbf{k},\sigma} b_{\mathbf{k},\sigma} + \text{h.c.} \tag{2.3}$$
where we have defined:

$$f_{\mathbf{k}} = \sum_{\boldsymbol{\delta}} e^{-i\mathbf{k}\cdot\boldsymbol{\delta}} = 2\cos\left(\frac{a}{2}k_y\right) e^{i\frac{\sqrt{3}}{6}a k_x} + e^{-i\frac{\sqrt{3}}{3}a k_x}. \tag{2.4}$$
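The closed form of $f_{\mathbf{k}}$ can be checked against the direct sum over the three NN vectors. A small illustrative Python sketch (with $a = 1$) compares the two expressions at random k-points:

```python
import numpy as np

a = 1.0
d = a / np.sqrt(3)  # NN distance, since a = sqrt(3)*d

# The three nearest-neighbour vectors delta_1, delta_2, delta_3
deltas = [d * np.array([1.0, 0.0]),
          d * np.array([-0.5, np.sqrt(3) / 2]),
          -d * np.array([0.5, np.sqrt(3) / 2])]

def f_sum(k):
    """f_k as the direct sum over delta (first equality of Eq. 2.4)."""
    return sum(np.exp(-1j * k @ dlt) for dlt in deltas)

def f_closed(k):
    """Closed form of Eq. (2.4)."""
    kx, ky = k
    return (2 * np.cos(a * ky / 2) * np.exp(1j * np.sqrt(3) * a * kx / 6)
            + np.exp(-1j * np.sqrt(3) * a * kx / 3))

rng = np.random.default_rng(0)
for k in rng.uniform(-4, 4, size=(10, 2)):
    assert np.isclose(f_sum(k), f_closed(k))
```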
Equation (2.3) can be written in a 2×2 matrix form. For that, let us ignore for the moment the spin label $\sigma$, since both spin projections are degenerate, and use the 2-component spinor $\Psi_{\mathbf{k}} = (a_{\mathbf{k}}, b_{\mathbf{k}})$. So now we have:

$$H_0 = \sum_{\mathbf{k}} \Psi^\dagger_{\mathbf{k}} H_{\mathbf{k}} \Psi_{\mathbf{k}}, \tag{2.5}$$

with
$$H_{\mathbf{k}} = \begin{pmatrix} 0 & -t f_{\mathbf{k}} \\ -t f^*_{\mathbf{k}} & 0 \end{pmatrix}. \tag{2.6}$$

The energy bands in this model have the simple form:

$$\epsilon_0(\mathbf{k}) = \pm t\,|f_{\mathbf{k}}|. \tag{2.7}$$
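The band structure above is straightforward to evaluate numerically. A minimal sketch (with $t = 3$ eV as quoted in the text and $a = 1$ for illustration) confirms that the bands of Eq. (2.7) match direct diagonalization of the Bloch Hamiltonian (2.6) and vanish at $\mathbf{K}$:

```python
import numpy as np

a, t = 1.0, 3.0  # lattice constant (arb. units) and hopping t ≈ 3 eV

def f(kx, ky):
    """Structure factor f_k of Eq. (2.4)."""
    return (2 * np.cos(a * ky / 2) * np.exp(1j * np.sqrt(3) * a * kx / 6)
            + np.exp(-1j * np.sqrt(3) * a * kx / 3))

def bands(kx, ky):
    """Energy bands eps_0(k) = ±t|f_k| of Eq. (2.7), sorted ascending."""
    return np.array([-t, t]) * np.abs(f(kx, ky))

# The two bands touch at the Dirac point K = (2*pi/a)(sqrt(3)/3, 1/3)
K = (2 * np.pi / a) * np.array([np.sqrt(3) / 3, 1 / 3])
assert np.allclose(bands(*K), 0.0)

# Cross-check against diagonalization of the 2x2 Bloch Hamiltonian (2.6)
k = np.array([0.3, -1.1])  # an arbitrary k-point
fk = f(*k)
Hk = np.array([[0.0, -t * fk], [-t * np.conj(fk), 0.0]])
assert np.allclose(np.linalg.eigvalsh(Hk), bands(*k))
```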
At $\mathbf{k} = \mathbf{K}$ and $\mathbf{k} = \mathbf{K}'$, $f_{\mathbf{k}} = 0$; therefore the system is gapless, with six points in the Brillouin Zone where the two bands meet (see Figure 1.1). The reason why these points are called Dirac points can be understood if we expand $f_{\mathbf{k}}$ around those points, with the displacement vector $\mathbf{q} = \mathbf{k} - \mathbf{K}$. Doing that we get:

$$f_{\mathbf{K}} \approx a\frac{\sqrt{3}}{2}\, e^{i\frac{5\pi}{6}} (q_x + i q_y), \qquad f_{\mathbf{K}'} \approx a\frac{\sqrt{3}}{2}\, e^{i\frac{5\pi}{6}} (q_x - i q_y). \tag{2.8}$$
So the expanded form of the Hamiltonian (2.6) is:

$$H_{\mathbf{K}} \approx -\frac{at\sqrt{3}}{2}\begin{pmatrix} 0 & e^{i\frac{5\pi}{6}}(q_x + i q_y) \\ e^{-i\frac{5\pi}{6}}(q_x - i q_y) & 0 \end{pmatrix}, \qquad H_{\mathbf{K}'} \approx -\frac{at\sqrt{3}}{2}\begin{pmatrix} 0 & e^{i\frac{5\pi}{6}}(q_x - i q_y) \\ e^{-i\frac{5\pi}{6}}(q_x + i q_y) & 0 \end{pmatrix}. \tag{2.9}$$
The phase $5\pi/6$ (and the minus sign) can be eliminated by means of a unitary transformation. Introducing the Pauli matrices

$$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix},$$

we can then write:
$$H_\xi = \hbar v\,(\sigma_x q_x + \xi \sigma_y q_y), \tag{2.10}$$

where the valley label $\xi = \pm 1$ for $\mathbf{K}'$ and $\mathbf{K}$ respectively, and $v = \frac{at\sqrt{3}}{2\hbar}$ is the electron velocity. Equation (2.10) is the 2×2 version of the Dirac equation for a massless particle. So, at the points $\mathbf{K}$ and $\mathbf{K}'$ the electrons obey a Dirac-like equation (hence the name Dirac points) and are effectively massless, moving with constant velocity $v$. From equation (2.7) the energy bands take the form $\epsilon_0 = \pm \hbar v|\mathbf{q}|$,
showing a linear dispersion relation. For clean graphene the Fermi level is exactly at E = 0, which, as
we already saw, corresponds to the Dirac points. What we have done then is the low energy description
for the system, where the physics at work are governed mainly by what is happening near the Dirac
points. This low energy limit will not be used in this work, because adding impurities to the graphene
layer may cause the energy scale to fall out of this limit.
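The linear dispersion can be checked numerically: for a small displacement $\mathbf{q}$ from $\mathbf{K}$, the exact band energy $t|f_{\mathbf{K}+\mathbf{q}}|$ should approach $\hbar v|\mathbf{q}| = \frac{at\sqrt{3}}{2}|\mathbf{q}|$. A sketch with illustrative values ($a = 1$, $t = 3$):

```python
import numpy as np

a, t = 1.0, 3.0  # illustrative units

def f(kx, ky):
    """Structure factor f_k of Eq. (2.4)."""
    return (2 * np.cos(a * ky / 2) * np.exp(1j * np.sqrt(3) * a * kx / 6)
            + np.exp(-1j * np.sqrt(3) * a * kx / 3))

K = (2 * np.pi / a) * np.array([np.sqrt(3) / 3, 1 / 3])
q = 1e-4 * np.array([np.cos(0.7), np.sin(0.7)])  # small displacement from K

exact = t * np.abs(f(*(K + q)))                        # full band energy
linear = (a * t * np.sqrt(3) / 2) * np.linalg.norm(q)  # hbar*v*|q|, v = a*t*sqrt(3)/(2*hbar)
assert np.isclose(exact, linear, rtol=1e-3)
```

The relative deviation is of order $a|\mathbf{q}|$, so the agreement improves as $\mathbf{q} \to 0$, consistent with Eq. (2.8) being the leading term of the expansion.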
With the energy spectrum we can obtain the density of states (DOS) for clean graphene. The DOS per site of a system with dispersion relation $\epsilon(\mathbf{k})$ is:

$$\rho(E) = \frac{2}{N}\sum_{\mathbf{k}} \delta\left(E - \epsilon_0(\mathbf{k})\right), \tag{2.11}$$
where $N$ is the number of sites in the lattice, and we are summing over the $\mathbf{k}$ vectors in the first Brillouin Zone. The factor of 2 accounts for spin degeneracy (if the system is not spin degenerate we must also sum over spin projections). This sum can be computed analytically for graphene and the final
expression is [29]:
ρ(E) = (4|E|/(t²π²)) ×
    (1/√F(E)) K(4|E|/F(E)),      if |E|/t < 1,
    (1/√(4|E|)) K(F(E)/(4|E|)),  if 1 < |E|/t < 3,    (2.12)

where K(m) = ∫₀¹ dy [(1 − y²)(1 − m²y²)]^{−1/2} is the complete elliptic integral of the first kind and F(x) = (1 + |x|)² − (x² − 1)²/4, with energies in units of t inside K and F. This DOS is depicted in Figure 2.2. The Van Hove singularities are the two
points where the DOS gets infinitely large, located at E = ±t. We can also see the linear behaviour
expected near E = 0.
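These features can also be checked numerically by sampling ε₀(k) = ±t|f_k| on a uniform grid of the Brillouin Zone and histogramming the energies. The following is a minimal Python/NumPy sketch (the function name and grid parameters are our illustrative choices, not part of the thesis code); it uses the fact that on an nk × nk grid of the reciprocal unit cell, k·a_i = 2πm/nk, so f_k = 1 + e^{ik·a1} + e^{ik·a2}:

```python
import numpy as np

def graphene_dos(nk=300, nbins=120, t=1.0):
    """Estimate the clean-graphene DOS by sampling eps0(k) = +-t|f_k|
    on an nk x nk grid of the Brillouin Zone and histogramming.
    On the grid, k.a_i = 2*pi*m/nk, so f_k = 1 + e^{i k.a1} + e^{i k.a2}."""
    phase = np.exp(2j * np.pi * np.arange(nk) / nk)
    fk = 1.0 + phase[:, None] + phase[None, :]   # f_k over the grid
    eps = t * np.abs(fk).ravel()
    energies = np.concatenate([eps, -eps])       # both bands
    hist, edges = np.histogram(energies, bins=nbins,
                               range=(-3 * t, 3 * t), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist
```

The resulting histogram is normalized to unity, vanishes (linearly) near E = 0, and its largest bins sit at the Van Hove singularities E = ±t, in agreement with Figure 2.2.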
Figure 2.2: Density of states per site for single layer clean graphene. Notice the Van Hove singularities at E = ±t. The DOS vanishes linearly at E = 0.
2.2 The s-d Interaction
Let’s now add impurities to the system. We assume adsorbed impurities that do not alter the underlying
lattice structure and describe them using the Anderson Model [16]. The Hamiltonian for the Anderson
model comprises three terms:
The free electron energy for the s-electrons:
H_s = Σ_{kσ} ε_k c†_{kσ} c_{kσ}, (2.13)
where the energy ε_k is measured relative to the chemical potential µ. For the case of graphene this term is obtained from equation (2.3) after diagonalization of the 2×2 Hamiltonian in (2.6).
The d-electron term:
H_d = ε_d (n_{d↑} + n_{d↓}) + U n_{d↑} n_{d↓}, (2.14)

where ε_d is the energy of the d-orbital of the atom and the last term takes into account the repulsion between electrons on that orbital, should there be more than one.
Additionally, there is a hybridization term, allowing for the electrons to hop on and off the impurity
orbital:
H_hyb = (1/√V) Σ_{kσ} (V_{dk} c†_{kσ} d_σ + V_{kd} c_{kσ} d†_σ), (2.15)

with V_{dk} = V*_{kd}. This term is responsible for the coupling between the conduction electrons and the
d-electrons of the impurity.
We see that, in the absence of hybridization, it should be energetically favourable for the system to have only one electron in the d-orbital if ε_d + U > 0 as long as ε_d < 0 (which we shall assume). The
hybridization will allow the electron from the orbital to change places with one from the conduction band.
Furthermore, if Vkd is not too strong (and we can treat it as a perturbation) there will always be one
electron on the atom. However, the spin of that electron may change with each exchange.
Using a Canonical Transformation we can compute the second order effects of this hybridization, arriving at the s-d Interaction:
H_sd = −J Σ_l S_l · s(R_l). (2.16)
This is a spin-spin interaction between the conduction electron spins, s, and the impurity spins, S_l,
located at site l. As the conduction electrons hop on and off the adatom, spin flips may occur, altering the
electronic system. A more detailed analysis of the Canonical Transformation and the Anderson Model,
as well as the derivation of this result, can be found in Appendices A.1 and A.2. Here, for simplicity, we take the coupling J to be constant. We also treat the impurity spins as classical variables, meaning
S � 1.
In cases with no disorder, such as a ferromagnetic phase in a fully covered lattice, we can easily tackle the problem in k-space and get an exact solution. In most scenarios, however, arriving at a
solution requires a numerical approach, such as exact diagonalization (ED) or a recursive method for
the Green Function (GF). These will be analysed in Chapter 3.
We can take this analysis one step further if we consider two consecutive interactions between the
conduction electrons and two impurities. In doing so we can write an effective interaction between the
impurities, driven by the conduction electrons: this is the RKKY interaction. In section A.3 we derive this interaction for the case of graphene and show that it decays with the distance between impurities r as (1 + cos[(K − K′) · r])/r³. A downside of using this approach is that we lose information about the
electronic system. Since the study of the effect of the adatoms on the electrons of the host is one of our
main concerns, we will instead use the s-d Interaction in our model.
2.3 The Variational Mean Field Theory
In the last section we mentioned the exact diagonalization and the recursive method for the GF, as
numerical approaches to solve the problem of a graphene lattice doped with N_imp adatoms. They allow
us to compute the energy spectrum of the system or its DOS. In this section we will explain how we can
use this information to study magnetic phases, accounting for the effect of the temperature T .
The Variational Mean Field Theory tries to approximate a given system by a mean field description with a set of variational parameters. By relating these parameters to some physical quantity (the magnetization, for example) and analysing the dependence of the free energy, F, on them, we can study the stability of
different regimes. The Variational Mean Field method is based on the Bogoliubov Inequality:
F ≤ FMF + 〈H −HMF 〉MF . (2.17)
Here, the subscript MF refers to the mean field description of the system, HMF is the mean field
Hamiltonian that is used to calculate the average 〈· · · 〉MF , and H is the Hamiltonian describing the
system. The equality in (2.17) happens when HMF = H. A proof of this inequality is provided in
Appendix A.4.
2.3.1 Effective Hamiltonian
Our system is described by the following Hamiltonian:
H = −t Σ_{R,δ,σ} (a†_{R,σ} b_{R+δ,σ} + b†_{R+δ,σ} a_{R,σ}) − J Σ_{i=1}^{N_imp} S_i · s(R_i) = H_TB + H_sd. (2.18)
We will treat the impurity spins Si as classical variables (S � 1). Doing this, the problem becomes
a single particle problem for the graphene electrons under the effect of a (disordered) potential created
by the impurities. For each spin configuration of the classical spins, we can solve exactly the electronic
system. So we can integrate out the electrons and derive an effective Hamiltonian for the classical spins,
Heff . It is this effective Hamiltonian that will be treated within mean field theory.
Let us proceed to the calculation of H_eff. We start off by writing the grand canonical partition function
Z:
Z = ∫ d[S] Tr e^{−β(H_TB + H_sd − µN̂)}, (2.19)
where µ is the chemical potential, N̂ is the total number operator, and β = (kBT )−1. The integral is
calculated over all directions of all impurities spins: d[S] = dΩ1dΩ2 · · · dΩNimp . We compute the trace
over the electrons states getting:
Z = ∫ d[S] Π_n (1 + e^{−β(E_n(S)−µ)}) = ∫ d[S] e^{−βH_eff(S)}, (2.20)

where we have defined the effective Hamiltonian:
H_eff(S) = −k_B T Σ_n ln(1 + e^{−β(E_n(S)−µ)}). (2.21)
The energy levels En(S), that are the solutions to the Hamiltonian (2.18), depend on the spin config-
uration of the adatoms and are to be obtained using numerical methods. Equation (2.21) can be written
using the DOS of the system, defined in equation (2.11):
H_eff(S) = −k_B T ∫ dE ρ_S(E) ln(1 + e^{−β(E−µ)}). (2.22)
The information about the classical spin configuration is now contained in ρ_S(E).
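Given a tabulated DOS, Eq. (2.22) reduces to a one-dimensional quadrature. A minimal Python/NumPy sketch (the function name is ours; units with k_B = 1 by default):

```python
import numpy as np

def effective_hamiltonian(energies, dos, T, mu=0.0, kB=1.0):
    """Evaluate Eq. (2.22): H_eff = -kB*T * integral of
    rho(E) * ln(1 + exp(-beta*(E - mu))) over a tabulated DOS.
    np.logaddexp(0, x) = ln(1 + e^x) is used for numerical stability
    deep inside the band, where the bare exponential would overflow."""
    beta = 1.0 / (kB * T)
    integrand = dos * np.logaddexp(0.0, -beta * (energies - mu))
    # trapezoidal quadrature over the tabulated energy grid
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1])
                      * np.diff(energies))
    return -kB * T * float(integral)
```

In the T → 0 limit this reduces, as it should, to the sum of occupied single-particle energies measured from µ; at finite T the result decreases monotonically with temperature (the entropy contribution).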
2.3.2 Mean Field Description
We now formulate a mean field description for the effective Hamiltonian (2.21). We do this by assuming that each impurity at a site i is under the effect of a mean magnetic field h_i created by all the other impurities:

H_MF = −Σ_{i=1}^{N_imp} h_i · S_i. (2.23)
In this model we have 3N_imp parameters and we cannot solve it exactly. However, we are interested in magnetic phases like ferromagnetism. In this phase there is one preferred orientation for the spins and the mean field should be constant over all sites, so that we can greatly simplify our mean field Hamiltonian using a constant h:

H_MF = −h · Σ_{i=1}^{N_imp} S_i. (2.24)
Although the physical meaning of the variational parameter h is clear, it is hard to know the range of
values it can take. However, we can relate it to the magnetization m, defined as:
m = (1/N_imp) ⟨Σ_{i=1}^{N_imp} cos θ_i⟩_MF = coth y − 1/y, (2.25)
with y = βhS and θ_i the angle between the impurity spin and h. With this we know that when all the impurity spins are aligned, m = 1, and in the paramagnetic phase, m = h = 0, giving us two limits for
this order parameter.
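Equation (2.25) is the classical Langevin function, which interpolates smoothly between these two limits. A small numerical sketch (Python; the function name is ours, and the y → 0 limit is handled by the small-y expansion coth y − 1/y ≈ y/3):

```python
import numpy as np

def langevin(y):
    """Mean-field magnetization m(y) = coth(y) - 1/y of Eq. (2.25).
    For |y| below a threshold the expression is replaced by its
    small-y expansion y/3 to avoid the 0/0 indeterminacy."""
    y = np.asarray(y, dtype=float)
    small = np.abs(y) < 1e-4
    safe = np.where(small, 1.0, y)          # dummy value where y is tiny
    m = 1.0 / np.tanh(safe) - 1.0 / safe
    return np.where(small, y / 3.0, m)
```

The function is monotonically increasing, vanishes at y = 0 (paramagnet) and approaches m = 1 for y = βhS ≫ 1 (full alignment), as stated above.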
We can now calculate the free energy using (2.17). The procedure from this point is as follows:
• We start by choosing a temperature and impurity concentration;
• For a given value of magnetization, we calculate the energy spectrum/DOS of the system using
numerical methods;
• Repeating the last step for different values of m we obtain a list of free energy points that we fit to the
polynomial:
F = a(T )m2 + b(T )m4 + c(T )m6; (2.26)
• Finally we do the same for different temperatures, which allows us to determine a critical temper-
ature Tc using the relation: a(Tc) = 0, characteristic of a second order transition.
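The last two steps can be sketched as follows (a minimal Python/NumPy illustration; the function names and the synthetic data in the test are ours, not the thesis code):

```python
import numpy as np

def landau_coeffs(m, F):
    """Least-squares fit of free-energy samples to Eq. (2.26),
    F = a m^2 + b m^4 + c m^6; returns (a, b, c)."""
    A = np.column_stack([m**2, m**4, m**6])
    coeffs, *_ = np.linalg.lstsq(A, F, rcond=None)
    return coeffs

def critical_temperature(temps, a_vals):
    """Locate Tc from a(Tc) = 0 by linear interpolation between the
    two temperatures where the quadratic coefficient changes sign,
    as expected for a second order transition."""
    temps, a_vals = np.asarray(temps, float), np.asarray(a_vals, float)
    i = np.where(np.diff(np.sign(a_vals)) != 0)[0][0]
    t1, t2, a1, a2 = temps[i], temps[i + 1], a_vals[i], a_vals[i + 1]
    return t1 - a1 * (t2 - t1) / (a2 - a1)
```

In practice the fitted a(T) is noisy because of disorder averaging, so the sign change is bracketed over several temperatures before interpolating.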
The kinds of free energy curves that we obtain are depicted in Figure 2.3. There are three main types. The one with the free energy continuously increasing, with a minimum at m = 0, is the paramagnetic case, where any magnetic ordering is energetically unfavourable; thus, we are above the critical temperature. As we lower the temperature, the free energy gets closer and closer to the x axis, and then, as we cross the critical temperature, a minimum appears at a magnetization 0 < m < 1. This is a stable phase where the system is neither paramagnetic nor completely magnetized, favouring nonetheless a finite total magnetization. Further lowering the temperature results in a more pronounced minimum that gets closer to m = 1, until finally the minimum free energy is obtained by a completely magnetized system.
Figure 2.3: Free energy curves for three different temperatures. Above the critical temperature (blue line, T = 1.32 Tc) the minimum corresponds to m = 0, which means that the system is paramagnetic. Below, but close to, the critical temperature (green line, T = 0.82 Tc) there is a minimum at a value m ≠ 0 where the system displays some magnetic ordering but without full magnetization. Far below the critical temperature (red line, T = 0.33 Tc) the minimum of the free energy corresponds to m = 1, i.e. to full magnetization of the system.
The mean field approach is an approximation to the exact Hamiltonian (2.18), so we can have a
situation where the system has an ordered magnetic phase, with a finite Tc, within mean field theory, that
is not present for the exact Hamiltonian. One way we can probe the exact results is using the Mermin-
Wagner Theorem [2]. An ordered magnetic phase in our mean field approach breaks the continuous
rotation symmetry and, in light of this theorem, is therefore ruled out at finite temperatures. We must
stress, however, that due to the 2D nature of graphene the s-d interaction coupling is expected to be
anisotropic, on physical grounds. One possible source for the anisotropy is the fact that, apart from the s-d interaction induced by the magnetic character of the adatom, adsorption has been demonstrated [30] to also lead to spin-orbit like terms which break the SU(2) rotation symmetry of the electron spin.
It has been shown that 2D long range order at finite temperatures is stabilized even for a small amount
of anisotropy [31]. Moreover, taking as an example the Heisenberg model with a long range interaction
decaying as 1/rα, the Mermin-Wagner theorem proves the absence of long range order at finite T only if
α > D+2 [23], D being the dimensionality of the system. Even though we cannot generally demonstrate
that the effective interaction between magnetic impurities is long range, in the limit where RKKY model
applies we know that the interaction should decay as 1/r3 (see Appendix A.3.2), which is below the
critical α = 4 in 2D. Even though stronger conditions exist for oscillatory interactions [32], we note that,
for the case of graphene, these oscillations do not lead to a change of sign of the coupling.
In the next sections we will illustrate the mean field description using two examples which we can
solve analytically.
2.4 Mean Field One-Lattice Ferromagnetism
The first case is the One-Lattice Ferromagnetism, where only one of the sublattices is covered with
adatoms. From expressions (2.24) and (2.25) we immediately get:
〈HMF 〉MF = −NimphSm. (2.27)
To compute F_MF we need the partition function corresponding to H_MF, which we write as Z = z^{N_imp},
with the single impurity partition function:
z = ∫ dΩ e^{βhS cos θ} = 4π sinh(y)/y, (2.28)
which gives us the mean field free energy:

F_MF = −N_imp k_B T ln(4π sinh(y)/y). (2.29)
Now we only need the energy levels to be able to calculate the free energy of the system. We can
easily do that in the case where one of the sublattices is completely covered (x = 0.5), at T = 0. The
translational invariance allows us to go to k–space. From Eqs. (2.3) and (2.16), defining the z-direction
as the direction of the spins, we have, for impurities only on sublattice B:
H = −t Σ_{k,σ} (f_k a†_{k,σ} b_{k,σ} + h.c.) − JS Σ_{k,σ,σ′} b†_{k,σ} σ^z_{σσ′} b_{k,σ′} = Σ_k Ψ†_k H_k Ψ_k, (2.30)
where we have used the Pauli matrix σz = diag(1,−1) and defined the spinor:
Ψ_k = (a_{k,↑}, a_{k,↓}, b_{k,↑}, b_{k,↓})^T. (2.31)
So now we need to compute the eigenvalues of the matrix:
H_k = [ [0, 0, −t f_k, 0],
        [0, 0, 0, −t f_k],
        [−t f*_k, 0, −JS, 0],
        [0, −t f*_k, 0, JS] ].
Getting the eigenvalues from this Hamiltonian is straightforward. The result is four energy bands:

E_↑ = (1/2) (−JS ± √(J²S² + 4t²|f_k|²)), (2.32a)
E_↓ = (1/2) (JS ± √(J²S² + 4t²|f_k|²)). (2.32b)
The two spin-projected DOS are the same but shifted by JS, and the system opens a spin-resolved gap of value ∆ = JS centered at ±JS/2 at the Dirac points, where f_k = 0. There, the spin up high energy band and the spin down low energy band touch. In Figure 2.4(a) and (b) are depicted the energy bands along the path KMΓK in the Brillouin Zone, and the DOS per site for each spin projection, respectively.
More on this subject can be found in [33].
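These bands can be verified by diagonalizing the 4×4 matrix H_k directly. The following Python/NumPy sketch (an illustration with our own function name; f_k is passed in as a number) reproduces the spectrum of Eq. (2.32) at any k, with t|f_k| as the off-diagonal element:

```python
import numpy as np

def fm_bands(fk, t=1.0, JS=1.0):
    """Energy bands of one-lattice full-coverage ferromagnetism,
    obtained by diagonalizing the 4x4 Bloch Hamiltonian H_k in the
    basis (a_up, a_down, b_up, b_down); impurities sit on sublattice B."""
    Hk = np.array([[0.0,             0.0,             -t * fk, 0.0    ],
                   [0.0,             0.0,             0.0,     -t * fk],
                   [-t * np.conj(fk), 0.0,            -JS,     0.0    ],
                   [0.0,             -t * np.conj(fk), 0.0,    JS     ]])
    return np.linalg.eigvalsh(Hk)   # ascending eigenvalues
```

At the Dirac point (f_k = 0) this gives the levels {−JS, 0, 0, JS}, i.e. the spin-resolved gap of width JS centered at ±JS/2 described above.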
2.5 Mean Field Antiferromagnetism
Lastly we shall look at the antiferromagnetic case. Now we have Nimp = NA +NB impurities distributed
over both sublattices A and B. In this case we simplify the Hamiltonian (2.23) in a different way. We can
think of the antiferromagnetic case the same way as one lattice ferromagnetism on both sublattices, with
an opposite preferred orientation from one another. So our simplified mean field Hamiltonian is:
H_MF = −h ( Σ_{i=1}^{N_A} S^A_i − Σ_{j=1}^{N_B} S^B_j ), (2.33)
(a) Energy bands for one-lattice full-coverage ferromagnetism. (b) DOS per site for one-lattice full-coverage ferromagnetism.
Figure 2.4: One-lattice ferromagnetism with the sublattice completely covered with adatoms. In (a) we have the four energy bands along the closed line connecting the BZ points K, M and Γ. Figure (b) shows the DOS per site for each spin projection.
that is to say, the two sublattices have preferred spin orientation directions opposite to each other. Note that the impurities need not be in the same unit cell. Now we can define magnetizations
for each of the sublattices:
m_{A,B} = (1/N_{A,B}) ⟨Σ_{i=1}^{N_{A,B}} cos θ^{A,B}_i⟩_MF. (2.34)
And we have for the total magnetization m_T = m_A − m_B = 2m_A = 2m. The average mean field Hamiltonian is:

⟨H_MF⟩_MF = −N_A hS m_A + N_B hS m_B = −N_imp hS m, (2.35)
which means that we can use the magnetization of a single sublattice as the order parameter. And since the result for the mean field free energy is exactly the same as in Eq. (2.29), the only formal difference between the antiferromagnetic case and one-lattice ferromagnetism will be in the effective Hamiltonian H_eff, namely in the energy levels.
To compute the energy levels we focus on the fully covered case, at zero temperature. Once again, we can use k-space to solve this problem. We simply need to compute the eigenvalues
of:
H = −t Σ_{k,σ} (f_k a†_{k,σ} b_{k,σ} + h.c.) − JS Σ_{k,σ,σ′} (a†_{k,σ} σ^z_{σσ′} a_{k,σ′} − b†_{k,σ} σ^z_{σσ′} b_{k,σ′}), (2.36)
or, in matrix form:
H_k = [ [−JS, 0, −t f_k, 0],
        [0, JS, 0, −t f_k],
        [−t f*_k, 0, JS, 0],
        [0, −t f*_k, 0, −JS] ].
The energy bands in this case are spin degenerate and we get two symmetric bands:

E± = ±√(J²S² + t²|f_k|²). (2.37)
These two bands never touch and the system opens a gap of value ∆ = 2JS, centered at the Dirac point. In Figure 2.5(a) and (b) are depicted the energy bands along the path KMΓK in the Brillouin Zone, and the DOS per site, respectively. This antiferromagnetic phase is also studied in [33].
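As in the ferromagnetic case, the bands can be checked by diagonalizing H_k numerically (a Python/NumPy sketch, with our own function name):

```python
import numpy as np

def afm_bands(fk, t=1.0, JS=1.0):
    """Spin-degenerate bands of full-coverage antiferromagnetism,
    E = +-sqrt(J^2 S^2 + t^2 |f_k|^2), obtained by diagonalizing the
    4x4 matrix H_k in the basis (a_up, a_down, b_up, b_down)."""
    Hk = np.array([[-JS, 0.0, -t * fk, 0.0],
                   [0.0, JS, 0.0, -t * fk],
                   [-t * np.conj(fk), 0.0, JS, 0.0],
                   [0.0, -t * np.conj(fk), 0.0, -JS]])
    return np.linalg.eigvalsh(Hk)
```

At the Dirac point (f_k = 0) the four levels reduce to the doubly degenerate pair ±JS, so the gap is 2JS, as stated.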
(a) Energy bands for full-coverage antiferromagnetism. (b) DOS per site for full-coverage antiferromagnetism.
Figure 2.5: Full coverage antiferromagnetism. In (a) we have the energy bands along the closed line connecting the BZ points K, M and Γ. Figure (b) shows the DOS per site.
Chapter 3
Implementation
In the last chapter we covered the theoretical framework used to describe the problem and study mag-
netic phases that might be relevant. This process hinges on the computation of the energy spec-
trum/DOS of the system. As we lose translational symmetry due to disorder in the impurity distribution and in the spin orientations, numerical approaches become more and more appealing. From these, the most
straightforward one is the exact diagonalization (ED) of the Hamiltonian in Eq. (2.18). Although simple,
this method quickly becomes highly inefficient as we deal with big systems. On the other hand, when we
consider the highly sparse nature of these matrices, it becomes clear that there should be other meth-
ods more suited to the task. In this chapter we briefly discuss some features of the ED and introduce
a recursive method to calculate the Green Function of a system, from which the DOS can be efficiently
obtained.
3.1 Exact Diagonalization
The exact diagonalization (ED) method is the most straightforward method of solving our problem. It
amounts to simply take the Hamiltonian matrix and compute its eigenvalues/eigenvectors, giving direct
access to the energy spectrum of the system. Although simple in theory, the development of fast and
efficient numerical algorithms to calculate the eigenvalues of a general matrix of dimensions N ×N is a
complicated subject. Most of the algorithms nowadays take as the first step the reduction of the original
matrix to a simpler form, usually tridiagonal, since these matrices have very useful iterative properties.
So a significant part of the computation time is spent "processing" the initial matrix. The number of operations needed to complete this endeavour scales with N² (N ln N for fast special cases), which is why this method quickly becomes impossibly slow: problems that require N ∼ 10⁸–10⁹ are completely ruled out. Besides computation time, another shortcoming of the ED is the fact that it requires the storage of the whole matrix, so it is not unlikely to run out of memory before running out of time.
There are algorithms that make use of the sparse nature of matrices to reduce the memory requirements
when calculating the eigenvalues. Our case falls into this category, so the main issue is the computation
time (which can be quite acceptable when only a portion of the spectrum is of interest). The use of
any symmetries that the target matrix might display is incredibly important, as well as using the most
appropriate algorithm for each case. In our work we used the Linear Algebra Package LAPACK to
compute the eigenvalues of the Hamiltonian. It first reduces the matrix to a tridiagonal form and then
computes its eigenvalues, making use of the hermiticity of the matrix.
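As an illustration of this route, a dense tight-binding Hamiltonian for a small clean honeycomb lattice with periodic boundary conditions can be built and diagonalized in a few lines (a Python/NumPy sketch with our own site-indexing convention; `eigvalsh` calls LAPACK's Hermitian eigensolver under the hood):

```python
import numpy as np

def honeycomb_hamiltonian(d, t=1.0):
    """Dense tight-binding Hamiltonian of a d x d honeycomb lattice
    (N = 2 d^2 sites) with periodic boundary conditions.  Sites are
    indexed as 2*(i*d + j) + s, with sublattice s = 0 (A) or 1 (B)."""
    N = 2 * d * d
    H = np.zeros((N, N))
    idx = lambda i, j, s: 2 * ((i % d) * d + (j % d)) + s
    for i in range(d):
        for j in range(d):
            a = idx(i, j, 0)
            # each A site couples to its three B neighbours
            for b in (idx(i, j, 1), idx(i - 1, j, 1), idx(i, j - 1, 1)):
                H[a, b] = -t
                H[b, a] = -t
    return H

spectrum = np.linalg.eigvalsh(honeycomb_hamiltonian(6))
```

The resulting spectrum is particle-hole symmetric and coincides with the analytic set ±t|f_k| evaluated on the discrete k grid, which is a useful correctness check before adding impurities.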
3.2 The Recursive Method for the Green Function
Recursive methods are advantageous because they are easy to implement and light on the computational resources they generally require, compared to others such as the already mentioned ED. The price paid is not having access to the energy spectrum. In this section we will explain
how we can calculate the Green Function and use it to obtain the DOS of the system.
The Green Function (GF) is defined by:
G(E) = (E − H)^{−1}. (3.1)
The method we use, introduced by Haydock [34], allows us to calculate the local GF (the diagonal
elements of (3.1)), which we can then use to compute the local density of states (LDOS), ρr(E), making
use of the relation:
ρ_r(E) = −(1/π) lim_{η→0} Im{G_r(E + iη)}. (3.2)
The LDOS is defined as:
ρ_r(E) = Σ_i |⟨r|i⟩|² δ(E − E_i), (3.3)
and measures the contribution of a single state |r⟩ to the whole energy spectrum of the system. In systems with no disorder all states corresponding to the same atom in the unit cell contribute equally; therefore, to get the DOS we need only sum the LDOS for each atom in a unit cell.
We look at a finite lattice with d unit cells in each direction and two atoms per unit cell, meaning
N = 2d2 sites, and Nimp impurities. The impurity concentration is then x = Nimp/N . The system is
closed using periodic boundary conditions. Our Hamiltonian is then:
H = −t Σ_{R,δ,σ} (a†_{R,σ} b_{R+δ,σ} + b†_{R+δ,σ} a_{R,σ}) − J Σ_{i=1}^{N_imp} S_i · s(R_i). (3.4)
The recursive method for the GF requires the Hamiltonian to be in a tridiagonal form:
H = [ [a_0, b_1, 0, …, 0],
      [b_1, a_1, b_2, …, 0],
      [0, b_2, a_2, …, 0],
      [⋮, ⋮, ⋮, ⋱, ⋮],
      [0, 0, 0, …, a_n] ]. (3.5)
Writing the Hamiltonian in such a way will allow us to express the GF in a continued fraction form.
We now have to find the basis {u_0, u_1, ..., u_n} that allows us to do this. For that we use the Lanczos algorithm
[35]. In the tridiagonal basis the Hamiltonian can be written as:
H|un〉 = an|un〉+ bn|un−1〉+ bn+1|un+1〉. (3.6)
Looking at this expression the way to determine an, bn and un is clear. We must first choose an initial
normalized state u0, giving us immediately:
a0 = 〈u0|H|u0〉. (3.7)
The next state u1 is determined by applying H to u0, removing the contribution from a0, and normal-
izing the result. The norm is precisely b1:
b1|u1〉 = H|u0〉 − a0|u0〉. (3.8)
From this point on the iteration ensues. Having the nth state, we calculate the diagonal element a_n as the expectation value of the Hamiltonian in that state, ⟨u_n|H|u_n⟩, and the next state |u_{n+1}⟩ is simply:

b_{n+1}|u_{n+1}⟩ = H|u_n⟩ − a_n|u_n⟩ − b_n|u_{n−1}⟩. (3.9)
The normalization condition for un+1 fixes |bn+1|2. We choose the bn to be positive real numbers.
The choice of u_0 is not inconsequential, as it is precisely the state |r⟩ in equation (3.3). Most of the time we will be interested in the total DOS of the system, ρ(E) = (1/N) Σ_r ρ_r(E), but going through all this process N times to calculate the contribution from every state would mean casting away the advantage of this method. Instead we can use a stochastic initial state with contributions from every state |i⟩:
|u_0⟩ = |φ⟩ = Σ_i φ_i |i⟩, (3.10)

where the φ_i are a set of random independent variables that obey ⟨φ_i φ_j⟩ = δ_ij. Equation (3.10) implies that we only need to average over the stochastic state to get the total DOS: ρ(E) = ⟨ρ_φ(E)⟩. In practice, for sufficiently large systems, we only need to do one realization since it is representative of the average outcome, i.e. they are self-averaging systems.
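The recursion of Eqs. (3.6)–(3.9) can be sketched compactly (a Python/NumPy illustration for real symmetric H, with our own function name; no reorthogonalization is performed, which is acceptable here since only the asymptotic values of the coefficients enter the terminator):

```python
import numpy as np

def lanczos_coefficients(H, u0, n_iter):
    """Lanczos recursion: starting from the (normalized) state u0,
    return the tridiagonal coefficients a_n and b_n of Eq. (3.5)."""
    a, b = [], []
    u = np.asarray(u0, dtype=float)
    u = u / np.linalg.norm(u)
    u_prev = np.zeros_like(u)
    b_prev = 0.0
    for _ in range(n_iter):
        w = H @ u
        a_n = float(u @ w)                  # a_n = <u_n|H|u_n>
        w = w - a_n * u - b_prev * u_prev   # residual of Eq. (3.9)
        b_n = float(np.linalg.norm(w))      # normalization fixes b_{n+1}
        a.append(a_n)
        b.append(b_n)
        if b_n < 1e-12:                     # exact invariant subspace found
            break
        u_prev, u, b_prev = u, w / b_n, b_n
    return np.array(a), np.array(b)
```

A stochastic initial state is obtained by filling u0 with independent random entries of zero mean, e.g. random signs, before normalization. Running the recursion to the full dimension of a small matrix reproduces its exact spectrum, which is a convenient unit test.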
The Green Function as a Continued Fraction
Having found the tridiagonal basis for H the GF becomes:
G(E) = [ [E − a_0, −b_1, 0, …, 0],
         [−b_1, E − a_1, −b_2, …, 0],
         [0, −b_2, E − a_2, …, 0],
         [⋮, ⋮, ⋮, ⋱, ⋮],
         [0, 0, 0, …, E − a_n] ]^{−1}. (3.11)
Our object of interest is its first diagonal element G_0(E), which is given by:

G_0(E) = (C(E))_{11} / det(E − H), (3.12)

where C is the cofactor matrix of E − H. We can calculate det(E − H) using the recursion properties of tridiagonal matrices. Introducing the first minors of i-th order, M_i, which are the determinants of the submatrices created by removing the first i rows and columns of a matrix, equation (3.12) is rewritten as:

G_0(E) = M_1(E)/M_0(E) = M_1(E) / [(E − a_0) M_1(E) − b_1² M_2(E)]. (3.13)
Dividing by M_1 we immediately see the recursive behaviour that gives us the continued fraction:

G_0(E) = M_1(E)/M_0(E) = 1 / (E − a_0 − b_1² M_2(E)/M_1(E))
       = 1 / (E − a_0 − b_1² / (E − a_1 − b_2² / (⋯ − b²_{n−1} / (E − a_n)))). (3.14)
We must not forget that our main goal is to ultimately be able to simulate infinite systems, for which
the recursions would extend indefinitely. Eventually we would need to truncate the fraction at some
order. One could think that we should go until the maximum order possible within the actual finite model
used, however, to avoid any kind of edge effects, it is best to stop earlier. We use n = D/2.
To calculate the continued fraction we make use of the fact that the coefficients a_n and b_n quickly converge to asymptotic values a_∞ and b_∞, which means that the continued fraction in (3.14) will start repeating itself, allowing us to use a terminator, t(E):
t(E) = b_∞² / (E − a_∞ − t(E)). (3.15)
Solving for t(E) and keeping only the solution that ensures t(E) → 0 as E → ∞ we get the square
root terminator:
t(E) = (E − a_∞)/2 − √[ ((E − a_∞)/2)² − b_∞² ]. (3.16)
We can even determine the band edges of the spectrum by looking at the square root in (3.16) and
determining when its argument is negative, so that we get the imaginary part needed in (3.2). Doing
that we get the energy band: [a∞ − 2b∞, a∞ + 2b∞]. So physically, a∞ is the band center and 4b∞ is the
bandwidth. We can see, then, that this terminator is tailored to computing symmetric DOS with no gaps.
In some cases the coefficients an and bn converge not to one single value but they alternate between
two different ones: aA, aB , bA and bB . This is indicative of a two energy band system with a gap. In this
case the expression for the terminator is:
t(E) = [(E − a_A)(E − a_B) + b_A² − b_B²] / [2(E − a_A)]
       − √{ [ ((E − a_A)(E − a_B) + b_A² − b_B²) / (2(E − a_A)) ]² − b_A² (E − a_B)/(E − a_A) }. (3.17)
This two bands terminator is suited for the computation of DOS with a symmetric band gap. For a
generic case these expressions are approximations.
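Putting the pieces together, the continued fraction (3.14) closed with the square-root terminator (3.16) can be evaluated by backward recursion (a Python/NumPy sketch with our own function name; for constant coefficients a_n = 0, b_n = 1 it reproduces the semicircular edge DOS of a semi-infinite chain, which makes a convenient check):

```python
import numpy as np

def dos_from_coefficients(E, a, b, eta=1e-3):
    """Evaluate G0(E + i*eta) by backward recursion of the continued
    fraction (3.14), closing the deepest level with the square-root
    terminator (3.16) built from the last coefficients, and return
    the DOS via Eq. (3.2)."""
    z = np.asarray(E, dtype=complex) + 1j * eta
    a_inf, b_inf = a[-1], b[-1]
    half = (z - a_inf) / 2.0
    root = np.sqrt(half**2 - b_inf**2)
    # pick the square-root branch with Im(root) >= 0, so that
    # Im t(E) <= 0 inside the band and the DOS comes out non-negative
    root = np.where(root.imag >= 0, root, -root)
    t = half - root
    g = 1.0 / (z - a[-1] - t)                    # deepest level
    for k in range(len(a) - 2, -1, -1):
        g = 1.0 / (z - a[k] - b[k] ** 2 * g)     # b[k] plays b_{k+1}
    return -g.imag / np.pi
```

The small imaginary part η plays the role of the limit in Eq. (3.2) and sets the energy resolution of the resulting DOS.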
3.2.1 Examples
To illustrate some characteristics and behaviours of the coefficients an and bn and the DOS that they
yield we will show a few examples. These results were obtained for a d = 1000 lattice and the coefficients
were calculated up to the 500th order. We start with the case of clean single layer graphene. The exact
DOS is depicted in Figure 2.2. The coefficients an and bn obtained using the recursive method are in
Figures 3.1 a) and b), respectively. There are two different sets: red points obtained using a single local state as the initial state, |u_0⟩ = |i⟩, and blue points where the initial state is stochastic, with |u_0⟩ given
by equation (3.10). Let us focus on the red points first. Clean graphene is a gapless system, and that is
reflected in a single value for a∞. Actually, in this case, we have for every n, an = 0 (Figure 3.1(a)). The
graphene spectrum range is, in units of the hopping coefficient, [−3; 3], so we expect b∞ = 1.5. Indeed
we see in Figure 3.1(b) the bn approaching 1.5. The convergence is slow, however, with the coefficients
oscillating around that value. For that reason we used expression (3.17) for the terminator. The resulting
DOS fails then to accurately replicate the behaviour near the Dirac point. We see in Figure 3.3 that there
is a gap created by the use of two different values of b∞. If we use instead the square root terminator
from expression (3.16) the DOS does not vanish at E = 0. There are also oscillations caused by using
the terminator when the bn have not yet fully converged to a single value. These problems are typical
for numerical methods, when we try to use them to replicate sharp features present in exact results. It is
usually necessary to use bigger lattices and go up to higher orders. The blue points in Figures 3.1 and
3.2 show that using a stochastic initial state introduces a sort of disorder that creates a dispersion in the
coefficients, which result in oscillations in the DOS.
(a) Coefficients a_n. (b) Coefficients b_n.
Figure 3.1: Comparison of the coefficients a_n, on the left, and b_n, on the right, for clean graphene. The red dots are the result of using a single local state as the initial state and the blue dots are obtained using a stochastic state. The effect of the stochastic state is to introduce dispersion in the coefficients.
Figure 3.2: Density of states per site for single layer clean graphene obtained from the recursive method using, in blue, a stochastic initial state and, in red, a single local state. Notice how the stochastic nature of the initial state creates oscillations in the DOS, a result of the dispersion of the coefficients a_n and b_n in that case.
Figure 3.3: Density of states per site for single layer clean graphene near the Dirac point obtained using the recursive method with, in black, the square root terminator and, in red, the two bands terminator. The square root terminator yields a DOS that does not vanish at E = 0 and displays oscillations due to the fact that the coefficients b_n have not yet fully converged to a single b_∞. The two bands terminator generates a gap in the spectrum.
Now we will see what happens when disorder is already a characteristic of the system: the antiferromagnetic phase at T = 0 with an impurity concentration of 70%. We used the stochastic initial state in this case. There are then two sources of disorder: the initial state and the positions of the adatoms. In
Figure 3.4 are represented the coefficients an and bn. Compared to the clean case, the convergence
is much less monotonous. The consequences are the oscillations visible in the DOS in Figure 3.5. To
eliminate the oscillations we can take averages over disorder realizations of the system.
(a) Coefficients a_n. (b) Coefficients b_n.
Figure 3.4: Coefficients a_n, on the left, and b_n, on the right, for the antiferromagnetic phase at T = 0 and x = 0.7. There are two convergence values for both coefficients, an indication of a gapped energy spectrum. The disorder present, originating from the stochastic state and the adatom positions, creates bumps and makes the convergence less smooth.
Figure 3.5: Density of states per site for the antiferromagnetic phase at T = 0 and x = 0.7 obtained from the recursive method. The oscillations in the DOS are the result of the disorder in the system. They can be eliminated using averages over disorder realizations.
Chapter 4
Results
We focused our attention on two different possible magnetic phases:
• Antiferromagnetism: the adatoms are placed randomly in the graphene lattice, with each sublattice having an opposite preferred orientation for the impurity spins;

• One-Lattice Ferromagnetism: the adatoms are placed randomly over only one of the sublattices.
With exact diagonalization we were able to work with d = 32, and up to d = 1000 with the recursive method for the GF. However, since disorder is a major factor, statistical error must be taken into account. In fact, we found that the recursive method introduces a bigger uncertainty than exact diagonalization, which in some cases prevents a proper analysis of the free energy. Due to this we used exact diagonalization to calculate the free energy, and the recursive method when our only interest was to obtain the DOS. The free energy is calculated taking an average over 1000 disorder realizations. The DOS that follow are averages over 100 disorder realizations and are all normalized to unity.
4.1 One-Lattice Ferromagnetism
Figure 4.1 shows the critical temperature as a function of the impurity concentration x.
For the calculations in the isotropic case we used JS = t [36]. Looking at the Tc(x) curve we see that this mean field approach predicts no critical concentration in this case, and a linear behaviour for small x. For concentrations below 10% the critical temperature is of the order of 150 K. Since Mean Field Theory tends to overestimate, this value should in fact end up being even lower. The value of the coupling JS obviously affects the critical temperature and is dependent on the impurity species used, with higher values resulting in higher Tc.
As argued in Section 2.3.2, we expect the real system not to be completely isotropic. We can easily
change our model to encompass anisotropy, by distinguishing between the plane directions and the
perpendicular direction: Jz 6= Jx = Jy = J‖, and JzS = t. We have found that even a small amount
of anisotropy (5%) substantially alters the Tc, increasing when Jz > J‖, and decreasing in the opposite
Figure 4.1: Critical temperature for one-lattice ferromagnetism. The full line represents the isotropic case and shows a linear behaviour. The dashed and dotted lines are for two different cases of anisotropy, Jz < J‖ and Jz > J‖, respectively. In these cases the critical temperature is greatly affected, not only in its value but also by becoming nonlinear. For Jz < J‖ there is a critical concentration.
case, as well as introducing nonlinearity. The decrease in the case Jz < J‖ leads to a critical concentration xc; this anisotropic case thus hosts a quantum phase transition.
4.1.1 Ferromagnetic Phase
In the ferromagnetic phase the two spin projections are not degenerate. When the impurities have a preferred direction, on average one spin projection gains energy from the interactions while the other loses energy. At T = 0 the system displays only a spin-resolved gap, which decreases as x is lowered, as shown in Figure 4.2. For full single-sublattice coverage this gap has width JS and is centered at ±JS/2, depending on the spin projection.
[Figure 4.2 panels: (a) T = 0, x = 0.5; (b) T = 0, x = 0.15; spin-resolved DOS (up/down) versus E/t.]
Figure 4.2: DOS at T = 0 for two different concentrations. For x = 0.5 we get the expected result, with gaps of value JS for each spin projection. At lower adatom concentrations the states become more uniformly distributed as the gaps decrease.
In order to predict how the spin-resolved gap decreases with the impurity concentration, we can argue that, as the concentration decreases, the effect of the coupling constant gets diluted over every site (in just one sublattice), so we recover the result for a translationally invariant system, Eq. (2.32), with a modified coupling: JS → 2xJS. In Figure 4.3 we see that the results follow that tendency, although the agreement is not total. For high impurity concentrations the gap is lower than predicted by this simple model. Both methods agree for concentrations below 20%.
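The dilution argument can be made explicit with a virtual-crystal sketch. This is our own notation, under the assumption that each adatom contributes an on-site exchange ∓JS on its site, which reproduces the full-coverage gap JS centered at ±JS/2 quoted above:

```latex
% Average the random exchange over sublattice A: an A site is occupied
% with probability 2x (x counts adatoms per lattice site), so the mean
% on-site energy for spin \sigma = \uparrow,\downarrow is
V_\sigma = \mp\, 2x\, JS .
% The effective Bloch Hamiltonian per spin and its bands are then
H_\sigma(\mathbf{k}) =
\begin{pmatrix}
V_\sigma & \phi(\mathbf{k}) \\
\phi^*(\mathbf{k}) & 0
\end{pmatrix},
\qquad
E_\pm(\mathbf{k}) = \frac{V_\sigma}{2}
\pm \sqrt{\Big(\frac{V_\sigma}{2}\Big)^{2} + |\phi(\mathbf{k})|^{2}} .
% At the Dirac points \phi(\mathbf{K}) = 0, so the two bands are separated
% by |V_\sigma| = 2xJS, centered at V_\sigma/2 = \mp xJS: the spin-resolved
% gap \Delta = 2xJS.
```

By construction this averaging cannot capture the deviations seen at high x, where the discreteness of the adatom positions matters.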
[Plot: Δ/t versus x, with the line Δ = 2xJS.]
Figure 4.3: Spin-resolved gap at T = 0 for different concentrations. The line represents the value expected using a model where every site is occupied but with a weakened coupling.
Since there is never an overlap between the gaps in the two spin-projected DOS, the total DOS displays a pseudogap, a region of energies where there is a depletion of states. Looking at Figure 4.4 we see that, for T > 0, the gaps of each spin projection are no longer present, since Lifshitz tails effectively close them [37]. Both in this case and at T = 0, there is always a region of energies where the electron spin polarization is not balanced. This is particularly true around the Dirac energy, which means that any kind of electron doping will be polarized. In fact, since the states that form the Lifshitz tails are localized, we expect charge carriers to be 100% spin polarized in the pseudogap region. It is also worth noting the asymmetry observed in each spin-projected DOS.
[Figure 4.4 panels: (a) T = 0.72Tc, x = 0.5; (b) T = 0.82Tc, x = 0.15; spin-resolved DOS (up/down) versus E/t.]
Figure 4.4: DOS at finite temperature below Tc for different impurity concentrations. Temperature effects destroy the gaps in the spin-projected DOS, creating Lifshitz tails. For high values of x the spectrum for each spin is highly asymmetric.
In these cases there is a nonzero magnetization of the electronic system per lattice site, me. This quantity is obtained using the relation:

me = µB ∑E f(E) [ρ↑(E) − ρ↓(E)],

where f is the Fermi–Dirac distribution and ρσ the spin-resolved DOS per site.
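Numerically, this magnetization amounts to weighting the difference between the spin-resolved DOS by the Fermi function. A sketch in units µB = kB = 1; the function name and the assumed standard form me = µB ∫ f(E) [ρ↑(E) − ρ↓(E)] dE are ours:

```python
import numpy as np

def electron_magnetization(E, dos_up, dos_down, mu, T):
    """m_e = mu_B * integral of f(E) [rho_up(E) - rho_down(E)] dE,
    evaluated on a uniform energy grid (units: mu_B = k_B = 1)."""
    f = 1.0 / (np.exp((E - mu) / T) + 1.0)   # Fermi-Dirac occupation
    return np.trapz(f * (dos_up - dos_down), E)
```

With degenerate spin-resolved DOS (as in the paramagnetic phase below) this quantity vanishes identically.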
4.1.2 One Lattice Paramagnetic Phase
In the paramagnetic regime, since the impurity spins are randomly oriented, there is no overall energy gain or loss for either spin projection, hence the degeneracy observed in Figure 4.6. Here, much like in the clean graphene case, the DOS vanishes linearly at E = 0, so we can define an electron velocity near the Dirac points and compare it with the clean case. In this low-energy limit, the DOS for the clean case is ρ(E) = |E|/(ℏv)², v being the electronic velocity. We get a velocity of 0.66v for x = 0.5 and 0.85v for x = 0.15. For x = 0.05 we get a velocity of 0.94v, so we can infer a relation for the electronic velocity in this regime of low concentrations: vpar ≈ (1 − x)v. Electrons near the Dirac points thus move more slowly when we include adatoms with randomly oriented spins. Keep in mind, however, that we are not considering the scattering of electrons off the adatoms, which should further slow the electrons on the lattice.
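These velocity estimates follow from comparing low-energy DOS slopes: since ρ(E) ∝ |E|/v² near the Dirac point, a linear fit of the DOS versus |E| gives the velocity ratio directly. A sketch; the function name and cutoff are our own choices:

```python
import numpy as np

def velocity_ratio(E, dos_doped, dos_clean, ecut=0.3):
    """Ratio v_doped / v_clean from the low-energy slope of the DOS.

    Near the Dirac point rho(E) ~ |E| / (hbar*v)^2, so the slope of the
    DOS versus |E| scales as 1/v^2 and the velocity ratio is
    sqrt(slope_clean / slope_doped)."""
    mask = np.abs(E) < ecut                        # keep only the linear region
    slope_doped = np.polyfit(np.abs(E[mask]), dos_doped[mask], 1)[0]
    slope_clean = np.polyfit(np.abs(E[mask]), dos_clean[mask], 1)[0]
    return np.sqrt(slope_clean / slope_doped)
```

The cutoff must stay well inside the linear region of the DOS, otherwise the Van Hove structure biases the fitted slope.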
[Figure 4.6 panels: (a) T > Tc, x = 0.5; (b) T > Tc, x = 0.15; DOS versus E/t.]
Figure 4.6: DOS in the paramagnetic regime with two different adatom concentrations. In both cases the DOS vanishes linearly at E = 0, with a slope that increases with the adatom concentration. So, as expected, the associated electronic velocity decreases when we include impurities in the system.
4.2 Antiferromagnetism
Figure 4.7 shows the critical temperature as a function of the impurity concentration x.
For the calculations we have, once again, used JS = t. This case displays features similar to one-lattice ferromagnetism, also predicting no critical concentration. The values of the critical temperature, however, are one order of magnitude greater. Note that Quantum Monte Carlo results for spin-only (RKKY-like) models lead to a much higher critical temperature for the antiferromagnetic case [23] when the oscillatory component is not taken into account, so mean field seems to be a satisfactory approximation in this case.
Figure 4.7: Critical temperature for antiferromagnetism.
4.2.1 Full Coverage Antiferromagnetism
In the antiferromagnetic case the two spin projections are degenerate. This happens because, as impurities on different sublattices tend to align in opposite directions, electrons with one spin projection gain energy on, say, sublattice A and lose energy on sublattice B. The same happens for electrons with the other spin projection (with the roles of sublattices A and B reversed), so the two spin projections are equivalent.
At T = 0 we get the expected gap centered at the Dirac point, with value 2JS, as shown in Figure 4.8 (a). If we increase the temperature, but keep x = 1, we can isolate the effect of spin-orientation disorder. The gap decreases significantly, with Lifshitz tails smoothing its edges, and the states become much more uniformly distributed along the whole energy spectrum, erasing the singularities.
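The value 2JS follows from the staggered (mass) term that full coverage generates, a standard result for a gapped Dirac spectrum. In our notation, assuming each adatom shifts its site by ±JS with opposite signs on the two sublattices for a given spin projection:

```latex
H(\mathbf{k}) =
\begin{pmatrix}
+JS & \phi(\mathbf{k}) \\
\phi^*(\mathbf{k}) & -JS
\end{pmatrix},
\qquad
E_\pm(\mathbf{k}) = \pm\sqrt{(JS)^2 + |\phi(\mathbf{k})|^2} .
% At the Dirac points \phi(\mathbf{K}) = 0, so the spectrum is gapped
% between -JS and +JS: a gap of 2JS centered at E = 0, identical for
% both spin projections.
```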
Figure 4.8: DOS for the full-coverage antiferromagnetic phase at zero temperature (no disorder) and at 0.73Tc. Notice how singularities and band edges get smoothed out by thermal disorder.
4.2.2 Partial Coverage Antiferromagnetism
Now we keep T = 0 and study the effect of adatom-position disorder. Figure 4.9 shows the DOS for x = 0.8 and x = 0.1. We immediately see that the energy gap decreases, although it is present at any concentration. For high impurity concentrations some structures develop right next to the gap. These merge with the Van Hove singularities at x ≈ 0.5. Increasing the temperature also destroys these features.
Figure 4.9: DOS for the partial-coverage antiferromagnetic phase at zero temperature for x = 0.8 and x = 0.1. The energy gap decreases as we lower the value of x. At x = 0.8 we can see the peak-like structures near the gap edges.
To understand the origin of these states, let us go back to the full-coverage case at T = 0. The entire lattice is covered with impurities whose spins are oriented either up or down, depending on the sublattice they are located at. It is this full coverage that is responsible for the gap between −JS and JS. If we remove a few impurities from the lattice, the gap should suffer only a slight perturbation. Additionally, these missing impurities create local states with midgap energies that start off as delta-like peaks in the DOS, as we can see in Figure 4.10. Lowering the concentration broadens these states and they eventually get absorbed into the bands as the gap decreases.
Figure 4.10: DOS for the antiferromagnetic phase at T = 0 and x = 0.99. There are midgap peaks that correspond to the states created by the absence of impurities on the lattice.
We can make an analysis similar to the one in Section 4.1, spreading the effect of the impurities over all sites, which gives for the gap ∆ = 2xJS, using Eq. (2.5). In Figure 4.11 we can see how this model matches the results. As soon as we leave the full-coverage case there is a steep decrease of the gap relative to the 2xJS line. This is due to the midgap states just discussed, which are created in that energy region. As the concentration decreases the two approaches start to yield similar results, especially for concentrations under 10%.
Figure 4.11: Energy gap at T = 0 for different concentrations. The line represents the value expected using a model where every site is occupied but with a weakened coupling.
4.2.3 Paramagnetic Phase
Above the critical temperature the impurity spins are all randomly oriented. The fact that we now have impurities on both sublattices leads to a finite DOS at E = 0, as shown in Figures 4.12 and 4.13, in stark contrast with the one-lattice case. As we lower x the DOS at that point decreases. Below x ≈ 0.10 we no longer have the numerical resolution required to conclude whether the DOS is still finite or not.
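The resolution limit here is statistical as much as spectral. One hedged way to decide whether the DOS at E = 0 is finite is to compare the window-counted estimate with its error bar across disorder realizations; the estimator and names below are ours, not from the thesis:

```python
import numpy as np

def dos_at_zero(realizations, halfwidth=0.055):
    """Estimate DOS(E = 0) and its standard error over disorder realizations.

    `realizations` is a list of eigenvalue arrays, one per disorder
    configuration; states are counted in a window [-halfwidth, halfwidth],
    whose size sets the energy resolution of the estimate."""
    vals = []
    for energies in realizations:
        frac = np.mean(np.abs(energies) < halfwidth)  # fraction of states in window
        vals.append(frac / (2.0 * halfwidth))         # -> DOS per unit energy
    vals = np.asarray(vals)
    return vals.mean(), vals.std(ddof=1) / np.sqrt(len(vals))
```

When the mean falls below its standard error, the data cannot distinguish a small finite DOS from zero, which is the situation encountered below x ≈ 0.10.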
[Figure 4.12 panels: (a) T > Tc, x = 0.75; (b) T > Tc, x = 0.1; DOS versus E/t.]
Figure 4.12: DOS for the paramagnetic phase with impurities on both sublattices. There is always a finite DOS at E = 0.
Figure 4.13: Close-up of the DOS for the paramagnetic phase (T > Tc) with an impurity concentration of 10% distributed over both sublattices, showing the finite value of the DOS at E = 0.
The transition from a finite DOS at E = 0 in the paramagnetic phase to an energy gap in the antiferromagnetic one seems to be abrupt, since at T = 0.95Tc there is already a gap (see Figure 4.14). Most likely, then, the gap forms as soon as there is a preferred direction for the impurity spins, regardless of the value of the DOS at the Dirac point in the paramagnetic phase.
Figure 4.14: DOS for the antiferromagnetic phase at T = 0.95Tc and x = 0.75. Even with a temperatureclose to the critical temperature the system still displays an energy gap.
4.3 Sulfur Decorated Graphene
In this section we will try to bring our model closer to experimental values and see if it can be used
to understand some of the results obtained by Hwang et al. in their work concerning sulfur decorated
graphene [1]. They report a sulfur concentration of x ≈ 0.1 and perform ARPES and magnetotransport
measurements. Their main findings for graphene with S can be summarized as follows:
• Depletion of states at EF which depends on temperature;
• Magnetoresistance compatible with magnetic hysteresis.
We should note that S atoms are not expected to be magnetic. However, in this case there is a finite charge transfer from graphene to the S atoms, measured experimentally, which, according to DFT calculations [1], is responsible for the formation of a magnetic moment of 0.63µB per S atom. Moreover, according to the same DFT calculations, a possible position for the S atoms is between the top graphene layer and the buffer layer (a carbon layer with the same structure as graphene but without the π-bands, due to strong hybridization with the SiC substrate). When the buffer layer and graphene are Bernal stacked, the two sublattices of the top graphene layer are no longer equivalent: one sublattice has buffer-layer C atoms below, while the other sits above the buffer-layer hollow positions where the S atoms reside.
Under the setup just presented, our model for one-sublattice ferromagnetism may be seen as an adequate starting point. This model explains qualitatively the two main experimental observations of Ref. [1], itemized above. Regarding the magnetoresistance, and assuming that a ferromagnetic state develops as predicted by the present theory, the resistance should be maximized when the applied magnetic field reaches the value of the coercive field. In this situation the misalignment between the magnetizations of different magnetic domains is maximal, so electron scattering by impurity spins is also maximal. This explains the two peaks observed in Fig. 4D of Ref. [1] at the two opposite values of the coercive field.
We now turn to the depletion of states seen at the Fermi level. This result was interpreted in Ref. [1]
as a signature of the opening of a gap at the Fermi level. This interpretation is hard to justify because
of the lack of a nesting vector in the system, assuming the S atoms are randomly distributed, which is
the relevant situation experimentally. In the present theory, the system does not open a true gap, but a
pseudogap at the Dirac point, which could also explain the depletion of states near the Fermi level that is
observed experimentally. In Figure 4.15 we show the evolution of the pseudogap with the temperature.
[Plot: DOS versus E/t for |E/t| ≤ 0.15; curves for clean graphene (µ = 0.116) and for T = 0, 17, 28 and 35 K (µ = 0.099).]
Figure 4.15: Close-up of the DOS around the pseudogaps for several different temperatures. The chemical potentials were determined with an electronic density of 0.75 × 10¹³ cm⁻².
What is happening is the following: the presence of the impurities creates a gap in the spectrum for each spin direction, one at positive energies and another at negative energies, so there is a point where one of the spin DOS gets highly suppressed, creating this depletion of states relative to the clean graphene layer. The point where the depletion starts (half the value of the pseudogap) gets closer and closer to zero as we increase the temperature. This is the effect of disorder destroying the spin-resolved gaps. As we get closer to the Dirac