In this subchapter we will look at the classical treatment of the movement of electrons inside a material in an electrical field.

In the preceding subchapter we obtained the most basic formulation of Ohm's law, linking the specific conductivity to two fundamental material parameters:

    σ = q · n · µ

For a homogeneous and isotropic material (e.g. polycrystalline metals or single crystals of cubic semiconductors), the concentration of carriers n and their mobility µ have the same value everywhere in the material, and the specific conductivity σ is a scalar.

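As a quick numerical reminder of how the two parameters combine, here is a minimal sketch; the values for n and µ are merely illustrative order-of-magnitude numbers for a good metal, not measured data:

    # Specific conductivity from carrier density and mobility (illustrative values)
    q  = 1.602e-19      # elementary charge [C]
    n  = 8.5e28         # carrier density [1/m^3], roughly one electron per atom
    mu = 4.3e-3         # mobility [m^2/(V s)], order of magnitude for a good metal

    sigma = q * n * mu  # specific conductivity [1/(Ohm m)]
    print(f"sigma = {sigma:.1e} 1/(Ohm m)")   # about 6e7 1/(Ohm m)
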
This is boring, however. So let's look at useful complications:

In general terms, we may have more than one kind of carrier (this is the common situation in semiconductors), and n and µ could be functions of the temperature T, the local field strength Eloc resulting from an applied external voltage, the detailed structure of the material (e.g. the defects in the lattice), and so on.

We will see that these complications are the essence of advanced electronic materials (especially semiconductors), but in order to make life easy we will first restrict ourselves to the special class of ohmic materials.

We have seen before that this requires n and µ to be independent of the local field strength. However, we still may have a temperature dependence of σ; even commercial ohmic resistors, after all, do show a more or less pronounced temperature dependence - their resistance increases roughly linearly with T.

In short, we are treating metals, characterized by a constant density of one kind of carrier (= electrons) in the order of 1...3 electrons per atom in the metal.

Basic Equations and the Nature of the "Frictional Force"

We consider the electrons in the metal to be "free", i.e. they can move freely in any direction - the atoms of the lattice thus by definition do not impede their movement.

The (local) electrical field Eloc then exerts a force F = – e · Eloc on any given electron and thus accelerates the electrons in the field direction (more precisely, opposite to the field direction, because the field vector points from + to – whereas the electron moves from – to +).

In the fly swarm analogy, the electrical field would correspond to a steady airflow - some wind - that moves the swarm about with constant drift velocity.

Now, if a single electron with the (constant) mass m and momentum p is subjected to a force F, the equation of motion from basic mechanics is

    F = dp/dt = m · dv/dt

Note that p does not have to be zero when the field is switched on.

If this were all, the velocity of a given electron would acquire an ever-increasing component in field direction and eventually approach infinity. This is obviously not possible, so we have to bring in a mechanism that prevents an unlimited increase of v.

In classical mechanics this is done by introducing a frictional force Ffr that is proportional to the velocity:

    Ffr = – kfr · v

Here, kfr is some friction constant. But this, while mathematically sufficient, is devoid of any physical meaning with regard to the moving electrons.

There is no "friction" on an atomic scale! Think about it! Where should a friction force come from? An electron feels only forces from two kinds of fields - electromagnetic and gravitational (neglecting strange stuff from particle physics).

It thus makes no sense to complement the differential equation above with a friction term - we have to look for a better approach.

All that friction does to big classical bodies is to dissipate ordered kinetic energy of the moving body to the environment. Any ordered movement gets slowed down to zero surplus speed, and the environment gets somewhat hotter instead, i.e. unordered movement has increased.

This is called energy dissipation, and that is what we need: Mechanisms that take kinetic energy away from an electron and "give" it to the crystal at large. The science behind that is called (Statistical) Thermodynamics - we have encountered it before.

The best way to think about this is to assume that the electron, flying along with increasing velocity, will hit something else along its way every now and then; it has a collision with something else, or, as we will say from now on, it will be scattered by something else.

This collision or scattering event will change its momentum, i.e. the magnitude and the direction of v, and thus also its kinetic energy Ekin, which is always given by

    Ekin = m · v²/2 = p · v/2

In other words, we consider collisions with something else, i.e. other particles (including "pseudo" particles), where the total energy and momentum of all the particles is preserved, but the individual particle loses its "memory" with respect to its velocity before the collision, and starts with a new momentum after every collision.

What are the "partners" for collisions of an electron, or, put in standard language, what are the scattering mechanisms? There are several possibilities:

Other electrons. While this may happen, it is not the most important process in most cases. It also does not decrease the total energy contained in the electron movement - the losses of some electrons are the gains of others.

Defects, e.g. foreign atoms, other point defects (vacancies, interstitials) or dislocations. This is a more important scattering mechanism; moreover, it is a mechanism by which the electron can transfer its surplus energy (obtained through acceleration in the electric field) to the atoms of the lattice, which means that the material heats up.

Phonons, i.e. "quantized" lattice vibrations traveling through the crystal. This is the most important scattering mechanism.

The last aspect is a bit strange. While we (hopefully) have no problem imagining a crystal lattice with all atoms vibrating merrily, there is no immediate reason to consider these vibrations as being localized (whatever this means) and particle-like.

You are right – but nevertheless: The lattice vibrations indeed are best described by a bunch of particle-like phonons careening through the crystal.

This follows from a quantum mechanical treatment of lattice vibrations. Then it can be shown that these vibrations, which contain the thermal energy of the crystal, are quantized and show typical properties of (quantum) particles: They have a momentum, and an energy given by hν (h = Planck's constant, ν = frequency of the vibration).

Phonons are a first example of "pseudo" particles; but there is no more "pseudo" to phonons than there is to photons. (Both of them are bosons, by the way.)

We will not go into more details here. All we need to know is that a hot crystal has more phonons and more energetic phonons than a cold crystal, and that treating the interaction of an electron with the lattice vibrations as a collision with a phonon not only gives correct results, it is the only way to get results at all.

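To get a feeling for the energies involved, here is a one-line estimate; the vibration frequency used below is just an assumed typical order of magnitude for lattice vibrations, not a value for a specific material:

    # Rough phonon energy E = h*nu, compared to the thermal energy kT at room temperature
    h  = 6.626e-34     # Planck's constant [J s]
    k  = 1.381e-23     # Boltzmann's constant [J/K]
    nu = 1.0e13        # assumed typical lattice vibration frequency [Hz]

    E_phonon = h * nu
    print(f"E_phonon = {E_phonon / 1.602e-19 * 1000:.0f} meV")   # roughly 40 meV
    print(f"kT(300K) = {k * 300 / 1.602e-19 * 1000:.0f} meV")    # roughly 26 meV

The two numbers are comparable - phonons sit right in the energy range of the thermal energy of the crystal.
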
|
At this point comes a crucial insight: It would be far from the truth to assume that only accelerated electrons scatter; scattering happens all the time to all the electrons moving randomly about, because they all have some thermal energy. Generally, scattering is the mechanism to achieve thermal equilibrium and equidistribution of the energy of the crystal.

If electrons are accelerated in an electrical field and thus gain energy in excess of thermal equilibrium, scattering is the way to transfer this surplus energy to the lattice, which then will heat up. If the crystal is heated up from the outside, scattering is the mechanism that turns heat energy contained in lattice vibrations into kinetic energy of the electrons.

Again: Even without an electrical field, scattering is the mechanism that transfers thermal energy from the lattice to the electrons (and back).

Our free electrons in metals behave very much like a gas in a closed container. They careen around with some average velocity that depends on the energy contained in the electron gas, which is – in classical terms – a direct function of the temperature.

[Figure: velocity components of electrons careening around randomly - no electrical field applied]

From classical thermodynamics we know that the (classical) electron gas in thermal equilibrium with the environment contains the energy Ekin = (1/2)kT per particle and degree of freedom, with k = Boltzmann's constant and T = absolute temperature. If you forgot all about this, check this link, too. The three degrees of freedom are the possible movements in x-, y- and z-direction, so we have (considering just one of them)

    Ekin,x = ½ · m · <vx²> = ½ · kT

For the other directions we have exactly the same relations, of course. For the total energy we obtain

    Ekin = m · <vx²>/2 + m · <vy²>/2 + m · <vz²>/2 = m · <v²>/2 = m · v0²/2 = 3kT/2

with v0 = (<v²>)½. v0 is thus the average thermal velocity of a carrier careening around in a crystal. We can easily calculate it from the formula given above; we have

    v0 = (3kT/m)½

Note that by using classical thermodynamics to derive this result, all processes involved in an ideal gas (here formed by the free electrons) are included. This means that electron–electron scattering is already covered by this expression. Therefore, from now on, electron–electron scattering events do not play any role anymore.

At this point you should stop a moment and think about just how fast those electrons will be careening around at room temperature (300 K) – without plugging numbers in the equation!

Got a feeling for it? Probably not. So look at the exercise question (and the solution) further down!

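If you then want to check your guess with numbers, here is a minimal sketch of that calculation - nothing but the formula from above with standard constants:

    # Classical thermal velocity v0 = (3kT/m)^(1/2) of a free electron at 300 K
    from math import sqrt

    k = 1.381e-23     # Boltzmann's constant [J/K]
    m = 9.109e-31     # electron rest mass [kg]
    T = 300.0         # temperature [K]

    v0 = sqrt(3 * k * T / m)
    print(f"v0 = {v0:.2e} m/s")   # about 1.2e5 m/s
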
Now you should stop another moment and become very aware of the fact that this equation is from purely classical physics. It is absolutely true for classical particles - which electrons are not, actually. Electrons obey the Pauli principle, i.e. they behave about as non-classically as possible. This should make you feel a bit uncomfortable. Maybe the equation from above is not correct for electrons then? Indeed - it isn't. Why this is so we will see later, and also how we can "repair" the situation!

Now let's turn on an electrical field. It will accelerate the electrons between the collisions (which now are collisions with defects and phonons only, since we stick to the classical treatment from above). Their velocity in field direction then increases linearly from whatever value it had right after a collision to some larger value right before the next collision.

In our diagram from above this looks like this:

[Figure: the same velocity plot with an electrical field switched on - between collisions the velocity component in field direction increases at a constant rate]

Here we have an electrical field that accelerates electrons in x-direction (and "brakes" them in –x direction). Between collisions, the electron gains velocity in +x-direction at a constant rate (= identical slope).

The average velocity in +x direction, <v+x>, now has a larger absolute value than that in –x direction, <v–x>.

However, beware of the pitfalls of schematic drawings: For real electrons the difference is very small, as we shall see shortly; the slope in the drawing is very exaggerated.

The drift velocity is contained in the difference <v+x> – <v–x>; it is completely described by the velocity gain between collisions. For obtaining a value, we may neglect the instantaneous velocities right after a scattering event, because they average to zero anyway, and just plot the velocity gain in a simplified picture, always starting from zero after a collision.

[Figure: simplified picture showing only the velocity gain between collisions, starting from zero after each collision]

The picture now looks quite simple; but remember that it contains some not so simple averaging.

At this point it is time to define a very meaningful new average quantity to describe the influence of the scattering processes on the drift velocity:

A certain mean time τ between collisions, which for certain reasons (becoming clear only later) is defined as the mean time for reaching the drift velocity vD in the simplified diagram. We also call τ the mean scattering time or just scattering time for short.

This is most easily illustrated by simplifying the scattering diagram once more: We simply use just one time - the average - for the time that elapses between scattering events, and obtain:

[Figure: idealized scattering diagram with a constant time between collisions; the definition of the scattering time τ is included]

This is the standard diagram illustrating the scattering of electrons in a crystal usually found in textbooks; the definition of the scattering time τ is included.

It is highly idealized (if not to say just wrong) if you compare it to the correct picture above. Of course, the average velocity of both pictures will give the same value, but that's like saying that the average speed va of all real cars driving around in a city is the same as the average speed of ideal model cars, which are going at va all the time.

Note that τ is only half of the average time between collisions.

So, while this diagram is not wrong, it is a highly abstract rendering of the underlying processes obtained after several averaging procedures. From this diagram only, no conclusion whatsoever can be drawn as to the average velocities of the electrons without the electrical field!

New Material Parameters and Classical Conductivity

With the scattering concept, we now have two new (closely related) material parameters:

The mean (scattering) time τ between two collisions as defined before.

The mean free path l between collisions, i.e. the distance travelled by an electron (on average) before it collides with something else and changes its momentum. We have

    l = 2τ · (v0 + vD)

Note that v0 enters the defining equation for l, and that we have to take twice the scattering time τ because it only refers to half the time between collisions!

After we have come to this point, we can now go on: Using τ as a new parameter, we can rewrite Newton's equation from above for an electron (q = –e) as follows:

    m · dv/dt = m · Δv/Δt = m · vD/τ = F = q · E = – e · E

We now only consider what happens to the electron as long as it doesn't hit anything. Then it is possible to equate the differential quotient with the difference quotient, because the velocity change is constant. After a scattering event has taken place, the process is completely interrupted and starts under "virgin" conditions again.

We obtain immediately the relation between the drift velocity vD and the applied field E:

    vD/τ = – E · e/m

    ⇒   vD = – E · e · τ/m

Inserting this equation for vD in the old definition of the current density j = – n · e · vD and invoking the general version of Ohm's law, j = σ · E, yields

    j = (n · e² · τ/m) · E =: σ · E

This gives us the final result

    σ = n · e² · τ/m

This is the classical formula for the conductivity of a classical "electron gas" material, i.e. metals. The conductivity contains the density n of the free electrons and their mean classical scattering time τ as material parameters.

We have a good idea about n, but we do not yet know τclass, the mean classical scattering time for classical electrons. However, since we know the order of magnitude for the conductivity of metals, we may turn the equation around and use it to calculate the order of magnitude of τclass. If you do the exercise further down, you will see that the result is:

    τclass = σ · m/(n · e²) ≈ (10⁻¹⁴ ... 10⁻¹³) s

"Obviously" (as stated in many text books), this is a value that is far too small, and thus the classical approach must be wrong. But is it really too small? How can you tell without knowing a lot more about electrons in metals?

Let's face it: you can't! So let's look at the mean free path l instead. We have

    l = 2τ · (v0 + vD)

    v0 = (3kT/m)½

The last equation gives us a value v0 ≈ 10⁵ m/s at room temperature! Now we need vD, and this we can estimate from the equation given above to be vD = – E · τ · e/m ≈ 1 mm/s, if we use the value for τ dictated by the measured conductivities. It is much smaller than v0 and can be safely neglected in calculating l.

|
We thus can rewrite the equation for the conductivity and obtain |
| |
s = |
n · e2 · l
2 · m · (v0 + vD) |
» |
n · e2 · l
2 · m · v0 |
|
|
|
Knowing σ from experiments, but not l, makes it possible to determine l. The mean free path l between collisions (for vD = 0) for a typical metal thus is

    l = 2 · m · v0 · σ/(n · e²) = 2 · v0 · τ ≈ (1 ... 10) nm

And this is certainly too small!

But before we discuss these results, let's see if they are actually true by doing an exercise:

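For those who want to check the numbers right away, here is a minimal sketch of that exercise; the conductivity and electron density are typical textbook values for copper, and a field of 1 V/m is assumed for the drift velocity:

    # Order-of-magnitude check of the classical picture for a copper-like metal
    from math import sqrt

    e     = 1.602e-19   # elementary charge [C]
    m     = 9.109e-31   # electron mass [kg]
    k     = 1.381e-23   # Boltzmann's constant [J/K]
    T     = 300.0       # temperature [K]
    sigma = 6.0e7       # measured conductivity of Cu [1/(Ohm m)], typical value
    n     = 8.5e28      # free electron density of Cu [1/m^3], one electron per atom
    E     = 1.0         # assumed field strength [V/m]

    tau = sigma * m / (n * e**2)   # classical scattering time
    v0  = sqrt(3 * k * T / m)      # classical thermal velocity
    vD  = e * E * tau / m          # magnitude of the drift velocity
    l   = 2 * tau * (v0 + vD)      # mean free path

    print(f"tau = {tau:.1e} s")        # about 2.5e-14 s
    print(f"v0  = {v0:.1e} m/s")       # about 1.2e5 m/s
    print(f"vD  = {vD:.1e} m/s")       # a few mm/s at 1 V/m
    print(f"l   = {l * 1e9:.1f} nm")   # a few nm
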
Now to the important question: Why is a mean free path in the order of a few atomic distances too small?

Well, think about the scattering mechanisms. The distance between lattice defects is certainly much larger, and a phonon itself is "larger", too.

It does not pay to spend more time on this. Whichever way you look at it, whatever tricky devices you introduce to make the approximations better (and physicists have tried very hard!), you will not be able to solve the problem: the mean free paths never even come close to what they need to be, and the conclusion which we will reach - maybe reluctantly, but unavoidably - must be:

There is no way to describe conductivity (in metals) with classical physics!

Scattering and a New Look at Mobility

Somewhere on the way, we have also indirectly found that the mobility µ as defined before is just another way to look at scattering mechanisms. Let's see why.

All we have to do is to compare the equation σ = n · e² · τ/m for the conductivity from above with the master equation σ = q · n · µ.

This gives us immediately

    µ = e · τ/m

    µ ≈ e · l/(2 · m · v0)

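With the order-of-magnitude scattering time found above, the mobility of a metal like copper follows in one line; the value of τ below is just that earlier estimate, not a measured quantity:

    # Mobility from the classical scattering time, mu = e*tau/m
    e   = 1.602e-19    # elementary charge [C]
    m   = 9.109e-31    # electron mass [kg]
    tau = 2.5e-14      # scattering time [s], order of magnitude estimated above

    mu = e * tau / m   # [m^2/(V s)]
    print(f"mu = {mu * 1e4:.0f} cm^2/(V s)")   # about 44 cm^2/(V s)
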
In other words:

The decisive material property determining the mobility µ is the average time between scattering events or the mean free path between those events.

The mobility µ thus is a basic material property, well-defined even without electrical fields, and just another way to characterize the scattering processes taking place by a single number.

We even can go one stage further with this: If we envision the movement of an electron again, as described above in many words, analogies ("fly swarm"), graphs and equations, we "see" exactly the same thing we envisioned when we looked at a diffusing particle or vacancy when we learned about diffusion and random walk.

"Something" bounced around in a random manner, and everything important about the "something" was captured in its diffusion coefficient D. This diffusion coefficient was either defined via Fick's laws (e.g. Fick's first law jx = – D · dn/dx) or by looking at the atomic mechanisms that got us something like D ≈ a² · r (a = lattice constant, r = jump rate). From the random walk consideration we had, for the "diffusion length", L = (D · t)½ - a relation that also could be used to define D.

You should now have a certain feeling that all this old stuff from diffusion and what we just learned about the random bouncing around of electrons must be somehow connected. After all, we always have the element of something moving around (mostly) at random.

Right you are! Again, it was Einstein (and independently Smoluchowski) who found the proper relation, the Einstein-Smoluchowski relation hinted at a chapter ago:

    D = µ · kT/e

    µ = D · e/(kT)

The mobility µ thus is "almost" the same as the diffusion coefficient D; for a given temperature T they are proportional to each other.

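A quick numerical illustration of the relation, using the order-of-magnitude mobility estimated above for a metal and room temperature:

    # Einstein-Smoluchowski relation: D = mu * k * T / e
    k  = 1.381e-23     # Boltzmann's constant [J/K]
    e  = 1.602e-19     # elementary charge [C]
    T  = 300.0         # temperature [K]
    mu = 4.4e-3        # mobility [m^2/(V s)], order of magnitude from above

    D = mu * k * T / e
    print(f"D = {D:.1e} m^2/s")   # about 1e-4 m^2/s
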
How do we obtain this simple relation? Well - we won't at this point. It's not all that difficult to derive, but it is no accident either that it is named after Einstein - it goes back to his famous work on Brownian motion.

If you are not satisfied with that, check this link for a derivation, or this one for an alternative way. More on the relation between diffusion coefficient and mobility can be found in this (German) link.

If you are a bit exhausted and confused by now - that's OK! This German link might help, where things are summed up once more.

Mobility and Speed of Electronic Devices

In the equations above slumbers an extremely important aspect of semiconductor technology:

In all electronic devices carriers have to travel some distance before a signal can be produced. A MOS transistor, for example, switches currents on or off between its "Source" and "Drain" terminals depending on what voltage is applied to its "Gate". Source and drain are separated by some distance lSD, and the "Drain" only "feels" the "on" state after the time it takes the carriers to run the distance lSD.

How long does that take if the voltage between Source and Drain is USD?

Easy. If we know the mobility µ of the carriers, we also know their (average) velocity vSD in the source-drain region, which by definition is vSD = µ · USD/lSD.

The traveling time tSD between source and drain for obvious reasons roughly defines the maximum frequency fmax the transistor can handle; we have tSD = lSD/vSD, or

    tSD = lSD²/(µ · USD) ≈ 1/fmax

The maximum frequency of a MOS transistor thus is directly proportional to the mobility of the carriers in the material it is made from (always provided there are no other limiting factors). And since we used a rather general argument, we should not be surprised that pretty much the same relation is also true for most electronic devices, not just MOS transistors.

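To see the orders of magnitude, here is a minimal sketch with assumed (but not unreasonable) numbers; the gate length, mobility and voltage below are illustrative values only, not data for a particular transistor:

    # Transit time and rough maximum frequency of a MOS transistor: t_SD = l_SD^2/(mu*U_SD)
    l_SD = 1.0e-6     # assumed source-drain distance [m] (a 1 um technology)
    mu   = 0.14       # electron mobility in Si [m^2/(V s)], bulk value of about 1400 cm^2/(V s)
    U_SD = 1.0        # assumed source-drain voltage [V]

    t_SD  = l_SD**2 / (mu * U_SD)   # transit time [s]
    f_max = 1.0 / t_SD              # rough upper frequency limit [Hz]
    print(f"t_SD  = {t_SD:.1e} s")           # about 7e-12 s
    print(f"f_max = {f_max / 1e9:.0f} GHz")  # on the order of 100 GHz
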
This is a momentous statement: We linked a prime material parameter, the material constant µ, to one of the most important parameters of electronic circuits. We would like µ to be as large as possible, of course, and now we know what to do about it!

Actually, we do not really know what to do, but other people do - and act on it. See the link to find out how it is done.

A simple exercise is in order to see the power of this knowledge:

© H. Föll (MaWi 2 Skript)