Monday, June 30, 2014

What are universal conductance fluctuations?

Another realization I had at the Gordon Conference:  there are plenty of younger people in condensed matter physics who have never heard of some mesoscopic physics topics.   Presumably those topics are now in that awkward purgatory of being so established that they're "boring" from the research standpoint, yet they are beyond what is taught in standard solid state physics classes (i.e., they're not in Ashcroft and Mermin or Kittel).  Here is my attempt to talk at a reasonably popular level about one of them, so-called "Universal Conductance Fluctuations" (UCF).

In physics parlance, sometimes it can be very useful to think about electrons in solids as semiclassical, a kind of middle ground between picturing them as little classical specks whizzing around and visualizing them as fuzzy, entirely wavelike quantum states.  In the semiclassical picture, you can think of the electrons as following particular trajectories, while still keeping in mind their wavelike aspect by saying that the particles rack up phase as they propagate along.  In a typical metal like gold or copper, the effective wavelength of the electrons is the Fermi wavelength, \(\lambda_{\mathrm{F}} \sim 0.1\) nm.  That means that an electron propagating 0.1 nm changes its quantum phase by about \(2\pi\).  In a relatively "clean" metal, electrons propagate over long distances, many Fermi wavelengths, before scattering.  At low temperatures, that scattering is mostly from disorder (grain boundaries, vacancies, impurities).
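To make that last statement quantitative: the phase accumulated along a trajectory of length \(L\) is set by the Fermi wavevector \(k_{\mathrm{F}} = 2\pi/\lambda_{\mathrm{F}}\),
\[ \phi \approx k_{\mathrm{F}} L = 2\pi \frac{L}{\lambda_{\mathrm{F}}}, \]
so every Fermi wavelength of propagation adds another \(2\pi\) of phase.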

The point of keeping track of the quantum phase \(\phi\) is that this is how we find probabilities for quantum processes.  In quantum mechanics, if there are two paths to do something, with (complex) amplitudes \(A_{1}\) and \(A_{2}\), the probability of that something is \(|A_{1} + A_{2}|^{2}\), which is different from just adding the probabilities of each path, \(|A_{1}|^{2}\) and \(|A_{2}|^{2}\).  For an electron propagating, for each trajectory we can figure out an amplitude that includes the phase.  We add up the (complex) amplitudes for all the possible trajectories, and then take the (magnitude) square of the sum.  The cross terms are what give quantum interference effects, such as the wavy diffraction pattern in the famous two-slit experiment.  This is how Feynman describes interference in his great little book, QED.
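If it helps to see the arithmetic, here is a minimal sketch of that bookkeeping in Python; the magnitudes and phases are made-up numbers chosen purely for illustration:

```python
import numpy as np

# Two made-up path amplitudes with arbitrary magnitudes and phases.
A1 = 0.6 * np.exp(1j * 0.3)
A2 = 0.8 * np.exp(1j * 2.1)

p_classical = abs(A1)**2 + abs(A2)**2   # adding probabilities (no interference)
p_quantum = abs(A1 + A2)**2             # adding amplitudes, then squaring

# The difference is the interference "cross term", 2 Re(A1* A2),
# which oscillates as the relative phase between the paths is varied.
cross = 2 * np.real(np.conj(A1) * A2)
print(p_classical, p_quantum, p_classical + cross)  # last two numbers agree
```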

Electronic conduction in a disordered metal then becomes a quantum interference experiment.  An electron can bounce off various impurities or defects in different sequences, with each trajectory having some phase.  The exact phases are set by the details of the disorder, so while they differ from sample to sample, they are the same within a given sample as long as the disorder doesn't change.  The conduction of the electrons is then something like a speckle pattern.  The typical scale of that speckle is a change in the conductance \(G\) of something like \(\delta G \sim e^{2}/h\).  Note that inelastic processes can change the electronic wavelength (by altering the electron energy and hence the magnitude of its momentum) and also randomize the phase - these "dephasing" effects mean that on length scales large compared to some coherence length \(L_{\phi}\), it doesn't make sense to worry about quantum interference.
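To see why summing many fixed random phases gives order-one "speckle", here is a cartoon in Python - emphatically a toy (the real UCF result requires a proper treatment of diffusive trajectories), with every trajectory given unit weight and a random phase:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_paths = 1000, 500   # "samples" = independent disorder realizations

# For each sample, add up many unit-amplitude trajectories with random phases,
# then square the total amplitude to get a transmission ~ conductance.
phases = rng.uniform(0, 2 * np.pi, size=(n_samples, n_paths))
t = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_paths)
G = np.abs(t)**2   # toy conductance in units of e^2/h

print(G.mean(), G.std())   # the standard deviation is of order 1, i.e. ~ e^2/h
```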

Now, anything that alters the relative phases of the different trajectories will lead to fluctuations in the conductance on that scale (within a coherent region).  A magnetic field can do this, because the amount of phase racked up by propagating electrons depends not just on their wavelength (basically their momentum), but also on the vector potential, a funny quantity discussed further here.  So, ramping a magnetic field through a (weakly disordered) metal (at low temperatures) can generate sample-specific, random-looking but reproducible, fluctuations in the conductance on the order of \(e^{2}/h\).  These are the UCF. 
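Continuing the toy model above (again, a cartoon, not a real calculation): give each trajectory a random signed enclosed area, so a field \(B\) adds an Aharonov-Bohm-like phase proportional to \(B\) times that area.  With the disorder frozen (a fixed random seed standing in for a fixed sample), the conductance versus field is random-looking but exactly reproducible - a "magnetofingerprint":

```python
import numpy as np

rng = np.random.default_rng(1)   # fixed seed = frozen disorder: same trace every run
n_paths = 300
phi0 = rng.uniform(0, 2 * np.pi, n_paths)   # zero-field phases set by the disorder
area = rng.normal(0.0, 1.0, n_paths)        # signed enclosed areas (arbitrary units)

def conductance(B):
    # Each trajectory picks up an extra field-dependent phase ~ B * enclosed area.
    t = np.exp(1j * (phi0 + B * area)).sum() / np.sqrt(n_paths)
    return abs(t)**2   # toy conductance in units of e^2/h

fields = np.linspace(0.0, 10.0, 200)
trace = [conductance(B) for B in fields]
print(min(trace), max(trace))   # fluctuations of order e^2/h, reproducible in B
```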

By looking at the UCF (their variation with magnetic field, temperature, gate voltage in a semiconductor, etc.), one can infer \(L_{\phi}\), for example.  These kinds of experiments were all the rage in ordinary metals and semiconductors in the late 1980s and early 1990s.  They enjoyed a resurgence in the late '90s during a controversy about coherence and the fate of quasiparticles as \(T \rightarrow 0\), and are still used as a tool to examine coherence in new systems as they come along (graphene, atomically thin semiconductors, 2d electron gases in oxide heterostructures, etc.). 

Thursday, June 26, 2014

Gordon Conference thoughts

Because of travel constraints I'm missing the last day of the meeting, but here are some thoughts, non-science first:
  • These meetings remain a great format - not too big, a good mix of topics, real opportunities for students and postdocs to interact w/ lots of people, chances for older researchers to play soccer and ultimate frisbee, etc.  As travel costs rise and internet connectivity improves, there are going to be sensible reasons to have fewer in-person meetings of otherwise distant participants, but there remains no substitute for a good conversation face-to-face over a coffee or a beer.
  • College dorm rooms, while better than when I was a student, are still not high on ambiance.  Generic fitted and top sheets for bedding appear to be made from dryer lint.
  • Food options have become progressively healthier and tastier in general.
  • Mount Holyoke is a lovely campus, with very loud and happy frogs.
  • A session about cuprate superconductors correlated with the literal gathering of storm clouds in an otherwise sunny week.
  • About 30% of the audience got the reference (after about a 5 second delay) when, on a slide about magnetic interactions (\(J_{zz} S^{z}_{i} S^{z}_{j}\)), there was an unlabeled picture of Jay-Z.  
A couple of science thoughts (carefully brief to avoid violating the GRC policy about discussing conference talks and posters):
  • Cuprate superconductors remain amazingly complicated, even after years of improving sample quality and experimental techniques. 
  • Looking at driven systems is becoming very exciting.  Basically under some circumstances you can use light to flip on or off topological changes in band structure, for example.
  • It remains very challenging to figure out how to think about systems with low energy excitations that don't look like long-lived quasiparticles. 

Sunday, June 22, 2014

Gordon Conference

I am going to be at the Gordon Research Conference on correlated electrons for the next few days. Should be fun, but blogging about such meetings is generally frowned upon (the organizers don't want to discourage frank discussions or the showing of brand new, untried results), and these meetings have explicit confidentiality rules.  I'll write more later in the week on other topics.

Sunday, June 15, 2014

FeSe on SrTiO3: report of 100 K superconductivity

I'd heard rumors about this for a while.  I presume that the posting of this on the arxiv means that some form of this paper is in submission out there to a suitably glossy, high impact journal that requires reference citations in its abstracts.  Background:  Bulk FeSe superconducts below around 8 K at ambient pressure (see here).  Under pressure, that transition can be squeezed up beyond 35 K (see here).  The mechanism for superconductivity in this material is up for debate, as far as I know (please feel free to add a reference or two in the comments). 

These investigators have a very fancy ultrahigh vacuum system, in which they are able to grow single-layer FeSe on top of SrTiO3 (with the substrate doped with niobium in this case).  This material is not stable in air, and apparently doesn't do terribly well even when coated with some protective layer.  However, these folks have a multi-probe scanning tunneling microscope system in their chamber, along with a cold stage, so that they can perform electrical measurements in situ without ever exposing their single layer to air.  They find that the electrical resistance measured in their four-point-probe configuration drops to zero below around 100 K (as high as 109 K, depending on the sample).  One subtle point that clearly worried them:  SrTiO3 is known to have a structural phase transition (the onset of ferroelasticity - see here) at around 105 K, so they wanted to be sure that what they saw wasn't somehow an artifact of that substrate effect.  (Makes me wonder what happens to superconductivity in the FeSe depending on the ferroelastic domain orientation underneath it.)  For the lay audience:  liquid nitrogen boils at ambient pressure at 77 K.  This would be the first iron-based superconductor to cross that threshold, a domain previously limited to the copper oxides.   Remember, if the bulk transition is at 8 K and the single-layer case exceeds 100 K, it doesn't seem crazy to hope for some related system with an additional factor of three or four that takes us beyond room temperature.

Important caveats:  Right now, they have resistance measurements and tunneling spectroscopy measurements.  Because of the need for in situ measurement they don't have Meissner data.  It's also important to realize that the restrictions here (not air stable; only happens in single layer material when ultraclean) are not small.  At the same time, this is potentially very exciting, and hopefully it holds up well and can be the foundation for more exciting materials.

Saturday, June 14, 2014

750th post - blog demographics

This is the 750th post since this blog's inception.  Fun facts gleaned from google analytics:

1) Unsurprisingly, the US leads in blog hits over that time, with 270,648.  In second place, the UK with 26,698.

2) According to google's tracking, over the last nine years there have been hits from every country in North, Central, and South America, as well as Europe.  In Asia, the only countries with zero hits are Turkmenistan and Papua New Guinea.  In Africa, I'm missing about a dozen, basically the sub-Saharan region plus Somalia. 

3) In the US, the state with the fewest hits is South Dakota (84 visits over nine years), narrowly edging out Wyoming (88) and Alaska (91).   The states with the most hits are Texas, California, New York, and Massachusetts.

4) Most common browser, by a wide margin, is Firefox, followed by Chrome.  I like the idea that someone has read the blog on a PlayStation 3, and someone else on a PlayStation Portable.  Disappointed (and showing my age by that fact) that no one used lynx or emacs.  

5) Most-viewed post of all time was the meme contest.  Most-viewed physics posts were these on plasmons and polarons.

Thank you all for reading!

Friday, June 13, 2014

"Seeing" chemical bonds with sub-molecular resolution

Chemists (and physicists) often draw molecular bonds as little lines connecting atoms, but actually imaging the bonds themselves is very hard.  With the advent of the scanning tunneling microscope, it's become almost commonplace to image the positions of atoms.  STM maps the ability of electrons to enter or leave a conducting surface, and since an atom on the surface carries its own electrons, its presence strongly modulates the STM signal.  This doesn't show anything direct about bonding between atoms, however.

Wilson Ho's group at UC Irvine has published another gem.  The paper is here (unfortunately behind the Science paywall), and the news release is here.  The new STM-based imaging technique, "itProbe", is based on inelastic tunneling, which I've described before.  (One advantage of being an ancient blogger - I can now refer back to my old stuff, with google helping me remember what I wrote.)  The Ho group deliberately attaches a CO molecule to their STM tip.  The CO molecule has a couple of very sharp vibrational (and "hindered translational") modes at low energies that can be seen electrically through inelastic electron tunneling spectroscopy (IETS) - basically sharp features in (the second derivative of) the tunneling current-voltage curve.  In the itProbe technique, the experimenters map out spatially what happens to those modes.  The idea is that, as the CO molecule interacts with the sample close by, the precise energies of those vibrational modes shift - the environment of the CO molecule tweaks the effective spring constant for the CO's motion.  Imaging in this way, they find that maps of the inelastic signal seem to show the bonds between the atoms in an underlying molecule, rather than the atomic positions themselves.  I admit I don't understand the precise mechanism here, but the images are eye-popping.  A similar idea, involving atomic force microscopy with CO attached to an AFM tip, was demonstrated before (here and here, for example).  In those experiments, the investigators looked at how interactions between the CO on the tip and the sample affected the mechanical properties of the tip as a whole.   
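For readers who haven't met IETS: the signature is a small step up in conductance when \(|eV|\) crosses a vibrational energy, which becomes a peak (and an antisymmetric dip) in \(d^{2}I/dV^{2}\).  Here is a minimal numerical sketch of that lineshape in Python - the numbers (a 10 meV mode, tanh broadening) are invented for illustration, not taken from the paper:

```python
import numpy as np

# Toy IETS curve: elastic conductance G0 plus a small extra channel that
# opens when |V| exceeds a made-up vibrational threshold V0.
G0, dG, V0, w = 1.0, 0.05, 0.010, 0.001   # volts; w mimics thermal broadening
V = np.linspace(-0.03, 0.03, 2001)
G = G0 + dG * 0.5 * (1 + np.tanh((np.abs(V) - V0) / w))   # dI/dV

I = np.cumsum(G) * (V[1] - V[0])             # I(V), up to an irrelevant offset
d2IdV2 = np.gradient(np.gradient(I, V), V)   # numerical second derivative

# The IETS features sit at V = +/- V0 (peak on one side, dip on the other).
print(V[np.argmax(d2IdV2)], V[np.argmin(d2IdV2)])
```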

This is an example of a tour de force experiment that can be accomplished by long, sustained effort - the Ho group has been refining their IETS measurements for nearly two decades, and it's really paid off.  Hopefully these kinds of efforts will not become even less common as research funding seems to be focused increasingly on short time horizons and rapid changes in fashion.

Sunday, June 08, 2014

Bad physics as a marker for tracking text recycling

A colleague of mine was depressed to find, in a reasonably high impact journal, a statement that magnetic nanoparticles obey Coulomb's law, and thus can be manipulated by external magnetic fields.  As far as physics goes, this is just wrong.  Coulomb's law is the mathematical relationship that says that the force between two electric charges is proportional to the product of the charges and inversely proportional to the square of the distance between them.  It has nothing to do with magnetic nanoparticles. 
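For reference, here is the actual content of Coulomb's law in SI form:
\[ F = \frac{1}{4\pi\epsilon_{0}} \frac{q_{1} q_{2}}{r^{2}}. \]
It is a statement about electric charges; there are no magnetic moments or field gradients anywhere in it.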

I was curious - where did this weird, incorrect statement come from?  I turned to google to find out.  The earliest result I can find is from this paper by Pankhurst, Connolly, Jones, and Dobson.  The paper seems quite good, and the (strange to me) Coulomb's Law language appears to be some shorthand for a physically sound description of the interactions of magnetic materials with magnetic fields.  The Pankhurst paper includes the following sentence:  "Second, the nanoparticles are magnetic, which means that they obey Coulomb’s law, and can be manipulated by an external magnetic field gradient." This is part of a paragraph that lists three virtues of magnetic nanoparticles for biological applications.  

For fun, try copy/pasting that sentence into google.  Look at how many times that sentence (indeed, that whole introductory paragraph with very minimal changes) shows up nearly verbatim in other publications.  At the risk of saying something actionable, this is plagiarism.   This tends to happen in obscure proceedings, edited book chapters, etc., rather than high impact literature.  The proliferation of shady publication houses and vanity press journals only aggravates this situation.  Very depressing.

Wednesday, June 04, 2014

My views on teaching "nano"

Blatant self-promotion time:  I was grateful for the invitation to write an editorial about teaching "nano" for Nature Nanotechnology.  The full text is available for free at the above link, and comments and feedback are invited below.  (As a blog reader, you get the added bonus of reading the analogy I made that was cut due to space constraints.  When I advise becoming an expert in a traditional discipline first before tackling an interdisciplinary field, I had written:  "To make a food analogy, it would be very difficult to become an expert at Korean/Mexican fusion cuisine if you did not first know Korean and/or Mexican cooking at a high level.") 

Monday, June 02, 2014

What is chemical potential?

I've been meaning to do a post on this for a long time.  Five years ago (!) I wrote a post about the meaning of temperature, where I tried to go from the intuitive understanding given by common experience (temperature has something to do with energy content, and that energy flows from hot things to cold things) to a deeper view (that flow of energy comes from the tendency of the universe to take on macroscopic configurations that correspond to the most common ways of arranging microscopic degrees of freedom - the 2nd law of thermodynamics, basically).  I wasn't very satisfied with how the post turned out, but c'est la vie.

Chemical potential is a similar idea, but with added complications - while touch gives us an intuition for relative temperatures, we have no corresponding sense for chemical potential; and the rigorous definition of chemical potential is more complicated.  (For another take on this, see this article, available in full text via google from a variety of sources.)

Let's reason by analogy with temperature.  Energy tends to flow from a system at high temperature to a system at low temperature; when systems with identical temperatures are brought into contact so that they may exchange energy (e.g., by thermal conduction), there is no net flow of energy.  Now suppose systems are able to exchange particles as well as energy.  If two systems are at the same temperature, then particles will tend to flow from the system of higher chemical potential (one of the several quantities denoted by the symbol \(\mu\)) to that of lower chemical potential.  If two systems have identical chemical potentials for a particular kind of particle, there will be no net flow of particles.  In general, particles tend to flow from regions of high \(\mu/T\) to regions of low \(\mu/T\).  The classic example is a closed bottle of perfume in a room full of (non-perfumed) air.  The perfume molecules have a high \(\mu\) in the bottle relative to the rest of the room.  When the bottle is opened, perfume molecules will tend to diffuse out of the bottle, simultaneously lowering their \(\mu\) in the bottle and increasing their \(\mu\) in the room.  This will continue until the chemical potentials equalize.  From the point of view of entropy, there are clearly very many more arrangements of molecules with them roughly spread throughout the room+bottle than arrangements with the molecules happening to occupy just the bottle.  Hence, the universe tends toward the macroscopic configuration corresponding to the most microscopic configurations.  Bottom line:  equilibrium between two systems that can exchange particles requires equal temperatures and equal chemical potentials. 
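The statistical-mechanics statement hiding under that word picture is worth writing down (a standard result, found in any thermodynamics text): at fixed energy and volume,
\[ \left(\frac{\partial S}{\partial N}\right)_{U,V} = -\frac{\mu}{T}. \]
Moving \(dN\) particles from system 1 to system 2 then changes the total entropy by \(dS = (\mu_{1}/T_{1} - \mu_{2}/T_{2})\,dN\), which is positive exactly when particles go from high \(\mu/T\) to low \(\mu/T\) - the flow direction quoted above.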

Where this also gets tricky is that thermodynamics tells us that \(\mu\) also corresponds to the (free) energy change when one particle is added to or removed from the system at constant temperature and pressure (!).  This identity is not at all obvious from the description above, but it's nevertheless true.  This latter way of thinking about chemical potential means that when particles couple to some "real" potential (gravitational, electrical), it is possible to tune their total \(\mu\).  The connection to the entropic picture is the idea that particles will tend to "fall downhill" (there are usually fewer configurations of the combined system that have some particles "stacked up" in a region of high potential energy with others in a region of low potential energy, than there are when the energy gets spread around among all the particles).
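A concrete, standard example of that "falling downhill" bookkeeping: for a classical ideal gas of density \(n\) in a gravitational field, the total chemical potential at height \(z\) is
\[ \mu(z) = k_{\mathrm{B}} T \ln\left[n(z)\,\lambda_{T}^{3}\right] + mgz, \]
where \(\lambda_{T}\) is the thermal de Broglie wavelength.  Demanding that \(\mu(z)\) be the same everywhere in equilibrium immediately gives the barometric law, \(n(z) = n(0)\exp(-mgz/k_{\mathrm{B}}T)\): the density profile is exactly the one for which the entropic (concentration) piece of \(\mu\) balances the potential-energy piece.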