Saturday, February 26, 2011

Of gaps and pseudogaps

ZapperZ's recent post about new work on the pseudogap in high temperature superconductors has made me think about how to try to explain something like this to scientifically literate nonspecialists. Here's an attempt, starting from almost a high school chemistry angle. Chemists (and spectroscopists) like energy level diagrams. You know - like this one - where a horizontal line at a certain height indicates the existence of a particular (electronic) energy level for a system at some energy. The higher up the line, the higher the energy. In extended solid state systems, there are usually many, many levels. That means that an energy level diagram would have zillions of horizontal lines. These tend to group into bands, regions of energy with many energy levels, separated by gaps, regions of energy with no levels.

Let's take the simplest situation first, where the energies of those levels don't depend on how many electrons we actually have. This is equivalent to turning off the electron-electron interaction. The arrangement of atoms gives us some distribution of levels, and we just start filling it up (from the bottom up, if we care about the lowest energy states of the system; remember, electrons can be spin-up or spin-down, meaning that each (spatial state) level can in principle hold two electrons). There's some highest occupied level, and some lowest unoccupied level. We care about whether the highest occupied level is right up against an energy gap, because that drastically affects many things we can measure. If our filled-up system is gapped, that means that the energetically cheapest (electronic) excitation of that system is the gap energy. Having gaps also restricts what processes can happen, since any quantum mechanical process has to take the system from some initial state to some final state. If there's no final state available that satisfies energy conservation, for example, the process can't happen. This means we can map out the gaps in the system by various spectroscopy experiments (e.g., photoemission, tunneling).
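That filling logic can be sketched in a few lines of code. This is purely my own toy illustration (the "bands" and electron counts below are invented), but it shows how the same set of levels can end up gapped or not depending on where the filling stops:

```python
# Toy sketch (my illustration, not from the post): fill a made-up set of
# energy levels from the bottom, two electrons per spatial level, and see
# whether the highest occupied level sits right below a gap.

def fill_levels(levels, n_electrons):
    """Return (highest_occupied, lowest_unoccupied) energies.

    levels: energies of spatial levels (each holds 2 electrons).
    """
    occupied = []
    remaining = n_electrons
    for e in sorted(levels):
        if remaining <= 0:
            return occupied[-1], e
        occupied.append(e)
        remaining -= 2
    return occupied[-1], None  # every level is filled

# Two "bands" (0..4 and 10..14, arbitrary energy units) separated by a gap.
levels = [0, 1, 2, 3, 4] + [10, 11, 12, 13, 14]

# 10 electrons exactly fill the lower band: cheapest excitation = gap energy.
homo, lumo = fill_levels(levels, 10)
print(homo, lumo, lumo - homo)  # 4 10 6  -> gapped

# 6 electrons leave the lower band partly filled: no gap at the top of the filling.
homo, lumo = fill_levels(levels, 6)
print(homo, lumo, lumo - homo)  # 2 3 1  -> metallic-like
```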

So, what happens in systems where the electron-electron interaction does matter a lot? In that case, you should think of the energy levels as rearranging and redistributing themselves depending on how many electrons are in the system. This all has to happen self-consistently. One particularly famous example of what can happen is the Mott insulating state. (Strictly speaking, I'm going to describe a version of this related to the Hubbard model.) Suppose there are N real-space sites, and N electrons to place in them. In the noninteracting case, the highest occupied level would not be near a gap - it would be in the middle of a band. Because the electrons can shuffle around in space without any particular cost for doubly occupying a site, the system would be a metal. However, suppose it costs an energy U to park two electrons on any site. The lowest energy state of the whole system would then have each of the N sites occupied by one electron, with an energy gap of U separating that ground state from the first excited state. So, in the presence of strong interactions, at exactly "half-filling", you can end up with a gap. Even without this lattice site picture, in the presence of disorder, it's possible to see signs of the formation of a gap near the highest occupied level (for experts, in the weak disorder limit, this is the Altshuler-Aronov reduction in the density of states; in the strong disorder limit, it's the Efros-Shklovskii Coulomb gap).
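A brute-force toy version of the half-filling argument (my own sketch; the real Hubbard model also has a hopping term, which I'm ignoring here, keeping only the U cost of double occupancy):

```python
# Toy sketch (my invention, not from the post): enumerate ways to distribute
# electrons over N sites, charging U for each doubly occupied site, and list
# the distinct interaction energies that result.
from itertools import product

def interaction_energy(occupations, U):
    """occupations: tuple of per-site electron counts (0, 1, or 2)."""
    return U * sum(1 for n in occupations if n == 2)

def spectrum(N, n_electrons, U):
    """Distinct interaction energies over all charge configurations."""
    energies = {interaction_energy(occ, U)
                for occ in product(range(3), repeat=N)
                if sum(occ) == n_electrons}
    return sorted(energies)

# Half filling: 4 electrons on 4 sites. The ground state (one electron per
# site) costs 0; the cheapest charge excitation (one empty site plus one
# doubly occupied site) costs U -- the Mott gap.
print(spectrum(4, 4, U=1.0))  # [0.0, 1.0, 2.0]
```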

Another kind of gap exists in the superconducting state. There is an energy gap between the superconducting ground state and the low lying excitations. In the high temperature superconductors, that gap is a bit weird, since there actually are low-lying excitations that correspond to electrons with very specific amounts of momentum ("nodal quasiparticles").
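For concreteness, the cuprate gap is conventionally described as d-wave, which is why those nodal quasiparticles exist at all. A minimal sketch of that standard textbook form (nothing specific to this post; delta0 is an arbitrary unit):

```python
# Standard d-wave gap cartoon: the gap magnitude varies around the Fermi
# surface and vanishes along the "nodal" directions (phi = 45 degrees),
# where low-lying quasiparticle excitations survive.
import math

def d_wave_gap(phi, delta0=1.0):
    """Gap magnitude vs. angle phi around the Fermi surface."""
    return abs(delta0 * math.cos(2 * phi))

print(round(d_wave_gap(0.0), 3))          # antinodal direction: full gap, 1.0
print(round(d_wave_gap(math.pi / 4), 3))  # nodal direction: gap vanishes, 0.0
```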

A pseudogap is more subtle. There isn't a "hard" gap, with zero states in it. Instead, the number of states near the highest occupied level is depressed relative to noninteracting expectations. That reduction and how it varies as a function of energy can tell you a lot about the underlying physics. One complicated aspect of high temperature superconductors is the existence of such a pseudogap well above the superconducting transition temperature. In conventional superconductors (e.g., lead), this doesn't exist. So, the question that has been lingering for 25 years now is this: is the pseudogap the sign of incipient superconductivity (i.e., electrons are already pairing up, but they lack the special coherence required for actual superconductivity), or is it a sign of something else, perhaps something competing with superconductivity? That's still a huge question out there, complicated by the fact that doping the high-Tc materials to be superconductors adds disorder to the problem.
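To make the hard gap vs. pseudogap distinction concrete, here's a cartoon in code (the functional forms are entirely my own choosing, not a model of any real material): a hard gap has strictly zero states over a range of energies, while a pseudogap only depresses them, never all the way to zero:

```python
# Cartoon densities of states near the highest occupied level (E = 0),
# in arbitrary units. Illustrative shapes only.
import math

def dos_metal(E):
    return 1.0  # featureless noninteracting baseline

def dos_hard_gap(E, delta=1.0):
    return 1.0 if abs(E) > delta else 0.0  # zero states inside the gap

def dos_pseudogap(E, delta=1.0):
    # states are suppressed near E = 0 but remain nonzero everywhere
    return 1.0 - 0.7 * math.exp(-(E / delta) ** 2)

for E in (0.0, 0.5, 2.0):
    print(E, dos_hard_gap(E), round(dos_pseudogap(E), 3))
```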

Monday, February 21, 2011

This is why micro/nanofab with new material systems is hard.

Whenever I read a super-enthusiastic news story about how devices based on new material XYZ are the greatest thing ever and are going to be an eventual replacement for silicon-based electronics, I immediately think that the latter clause is likely not true. People have gotten very spoiled by silicon (and to a lesser degree, III-V compound semiconductors like GaAs), and no wonder: it's at the heart of modern technology, and it seems like we are always coaxing new tricks out of it. Of course, that's because there have been millions of person-years worth of research on Si. Any new material system (be it graphene, metal oxide heterostructures, or whatever) starts out behind the eight ball by comparison. This paper on the arxiv this evening is an example of why this business is hard. It's about Bi2Se3, one of the materials classified as "topological insulators". These materials are meant to be bulk insulators (well, at low enough temperature; this one is actually a fairly small band gap semiconductor), with special "topologically protected" surface states. One problem is that the material very often ends up doped via defects, making the bulk relatively conductive. Another problem, as studied in this paper, is that exposure to air, even for a very brief time, dopes the material further, and creates a surface oxide layer that seems to hurt the surface states. This sort of problem crops up with many materials. It's truly impressive that we've learned how to deal with these issues in Si (where oxygen is not a dopant, but does lead to a surface oxide layer very quickly). This kind of work is very important and absolutely needs to be done well....

Tuesday, February 15, 2011

You could, but would you want to?

Texas governor Rick Perry has proposed (as a deliberately provocative target) that the state's (public) universities should be set up so that a student can get a bachelor's degree for $10,000 total (including the cost of books).  Hey, I'm all for moon shot-type challenges, but there is something to be said for thinking hard about what you're suggesting.  This plan (which would set costs per student cheaper than nearly all community colleges, by the way) is not well thought-out at all, which is completely unsurprising.  To do this, the handwave argument is that professors should maximize online content for distance learning, and papers could be graded by graduate students or (apparently very cheaply hired) instructors.  Even then, it's not clear that you could pull this off.  Let me put it this way:  I can argue that the world would benefit greatly from a solar electric car that costs $1,000, but that doesn't mean that one you'd want to own can actually be produced in an economically sustainable way at that price.  This is classic Perry, though. 

Sunday, February 13, 2011

Battle hymn of the Tiger Professor

Like Amy Chua, I'm choosing to be deliberately provocative in what I write below, though unlike her I don't have a book to sell. I recently heard a talk where a well-reputed science educator (not naming names) argued that those of us teaching undergraduates need to adapt to the learning habits of "millennials". That is, these are a group of people who have literally grown up with google (a thought that makes me feel very old, since I went to grad school w/ Sergey Brin) - they are used to having knowledge (in the form of facts) at their fingertips in a fraction of a second. They are used to nearly continuous social networking, instantaneous communication, and constant multitasking (or, as a more stodgy person might put it, complete distraction, attention deficit behavior, and a chronic inability to concentrate). This academic argued that we need to make science education mimic real research if we want to produce researchers and get students jazzed about science. Moreover, this academic argued that making students listen to lectures and do problem sets was (a) ineffective, since that's not how they were geared to learn, and (b) somewhere between useless and abusive, being slavishly ruled by a culture of "covering material" without actually educating. Somehow we should be more in tune with how Millennials learn, and appeal to that, rather than being stodgy fogies who force dull, repetitious "exercises at the end of the chapter" work.

While appealing to students' learning modalities has its place, I contend that this concept simply will not work well in some introductory, foundational classes in the sciences, math, and engineering. Physical science (chemistry, physics) and math are inherently hierarchical. You simply cannot learn more advanced material without mastery of the underpinnings. Moreover, in the case of physics (with which I am most familiar), we're not just teaching facts (which can indeed be looked up easily on the internet); we're supposedly teaching analytical skills - how to think like a physicist; how to take a physical situation and translate it into math that enables us to solve for what we care about in terms of what we know. Getting good at this simply requires practice. To take the Amy Chua analogy, hard work is necessary and playdates are not. There literally is no substitute for doing problems and getting used to thinking this way. While open-ended reasoning exercises can be fun and useful (and could be a great addition to the standard curriculum, or perhaps a way to run a lab class to be more like real research), at some point students actually do need to become proficient in basic problem-solving skills. I really don't like the underlying assumption that this educator was making: that the twitter/facebook/short-attention-span approach is unavoidable and possibly superior to focused hard work. Hey, I'm part of the distractible culture as much as anyone in the 21st century, but you'll have to work hard to convince me that it's the right way to teach foundational knowledge in physics, math, and chemistry.

Wednesday, February 09, 2011

Science and the nation

(The US, that is.) More people need to read this.

Sunday, February 06, 2011

Triboelectricity and enduring mysteries of physics

This past week I hosted Seth Putterman for a physics colloquium here at Rice, and one of the things he talked about is some of his group's work related to triboelectricity, or the generation of charge separation by friction/rubbing.  When you think about it, it's quite amazing that we have no first-principles explanation of a phenomenon we're all shown literally as children (rub a balloon on your hair and it builds up enough "static" charge that it will stick to a plaster wall, unless you live in a very humid place like Houston).  The amount of charge that may be moved is on the order of 10^12 electrons per square cm, and the resulting potential differences can measure in the tens of kilovolts (!), leading to remarkable observations like the generation of x-rays from peeling tape, or UV and x-ray emission from a mercury meniscus moving along a glass surface.  In fact, there's still some disagreement about whether the charge moving in some triboelectric experiments is electrons or ions!  Wild stuff.
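A quick back-of-the-envelope check of those numbers (my own arithmetic; the 0.1 mm separation is an assumed illustrative value, not something from the talk): a surface charge of ~10^12 electrons per square cm really does land you in the tens-of-kilovolts range:

```python
# Back-of-the-envelope: triboelectric surface charge -> potential difference.
e = 1.602e-19        # elementary charge, C
eps0 = 8.854e-12     # vacuum permittivity, F/m

sigma = 1e12 * e / 1e-4   # C/m^2  (10^12 electrons per square cm)
E_field = sigma / eps0    # V/m, parallel-plate-style estimate of the field
d = 1e-4                  # assumed 0.1 mm separation
V = E_field * d           # ~1.8e4 V, i.e., tens of kilovolts

print(f"sigma   = {sigma:.2e} C/m^2")
print(f"V over 0.1 mm ~ {V / 1e3:.0f} kV")
```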