Friday, December 30, 2011

Tidbits.

First, I have a guest post on the Houston Chronicle's science blog today.  Thanks for the opportunity, Eric.

Second, here is a great example of science popularization from the BBC.  We should do things like this on US television, instead of having Discovery Channel and TLC show garbage about "alien astronauts" and "ghost hunting".

Third, if you see the latest Sherlock Holmes flick, keep an eye out for subtle details about Prof. Moriarty - there's some fun math/physics stuff hidden in there (pdf) for real devotees of the Holmes canon.

Wednesday, December 28, 2011

Shifting gears

One of the most appealing aspects of a career in academic science and engineering is the freedom to choose your area of research. This freedom is extremely rare in an industrial setting, and becoming more so all the time. Taking myself as an example, I was hired as an experimental condensed matter physicist, presumably because my department felt that this was a fruitful area in which they would like to expand and in which they had teaching needs. During the application and interview process, I had to submit a "research plan" document, meant to give the department a sense of what I planned to do. However, as long as I was able to produce good science and bring in sufficient funding to finance that research, the department really had no say-so at all about what I did - no one read my proposals before they went out the door (unless I wanted proposal-writing advice), no one told me what to do scientifically. You would be very hard-pressed to find an industrial setting with that much freedom.

So, how does a scientist or engineer with this much freedom determine what to do and how to allocate intellectual resources? I can only speak for myself, but it would be interesting to hear from others in the comments. I look for problems where (a) I think there are scientific questions that need to be answered, ideally tied to deeper issues that interest me; (b) my background, skill set, or point of view give me what I perceive to be either a competitive advantage or a unique angle on the problem; and (c) there is some credible path for funding. I suspect this is typical, with people weighting these factors variously. Certainly those who run giant "supergroups" in chemistry and materials science by necessity have more of a "That's where the money is" attitude; however, I don't personally know anyone who works in an area in which they have zero intellectual interest just because it's well funded. Getting resources is hard work, and you can't do it effectively if your heart's not in it.

A related question is, when and how do you shift topics? These days, it's increasingly rare to find a person in academic science who picks a narrow specialty and sits there for decades. Research problems actually get solved. Fields evolve. There are competing factors, though, particularly for experimentalists. Once you become invested in a given area (say, scanned probe microscopy), there's a lot of inertia - new tools are expensive and hard to get. It can also be difficult to get into the mainstream of a new topic from the outside, in terms of grants and papers. Jumping on the latest bandwagon is not necessarily the best path to success. On the other hand, remaining in a small niche isn't healthy. All of these are "first-world problems", of course - for someone in research, it's far better to be wrestling with these challenges than the alternative.

Saturday, December 17, 2011

students and their mental health

There was an interesting article earlier this week in the Wall Street Journal, on mental health concerns in college students. It's no secret that mental illness often has an onset in the late teens and early twenties. It's also not a surprise that there are significant stressors associated with college (or graduate school), including being in a new environment w/ a different (possibly much smaller) social support structure, the pressure to succeed academically, the need to budget time much more self-sufficiently than at previous stages of life, and simple things like lack of sleep. As a result, sometimes as a faculty member you come across students who have real problems.

In undergrads, these issues often manifest as persistent erratic or academically self-destructive behavior (failure to hand in assignments, failure to show up for exams). Different faculty members have various ways to deal with this. One approach is to be hands-off - from the privacy and social boundaries perspective, it's challenging to inquire about these behaviors (is a student just having a tough time in college or in a particular class, or is a student afflicted with a debilitating mental health issue, or is the student somewhere on the continuum in between?). The sink-or-swim attitude doesn't really sit well with me, but it's always a challenge to figure out the best way to handle this stuff.

In grad students, these issues can become even more critical - students are older, expectations of self-sufficiency are much higher, and the interactions between faculty and students are somewhere between teacher/student, boss/employee, and collaborator/collaborator. The most important thing, of course, is to ensure that at the end of the day the student is healthy, regardless of degree progress. If the right answer is that a student should take time off or drop out of a program for treatment or convalescence, then that's what has to happen. Of course, it's never that simple, for the student, for the advisor, for the university.

Anyway, I suggest reading the WSJ article if you have access. It's quite thought-provoking.

Friday, December 16, 2011

Universality and "glassy" physics

One remarkable aspect of Nature is the recurrence of certain mathematically interesting motifs in different contexts.  When we see a certain property or relationship that shows up again and again, we tend to call that "universality", and we look for underlying physical reasons to explain its reappearance in many apparently disparate contexts.  A great review of one such type of physics was posted on the arxiv the other day. 

Physicists commonly talk about highly ordered, idealized systems (like infinite, perfectly periodic crystals), because often such regularity is comparatively simple to describe mathematically.  The energy of such a crystal is nicely minimized by the regular arrangement of atoms.   At the other extreme are very strongly disordered systems.  These disordered systems are often called "glassy" because structural glasses (like the stuff in your display) are an example.  In these systems, disorder dominates completely; the "landscape" of energy as a function of configuration is a big mess, with many local minima - a whole statistical distribution of possible configurations, with a whole distribution of energy "barriers" between them.  Systems like that crop up all the time in different contexts, and yet share some amazingly universal properties.  One of the most dramatic is that when disturbed, these systems take an exceedingly long time to respond completely.  Some parts of the system respond fast, others more slowly, and when you add them all together, you get total responses that look logarithmic in time (not exponential, which would indicate a single timescale for relaxation).  For example, the deformation response of crumpled paper (!) shows a relaxation that is described by constant*log(t) for more than 6 decades in time!  Likewise, the speed of sound or dielectric response in a glass at very low temperatures also shows logarithmic decays.  This review gives a great discussion of this - I highly recommend it (even though the papers they cite from my PhD advisor's lab came after I left :-)  ).
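
To make the "distribution of barriers gives log(t)" point concrete, here's a little numerical sketch of my own (not from the review): if you add up many simple exponential relaxations whose time constants are spread evenly on a logarithmic scale, the total response really does fall off like constant*log(t) across the whole window of timescales.

```python
import numpy as np

# Minimal illustration: many exponential relaxers with time constants spread
# uniformly in log(tau) produce a net response ~ constant - slope*log(t)
# over the window tau_min < t < tau_max.
tau = np.logspace(-3, 3, 200)          # relaxation times, spread over 6 decades
t = np.logspace(-2, 2, 100)            # observation times, well inside that window

# each relaxer contributes exp(-t/tau); weight them equally per decade of tau
response = np.array([np.mean(np.exp(-ti / tau)) for ti in t])

# compare to a logarithmic fit: a roughly constant slope vs. ln(t) means log decay
coeffs = np.polyfit(np.log(t), response, 1)
print("slope per ln(t):", coeffs[0])
```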

Monday, December 12, 2011

Higgs or no

The answer is going to be, to quote the Magic 8-Ball, "Ask again later." Sounds like the folks at CERN are on track to make a more definitive statement about the Higgs boson in about one more Friedman Unit. That won't stop an enormous surge of media attention tomorrow, as CERN tries very hard to have their cake and eat it, too ("We've found [evidence consistent with] the God Particle! At least, it's [evidence not inconsistent with] the God Particle!"). What this exercise will really demonstrate is that many news media figures are statistically illiterate.

I should point out that, with the rumors of a statistically not yet huge bump in the data near 125 GeV, there has suddenly been an uptick in predictions of Higgs bosons with just that mass. How convenient. 

Update - Interesting.  For the best write-up I've seen about this, check out Prof. Matt Strassler.  Seems like the central question is, are the two detectors both seeing something in the same place, or not?  That is, is 123-ish GeV the same as 126-ish GeV?  Tune in next year, same Stat-time, same Stat-channel!  (lame joke for fans of 1960s US TV....)
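
As a toy illustration of the "same place or not" question (with uncertainties I'm inventing purely for the sake of the arithmetic - these are not the real ATLAS/CMS numbers), you can ask how many standard deviations apart two mass estimates really are:

```python
from math import sqrt, erf

# Toy compatibility check, illustrative only: are two peak positions consistent
# within their uncertainties?  The uncertainties below are hypothetical.
m1, sigma1 = 123.0, 1.5   # hypothetical detector-1 peak (GeV) and uncertainty
m2, sigma2 = 126.0, 1.5   # hypothetical detector-2 peak (GeV) and uncertainty

diff = abs(m1 - m2)
sigma_diff = sqrt(sigma1**2 + sigma2**2)
n_sigma = diff / sigma_diff
# two-sided probability of a difference at least this large if both see the same mass
p_value = 1 - erf(n_sigma / sqrt(2))
print(f"difference = {n_sigma:.1f} sigma, p ~ {p_value:.2f}")
```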

Saturday, December 10, 2011

Nano book recommendation

My colleague in Rice's history department, Cyrus Mody, has a new book out called Instrumental Community, about the invention and spread of scanned probe microscopy (and microscopists); it's a very interesting read. If you've ever wondered how and why the scanning tunneling microscope and atomic force microscope took off, and why related ideas like the topografiner (pdf) did not, this is the book for you. It also does a great job of giving a sense of the personalities and work environments at places like IBM Zurich, IBM TJ Watson, IBM Almaden, and Bell Labs.

There are a couple of surprising quotes in there. Stan Williams, these days at HP Labs, says that the environment at Bell Labs was so cut-throat that people would sabotage each others' experiments and steal each others' data. Having been a postdoc there, that surprised me greatly, and doesn't gibe with my impressions or stories I'd heard. Any Bell Labs alumni readers out there care to comment?

The book really drives home what has been lost with the drastic decline of long-term industrial R&D in the US. You can see it all happening in slow motion - the constant struggle to explain why these research efforts are not a waste of shareholder resources, as companies become ever more focused on short term profits and stock prices.

Friday, December 02, 2011

Priorities

My colleagues at Texas A&M University must be so happy to hear that in these troubled economic times, their university is rumored to be offering the current University of Houston football coach a $4M/yr salary to come to College Station. I like college sports as much as the next person, but what does it say about higher education in the US that a public university, dealing with tight budgets, thinks that this is smart?

Wednesday, November 30, 2011

Antennas for light + ionics at the nanoscale

A particularly excellent (and recently revised) review article was posted on the arxiv the other day, about metal nanostructures as antennas for light. This seems to be an extremely complete and at the same time reasonably pedagogical treatment of the subject. While in some sense there are no shocking surprises (the basic physics underlying all of this is, after all, Maxwell's equations with complicated boundary conditions and dielectric functions for the metal), there are some great ideas and motifs: the importance of the optical "near field"; the emergence of plasmons, the collective modes of the electrons, which are relevant at the nanoscale but not in macroscopic antennas for, e.g., radio frequencies; the use of such antennas in real quantum optics applications. Great stuff.

I also feel the need for a little bit of shameless self-promotion. My colleague Massimo Di Ventra (http://physics.ucsd.edu/~diventra/) and I have an article appearing in this month's MRS Bulletin, talking about the importance of ion motion and electrochemistry in nanoscale structures. (Sorry about not having a version on the arxiv at this time. Email me if you'd like a copy.) This article was prompted in part by a growing realization among a number of researchers that the consequences of the motion of ions (often neglected at first glance!) are apparent in a number of nanoscale systems. Working at the nanoscale, it's possible to establish very large electric fields and concentration/chemical potential gradients that can drive diffusion. At the same time, there are large accessible surface areas, and inherently small system dimensions mean that diffusion over physically relevant distances is easier than in macroscale materials. While ionic motion can be an annoyance or an unintended complication, there are likely situations where it can be embraced and engineered for useful applications.
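
To give a feel for the magnitudes (with round numbers of my own choosing, not figures from the article): a modest voltage across a nanometer-scale gap is an enormous electric field, and diffusion times scale as the square of the distance, so moving ions a few nanometers is vastly easier than moving them a millimeter.

```python
# Rough numbers illustrating why ionic effects matter at the nanoscale
# (illustrative magnitudes, not values taken from the MRS Bulletin article).
V = 1.0                           # applied bias, volts
d_nano, d_macro = 1e-9, 1e-3      # electrode separations: 1 nm vs. 1 mm

E_nano = V / d_nano               # ~1e9 V/m - enough to drive ion motion and electrochemistry
E_macro = V / d_macro             # ~1e3 V/m

D = 1e-14                         # assumed ionic diffusion constant in a solid, m^2/s
t_nano = d_nano**2 / D            # diffusion time across 1 nm: ~1e-4 s
t_macro = d_macro**2 / D          # diffusion time across 1 mm: ~1e8 s (years)
print(E_nano, E_macro, t_nano, t_macro)
```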

Saturday, November 26, 2011

Nano"machines" and dissipation

There's an article out (subscription only, unfortunately) that has gotten some attention, discussing whether artificial molecular machines will "deliver on their promise".  The groups that wrote the article have an extensive track record in synthesizing and characterizing molecules that can undergo directed "mechanical" motion (e.g., translation of a rod-like portion through a ring) under external stimuli (e.g., changes in temperature or pH, redox reactions, optical excitation).  There is no question that this is some pretty cool stuff, and the chemistry here (both synthetic organic and physical) is quite sophisticated.

Two points strike me, though.  First, the "promise" mentioned in the title is connected, particularly in the press writeup, with Drexlerian nanoassembler visions.  Synthetic molecules that can move are impressive, but they are far, far away from the idea of actually constructing arbitrary designer materials one atom at a time (a goal that is likely impossible, in my opinion, for reasons stated convincingly here, among others).  They are, however, a possible step on the road to designer, synthetic enzymes, a neat idea.

Second, the writeup particularly mentions how "efficient" the mechanical motions of these molecules are.  That is, there is comparatively little dissipation relative to macroscopic machines.  This is actually not very surprising, if you think about the microscopic picture of what we think of as macroscopic irreversibility.  "Loss" of mechanical energy takes place because energy is transferred from macroscopic degrees of freedom (the motion of a piston) to microscopic degrees of freedom (the near-continuum of vibrational and electronic modes in the metal in the piston and cylinder walls).  When the whole system of interest is microscopic, there just aren't many places for the energy to go.  This is an example of the finite-phase-space aspect that shows up all the time in truly nanoscale systems. 

Thursday, November 17, 2011

Superluminal neutrinos - follow-up

The OPERA collaboration, or at least a large subset of it, has a revised preprint out (and apparently submitted somewhere), with more data on their time-of-flight studies of neutrinos produced at CERN. Tommaso has a nice write-up here. Their previous preprint created quite a stir, since it purported to show evidence of neutrino motion faster than c, the speed of light in vacuum. The general reaction among physicists was, that's really weird, and it's exceedingly likely that something is wrong somewhere in the analysis. One complaint that came up repeatedly was that the pulses used by the group were about 10000 nanoseconds long, and the group was arguing about timing at the 60 ns level. You could readily imagine some issues with their statistics or the functioning of the detector that could be a problem here, since the pulses were so long compared to the effect being reported. To deal with this, the group has now been running for a while with much shorter pulses (a few ns in duration). While they don't have nearly as much data so far (from only a few weeks of running), they do have enough to do some analysis, and so far the results are completely consistent with their earlier report. Funky. Clearly pulse duration systematics or statistics aren't the source of the apparent superluminality, then. So, either neutrinos really are superluminal (still bloody unlikely for a host of reasons), or there is still some weird systematic error in the detector somewhere. (For what it's worth, I'm sure they've looked a million ways at the clock synchronization, etc. by now, so that's not likely to be the problem either.)

Update:  Matt Strassler has an excellent summary of the situation.

So you want to compete w/ fossil fuels (or silicon)

Yesterday I went to an interesting talk here by Eric Toone, deputy director of ARPA-E, which is supposed to be the blue-sky, high-risk/high-reward development arm of the US Department of Energy. He summarized some basic messages about energy globally and in the US, gave quite a number of examples of projects funded by ARPA-E, and had a series of take-home messages. He also gave the most concise (single-graph) explanation for the failure of Solyndra: they bet on a technology based on CIGS solar cells, and then the price of silicon (an essential component of the main competing technology) fell by 80% over a few months. It was made very clear that ARPA-E aims at a particular stage in the tech transfer process, when the basic science is known and a technology is right at the edge of development.

The general energy picture was its usual fairly depressing self. There are plenty of fossil fuels (particularly natural gas and coal), but if you think that CO2 is a concern, then using those blindly is risky. Capital costs make nuclear comparatively uncompetitive (to say nothing of political difficulties following Fukushima). Solar is too expensive to compete w/ fossil fuels. Other renewables are also too expensive and/or not scalable. Biomass is too expensive. Batteries don't come remotely close to competing with, e.g., gasoline in terms of energy density and effective refueling times.

The one thing that really struck me was the similarity of the replacing-fossil-fuels challenge and the replacing-silicon-electronics challenge. Fossil fuels have problems, but they're sooooooo cheap. Likewise, there is a great desire to prolong Moore's law by eventually replacing Si, but Si devices are sooooooo cheap that there's an incredible economic barrier to surmount. When you're competing against a transistor that costs less than a millionth of a cent and has a one-per-billion failure rate over ten years, your non-Si gizmo better be really darn special if you want anyone to take it seriously....

Monday, November 14, 2011

Bad Astronomy day at Rice

Today we hosted Phil Plait for our annual Rorschach Lecture (see here), a series in honor of Bud Rorschach dedicated to public outreach and science policy. He kept us fully entertained with his Death from the Skies! talk, with a particularly amusing litany of (a small subset of) the scientific flaws in "Armageddon". There was a full house in our big lecture hall - there's no question that astro has very broad popular appeal (though it did bring out the "Obama should be impeached immediately because he's not protecting us from possible asteroid impacts!" crowd).

Sunday, November 06, 2011

Teaching - Coleman vs. Feynman

As pointed out by Peter Woit, Steve Hsu recently posted a link to an interview with (the late) Sidney Coleman, generally viewed as one of the premier theoretical physicists of his generation. Ironically, for someone known as an excellent lecturer, Coleman apparently hated teaching, likening it to "washing dishes" or "waxing floors" - two activities he could do well, from which he derived a small amount of "job well done" satisfaction, but which he would never choose to do voluntarily.

It's fun to contrast this with the view of Richard Feynman, as he put it in Surely You're Joking, Mr. Feynman!:
I don't believe I can really do without teaching. The reason is, I have to have something so that when I don't have any ideas and I'm not getting anywhere I can say to myself, "At least I'm living; at least I'm doing something; I am making some contribution" -- it's just psychological.... The questions of the students are often the source of new research. They often ask profound questions that I've thought about at times and then given up on, so to speak, for a while. It wouldn't do me any harm to think about them again and see if I can go any further now. The students may not be able to see the thing I want to answer, or the subtleties I want to think about, but they remind me of a problem by asking questions in the neighborhood of that problem. It's not so easy to remind yourself of these things. So I find that teaching and the students keep life going, and I would never accept any position in which somebody has invented a happy situation for me where I don't have to teach. Never.
I definitely lean toward the Feynman attitude. Teaching - explaining science to others - is fun, important, and helpful to my own work. Perhaps Coleman was simply so powerful in terms of creativity in research that teaching always seemed like an annoying distraction. In these days when there are so many expectations on faculty members beyond teaching, I hope we're not culturally rewarding a drift toward the Coleman position.

Tuesday, November 01, 2011

Science - what is it up to?

Hat tip to Phil Plait, the Bad Astronomer, for linking to this video from The Daily Show.  My apologies to non-US readers who won't be able to watch this.  It's a special report from Aasif Mandvi, complete with remarks from a Republican "strategist" / Fox News talking head, who explains how science is inherently corrupt, because only scientists are really qualified to review the work of scientists.  Seriously, she really makes that argument, and more.

Update:  I've decided to ditch the embedded video.  Here's a link to the video on the Daily Show's site, and here's a link that works internationally.

Wednesday, October 26, 2011

Faculty search process, 2011 version.

As I have done in past years, I'm revising a past post of mine about the faculty search process. My thoughts on this really haven't changed much, but it's useful to throw this out there rather than hope people see it via google.

Here are the steps in the typical faculty search process:

  • The search gets authorized. This is a big step - it determines what the position is, exactly: junior only vs. open to junior or senior; a new faculty line vs. a replacement vs. a bridging position (i.e. we'll hire now, and when X retires in three years, we won't look for a replacement then). The main challenges are twofold: (1) Ideally the department has some strategic plan in place to determine the area that they'd like to fill. Note that not all departments do this - occasionally you'll see a very general ad out there that basically says, "ABC University Dept. of Physics is authorized to search for a tenure-track position in, umm, physics. We want to hire the smartest person that we can, regardless of subject area." The danger with this is that there may actually be divisions within the department about where the position should go, and these divisions can play out in a process where different factions within the department veto each other. This is pretty rare, but not unheard of. (2) The university needs to have the resources in place to make a hire.  In tight financial times, this can become more challenging. I know anecdotally of public universities having to cancel searches in 2008/2009, even after authorization, when budget cuts got too severe. A well-run university will be able to make these judgments with some lead time and not have to backtrack.
  • The search committee gets put together. In my dept., the chair asks people to serve. If the search is in condensed matter, for example, there will be several condensed matter people on the committee, as well as representation from the other major groups in the department, and one knowledgeable person from outside the department (in chemistry or ECE, for example). The chairperson or chairpeople of the committee meet with the committee or at least those in the focus area, and come up with draft text for the ad.  In cross-departmental searches (sometimes there will be a search in an interdisciplinary area like "energy"), a dean would likely put together the committee.
  • The ad gets placed, and canvassing begins - lots of people who might know promising candidates are contacted directly. A special effort is made to make sure that all qualified women and underrepresented minority candidates know about the position and are asked to apply (the APS has mailing lists to help with this, and direct recommendations are always appreciated - this is in the search plan). Generally, the ad really does list what the department is interested in. It's a huge waste of everyone's time to have an ad that draws a large number of inappropriate (i.e. don't fit the dept.'s needs) applicants. The exception to this is the generic ad like the type I mentioned above. Historically, MIT and Berkeley have run the same ad every year, trolling for talent. They seem to do just fine. The other exception is when a university already knows who they want to get for a senior position, and writes an ad so narrow that only one person is really qualified. I've never seen this personally, but I've heard anecdotes.
  • In the meantime, a search plan is formulated and approved by the dean. The plan details how the search will work, what the timeline is, etc. This plan is largely a checklist to make sure that we follow all the right procedures and don't screw anything up. It also brings to the fore the importance of "beating the bushes" - see above. A couple of people on the search committee will be particularly in charge of oversight on affirmative action/equal opportunity issues.
  • The dean usually meets with the committee and we go over the plan, including a refresher for everyone on what is or is not appropriate for discussion in an interview (for an obvious example, you can't ask about someone's religion, or their marital status).
  • Applications come in and are sorted; rec letters are collated.  Each candidate has a folder. Every year when I post this, someone argues that it's ridiculous to make references write letters, and that the committee should do a sort first and ask for letters later.  I understand this perspective, but I largely disagree. Letters can contain an enormous amount of information, and sometimes it is possible to identify outstanding candidates due to input from the letters that might otherwise be missed. (For example, suppose someone's got an incredible piece of postdoctoral work about to come out that hasn't been published yet. It carries more weight for letters to highlight this, since the candidate isn't exactly unbiased about their own forthcoming publications.)  There is a trend toward electronic application review, and that is likely to continue, though it can be complicated if committee members are not very tech-savvy.
  • The committee begins to review the applications. Generally the members of the committee who are from the target discipline do a first pass, to at least weed out the inevitable applications from people who are not qualified according to the ad (i.e. no PhD; senior people wanting a senior position even though the ad is explicitly for a junior slot; people with research interests or expertise in the wrong area). Applications are roughly rated by everyone into a top, middle, and bottom category. Each committee member comes up with their own ratings, so there is naturally some variability from person to person. Some people are "harsh graders". Some value high impact publications more than numbers of papers. Others place more of an emphasis on the research plan, the teaching statement, or the rec letters. Yes, people do value the teaching statement - we wouldn't waste everyone's time with it if we didn't care. Interestingly, often (not always) the people who are the strongest researchers also have very good ideas and actually care about teaching. This shouldn't be that surprising. Creative people can want to express their creativity in the classroom as well as the lab.
  • Once all the folders have been reviewed and rated, a relatively short list (say 20-25 or so out of 120 applications) is formed, and the committee meets to hash that down to, in the end, four or five to invite for interviews. In my experience, this happens by consensus, with the target discipline members having a bit more sway in practice since they know the area and can appreciate subtleties - the feasibility and originality of the proposed research, the calibration of the letter writers (are they first-rate folks? Do they always claim every candidate is the best postdoc they've ever seen?). I'm not kidding about consensus; I can't recall a case where there really was a big, hard argument within the committee. I know I've been lucky in this respect, and that other institutions can be much more feisty. The best, meaning most useful, letters, by the way, are the ones that say things like "This candidate is very much like CCC and DDD were at this stage in their careers." Real comparisons like that are much more helpful than "The candidate is bright, creative, and a good communicator." Regarding research plans, the best ones (for me, anyway) give a good sense of near-term plans, medium-term ideas, and the long-term big picture, all while being relatively brief and written so that a general committee member can understand much of it (why the work is important, what is new) without being an expert in the target field. It's also good to know that, at least at my university, if we come across an applicant that doesn't really fit our needs, but meshes well with an open search in another department, we send over the file. This, like the consensus stuff above, is a benefit of good, nonpathological communication within the department and between departments.
That's pretty much it up to the interview stage. No big secrets. No automated ranking schemes based exclusively on h numbers or citation counts.

Tips for candidates:
  • Don't wrap your self-worth up in this any more than is unavoidable. It's a game of small numbers, and who gets interviewed where can easily be dominated by factors extrinsic to the candidates - what a department's pressing needs are, what the demographics of a subdiscipline are like, etc. Every candidate takes job searches personally to some degree because of our culture and human nature, but don't feel like this is some evaluation of you as a human being.
  • Don't automatically limit your job search because of geography unless you have some overwhelming personal reasons.  I almost didn't apply to Rice because neither my wife nor I were particularly thrilled about Texas, despite the fact that neither of us had ever actually visited the place. Limiting my search that way would've been a really poor decision - I've now been here 12 years, and we've enjoyed ourselves (my occasional Texas-centric blog posts aside).
  • Really read the ads carefully and make sure that you don't leave anything out. If a place asks for a teaching statement, put some real thought into what you say - they want to see that you have actually given this some thought, or they wouldn't have asked for it.
  • Research statements are challenging because you need to appeal to both the specialists on the committee and the people who are way outside your area. My own research statement back in the day was around three pages. If you want to write a lot more, I recommend having a brief (2-3 page) summary at the beginning followed by more details for the specialists. It's good to identify near-term, mid-range, and long-term goals - you need to think about those timescales anyway. Don't get bogged down in specific technique details unless they're essential. You need committee members to come away from the proposal knowing "These are the Scientific Questions I'm trying to answer", not just "These are the kinds of techniques I know". I know that some people may think that research statements are more of an issue for experimentalists, since the statements indicate a lot about lab and equipment needs. Believe me - research statements are important for all candidates. Committee members need to know where you're coming from and what you want to do - what kinds of problems interest you and why. The committee also wants to see that you actually plan ahead. These days it's extremely hard to be successful in academia by "winging it" in terms of your research program.
  • Be realistic about what undergrads, grad students, and postdocs are each capable of doing. If you're applying for a job at a four-year college, don't propose to do work that would require an experienced grad student putting in 60 hours a week.
  • Even if they don't ask for it, you need to think about what resources you'll need to accomplish your research goals. This includes equipment for your lab as well as space and shared facilities. Talk to colleagues and get a sense of what the going rate is for start-up in your area. Remember that four-year colleges do not have the resources of major research universities. Start-up packages at a four-year college are likely to be 1/4 of what they would be at a big research school (though there are occasional exceptions). Don't shave pennies - this is the one prime chance you get to ask for stuff! On the other hand, don't make unreasonable requests. No one is going to give a junior person a start-up package comparable to a mid-career scientist.
  • Pick letter-writers intelligently. Actually check with them that they're willing to write you a nice letter - it's polite and it's common sense. (I should point out that truly negative letters are very rare.) Beyond the obvious two (thesis advisor, postdoctoral mentor), it can sometimes be tough finding an additional person who can really say something about your research or teaching abilities. Sometimes you can ask those two for advice about this. Make sure your letter-writers know the deadlines and the addresses. The more you can do to make life easier for your letter writers, the better.
As always, more feedback in the comments is appreciated.

Wednesday, October 19, 2011

Science, communication, and the public

This week's issue of Nature includes an interesting editorial emphasizing how crucial it is that scientists and engineers learn how to communicate their value to the general populace.  This is something I've thought about for quite some time, as have a number of other people - see this article in Physics Today (subscription only, I'm afraid), this related blog post, and a discussion in the Houston Chronicle's science blog.

It's hard not to get down about this whole topic.  Industrial R&D funding (for projects with more than a year lead time) is a shadow of what it used to be, and looming fiscal austerity may well cripple federally funded basic research.  If companies aren't willing to invest for the long term, and government is unable or unwilling to invest for the long term, then technological innovation may shift away from the US.  If more of the general public and politicians appreciated that things like the iPad, XBox, the internet, and flat screen TVs didn't come out of nowhere, maybe the situation would be different. 

By the way, I find it interesting that the Nature editorial discusses looming cuts to Texas physics departments, a topic I mentioned here and was discussed in the New York Times, and yet our own Houston Chronicle hasn't bothered to write about them.  At all.  Even on their online science blog.  Yes, they're aware of the topic, too.  Clearly they've had more newsworthy things to worry about.

Monday, October 17, 2011

A few fun links.

I'm buried under a couple of pieces of work right now, but I did want to share a few fun science videos.

Here is a great example of magnetic levitation via superconductivity.

Those Mythbusters guys had a great time trying to make a giant Newton's Cradle using wrecking balls.  It didn't work well (and I assigned a homework problem looking at why this was the case).

I just heard yesterday that there's a full-length version of the theme song to the Big Bang Theory.  Pretty educational, though the lyrics imply that there'll be a Big Crunch, and we now know that's unlikely (see this year's Nobel in physics).

Here is a cool collection of videos, from minutephysics.  Good stuff! 

Tuesday, October 11, 2011

What's wrong with modern American economics.

According to this, Google stock may take a hit because their revenues only grew year-over-year by 30% this past quarter. Specifically, analysts are worried because Larry Page said that he cares more about the long term health of the company than goosing the stock price.  

What the hell is wrong with these people?  It's not enough that Google is making enormous profits.  It's not enough that Google's enormous revenues are 30% larger than they were last year.  Rather, the free market will apparently penalize Google because analysts expected the earnings to be 32% larger than last year's, and it's somehow a bad thing that the management has talked about prioritizing long-term health and growth.   How is this attitude by the financial sector at all a good thing?  This attitude is exactly why corporate long-term R&D has been nearly obliterated in the US.

Monday, October 10, 2011

Quasicrystals

I was going to do a post about quasicrystals and this year's chemistry Nobel, but Don Monroe has done such a good job in his Phys Rev Focus piece that there's not much more to say.  Read it!

The big conceptual change brought about by the discovery of quasicrystals was not so much the observation of five-fold and icosahedral symmetries via diffraction.  That was certainly surprising - you can't tile a plane with pentagons, so no periodic arrangement of atoms can fill space with those symmetries, and it was very hard to understand how such diffraction patterns could arise.  The real conceptual shift was realizing that it is possible to have nice, sharp diffraction patterns from nonperiodic (rather, quasiperiodic) arrangements of atoms.  The usual arguments about diffraction that are taught in undergrad classes emphasize that diffraction (of electrons or x-rays or neutrons) is very strong (giving 'spots') in particular directions because along those directions, the waves scattered by successive planes of atoms all interfere constructively.  Changing the direction leads to crests and troughs of waves adding with some complicated phase relationship, generally averaging to not much intensity.  In particular symmetry directions, though, the waves scattered by successive planes of atoms arrive in phase, as the distances traveled by the various scattered contributions all differ by integer numbers of wavelengths.  Without a periodic arrangement of atoms, it was hard to see how this could happen nicely.

It turns out that quasicrystals really do have a hidden sort of symmetry.  They are projections onto three dimensions of structures that would be periodic in a higher dimensional space.  The periodicity isn't there in the 3d projection (rather, the atoms are arranged "quasiperiodically" in space), but the 3d projection does contain information about the higher dimensional symmetry, and this comes out when diffraction is done in certain directions.  The discovery of these materials spurred scientists to reevaluate their ideas about what crystallinity really means - that's why it's important.  For what it's worth, the best description of this that I've seen in a textbook is in Taylor and Heinonen.
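
If you want to convince yourself that sharp diffraction doesn't require periodicity, here's a toy calculation of my own (not from the Focus piece): the Fibonacci chain, a standard one-dimensional quasiperiodic arrangement built from a substitution rule (equivalently, a projection from a 2d square lattice), gives a structure factor with sharp peaks even though the atomic positions never repeat.

```python
import numpy as np

# Toy 1d quasicrystal: the Fibonacci chain, generated by the substitution
# rule L -> LS, S -> L, with two incommensurate segment lengths (phi and 1).
phi = (1 + np.sqrt(5)) / 2
seq = "L"
for _ in range(12):                       # a few hundred segments
    seq = "".join("LS" if c == "L" else "L" for c in seq)

lengths = np.array([phi if c == "L" else 1.0 for c in seq])
x = np.cumsum(lengths)                    # atom positions - quasiperiodic, never repeating

# structure factor |sum_n exp(i q x_n)|^2 / N : sharp peaks despite no periodicity
q = np.linspace(0.1, 10, 4000)
S = np.abs(np.exp(1j * np.outer(q, x)).sum(axis=1))**2 / len(x)
print("strongest peaks near q =", np.sort(q[np.argsort(S)[-5:]]))
```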

Thursday, October 06, 2011

A modest proposal for Google, Intel, or the like.

A post on quasicrystals will be coming eventually....

Suppose you're an extremely successful tech company, and you want to make a real, significant impact on university research for the long term, because you realize that you need an educated, technically sophisticated workforce.  Rather than endowing individual professorships, or setting up one or two research centers, I have a suggestion.  Take $250M, and set up research equipment endowments at, say, the 50 top research universities.  Give each one $5M, with the proviso that the endowment returns be used for the purchase or maintenance of research equipment, and/or technical staff salary lines, as the institution sees fit.  That could buy one good-sized piece of equipment per year, or pay for several technical staff.  This would be a way for universities to replenish their research infrastructure over time without being dependent on federal equipment grants (which are undoubtedly useful, but tend to favor the exotic over the essential, and are likely to become increasingly scarce as fiscal austerity takes over for the foreseeable future).  Universities could also charge depreciation on that equipment when assessing user fees, making the whole system self-sustaining even beyond endowment returns.  Alternately, critical staff lines could be supported.  Anyone at a research university knows that a good technical staff member can completely reshape the way facilities (e.g., a cleanroom; a mass spec center) operate.  You put all the decision making on the university, with the proviso that they can't spend down the principal.   This strategy would boost research productivity across the country over time, get more and better equipment into the hands of future tech workers, and be a charitable write-off for the company that does it.  It could really make a difference.
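
In case the arithmetic isn't obvious, here's the back-of-the-envelope version (assuming a typical annual endowment payout rate of around 4.5% - my assumption, not a quoted figure):

```python
# Quick sanity check on the numbers behind the proposal above.
total = 250e6                            # total gift
n_universities = 50
per_school = total / n_universities      # $5M each
payout_rate = 0.045                      # assumed sustainable annual payout rate
annual = per_school * payout_rate        # ~$225k per school per year
print(f"${per_school/1e6:.0f}M endowment -> ~${annual/1e3:.0f}k per year")
# roughly one good-sized instrument, or a couple of technical staff lines, per year
```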

I'm completely serious about this, and would be happy to talk to any corporations (or foundations) about how this might work.  

Sunday, October 02, 2011

Nobel speculation time again

It's that time of year again - time to speculate about the Nobel Prizes.  Physics gets announced Tuesday, followed by Chemistry the next day.  While I feel almost obligated to mention my standard speculation (Aharonov and Berry for geometrical phases), it seems likely that this year's physics prize will be astro-themed, since there hasn't been one of those in a while.  Something related to dark matter perhaps (Vera Rubin for galaxy rotation curves?), though direct detection of dark matter may be a necessary precursor for that.  Inflationary cosmology gets mentioned (Guth and Linde?) by some.  Extrasolar planets?  Fine scale structure in the cosmic microwave background as a constraint on what the universe is made of?   There certainly has been a lot of astro excitement in the last few years....  Feel free to speculate in the comments.

Update:  The 2011 Nobel in Physics has been awarded to Saul Perlmutter, Brian Schmidt, and Adam Riess, for their discovery (via observations of type Ia supernovae) that not only is the universe expanding, but that the expansion is (apparently) accelerating.  Makes sense, in that this work certainly altered our whole view of the universe's fate.  Combined with other observations (e.g., detailed measurements of the cosmic microwave background), it would now seem that the universe's total energy density is 4% ordinary matter, 23% dark matter (gravitates but otherwise interacts very weakly with the ordinary matter), and 73% "dark energy" (energy density associated with space itself).  For a nice summary of the science, see here (pdf).  Congratulations to all!

Thursday, September 22, 2011

Superluminal neutrinos - a case study in how good science is done

As many people have now heard, the OPERA collaboration is reporting a very surprising observation.  OPERA is an experiment meant to study neutrino flavor oscillations, using a neutrino beam produced at CERN.  The idea is, the proton beam at CERN creates a beam of neutrinos.  Since neutrinos hardly interact with normal matter, they travel in a straight line right through the earth and pass through the experimental station at Gran Sasso, Italy, where some small fraction of them are detected.  There are (according to the Standard Model) three flavors of neutrinos: the electron neutrino, the muon neutrino, and the tau neutrino.  It has been determined experimentally that those flavors are not exact "mass eigenstates".  That means that if you start off with a tau neutrino of particular energy, for example, and let it propagate for a while, it will change into a muon neutrino with some probability that oscillates in time.  Anyway, OPERA wanted to study this phenomenon, and in doing so, they measured the time it takes neutrinos to go from their production point at CERN to the detector at Gran Sasso, using precisely synchronized clocks.  They also used differential GPS to measure the distance between the production point and the detector to within 20 cm.  Dividing the distance by the time, they found, much to their surprise, that the neutrinos appear to traverse the distance about 60 ns faster than would be expected if they traveled at the speed of light in vacuum.
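
For scale, here's a back-of-the-envelope check of my own (assuming the commonly quoted CERN-to-Gran Sasso baseline of roughly 730 km): 60 ns out of a light travel time of about 2.4 ms corresponds to a fractional speed excess of a few parts in 100,000.

```python
# Rough scale of the claimed effect - not the collaboration's analysis.
c = 2.998e8          # speed of light in vacuum, m/s
L = 7.3e5            # assumed baseline, m (approximate)
dt_early = 60e-9     # reported early arrival, s

t_light = L / c                      # expected light travel time (~2.4 ms)
frac_excess = dt_early / t_light     # (v - c)/c to first order
print(f"light travel time: {t_light*1e3:.3f} ms")
print(f"implied (v - c)/c ~ {frac_excess:.2e}")
```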

So, what could be going on here?  There are a few possibilities.  First, they could have the distance measurement wrong.  This seems unlikely, given the use of differential GPS and the sensitivity (they could clearly see the change in the distance due to a 2009 earthquake, as shown in Fig. 7 of the paper).  Second, they could have a problem in their synchronization of the clocks.  That seems more likely to me, given that the procedure is comparatively complicated.  Third, there is some other weird systematic at work that they haven't found.  Fourth, neutrinos are actually tachyons.  That would be all kinds of awesome, but given how challenging it would be to reconcile that with special relativity and causality, I'm not holding my breath.

Why is this an example of good science?  The collaboration spent three years looking hard at their data, analyzing it many different ways, checking and cross-checking.  They are keenly aware that a claim of FTL neutrinos would be the very definition of "extraordinary" in the scientific sense, and would therefore require extraordinary evidence.  Unable to find the (highly likely) flaw in their analysis and data, they are showing everything publicly, and asking for more investigation.  I want to point out, this is the diametric opposite of what happens in what I will term bad science (ahem.  Italian ecat guys, I'm looking at you.).   This is how real experimental science works - they're asking for independent reproduction or complementary investigation.  I hope science journalists emphasize this aspect of the story, rather than massively sensationalizing it or portraying the scientists as fools if and when a flaw is found.

Thursday, September 15, 2011

State of Texas threatens physics departments at smaller public universities

This article is both sad and frustrating.  The Texas state body that coordinates the public universities in this state has recommended that a number of them shut down their physics departments.   In particular, this affects two schools near Rice that historically serve African American students, Prairie View A&M and Texas Southern.   (Unfortunately, the article doesn't have a link to the actual Texas Higher Education Coordinating Board recommendations, so I don't have any further information, like which other universities here may be affected.)

Depressingly updated:  see NY Times story here.

I understand that financial times are tight for the state.  (Look at the "Texas Miracle" in action as we slash the state's education budget.)  The bit that really galls me is the rationale:  enrollment in the upper division courses is small, so we should eliminate the whole department.  This idea that somehow the only valuable and cost effective courses are those with large enrollment is ridiculous, and it seems to have infected the public university system in this state, driven by misguided, bean-counting thinktank types.  If you follow this reasoning all the way, we should only have large service courses, and never have upper division, specialized courses in anything, and of course all of these should be taught by non-tenure-track, non-research-active instructors.  That would surely cut costs.  It would also be a disaster in the long run. As is stated in this article, if you used the same criteria in terms of size of upper division courses across the country, you'd end up shutting down 2/3 of the physics departments in the US, to say nothing of other disciplines.  I can't imagine the situation is any better in, e.g., math, or chemical engineering, or any technical discipline.  I'd also love to see numbers about how much collegiate athletics is net costing the state in public funds, vs. how much it costs to keep these programs going.  Hint: most universities lose money on athletics.

I'd love to try to fix this, but given the politics here (hint:  Rick Perry likes these policies, and his political party controls both houses of the state legislature), it's hard to see a workable path forward.  It's not like this is going to be an honest debate about how to structure the state's higher education system (which we can and should have) - it's an ideological full-court press.   

Think I'm exaggerating?  The commissioner of the THECB, Raymund Paredes, is a close buddy of both Rick Perry and his pal Rick O'Donnell, the guy who thinks that a bachelor's degree even in a technical field should be obtainable for $10,000 total, period.  You could do that, of course, but it would involve converting our colleges and universities essentially into community colleges or correspondence schools.  I've yet to see any evidence that these guys have an appreciation for science or engineering at all.  They want UT and TAMU to play good football, and they espouse populist rhetoric about wanting to cut costs, but they don't seem to want academic excellence at universities.

Wednesday, September 14, 2011

Lab habits + data management

The reason I had been looking for that Sidney Harris cartoon is that I was putting together a guest lecture for our university's "Responsible Conduct of Research" course. I was speaking today about data management and retention, a topic I've come to know well over the last year through some university service work on policies in that area. After speaking, it occurred to me that it's not a bad idea to summarize important points on this for the benefit of student readers of this blog.  In brief:
  • Everything is data.  Not just raw numbers or images, but also the final analyzed graphs, the software used to do the analysis, the descriptions of the instrument settings used to acquire the raw numbers - everything.
  • The data are the science.  The data are the foundation for all the analysis, model-building, papers, arguments, further refinements, patents, etc.  Protect the data!
  • If you didn't document it, you didn't do it.
  • Write down everything.  Fill up notebooks.  Annotate liberally, including false starts, what you were thinking when you set up the little sub-experiments or trials that go into any major research endeavor.  I guarantee, you will never, ever in your life look back and say, "I regret that I was so thorough, and I wish I had written down less."  After years of observation, I am convinced that good notebook skills genuinely reduce mean time to thesis completion in many cases.  If you actually keep track of what you've been doing, and really write down your logic, you are less likely to go down blind alleys or have to repeat mistakes.
  • You may think that you own your data.  You don't, technically.  In an academic setting, the university has legal title to the data (that gives them the legal authority that they need to adjudicate disputes about access to data, including those that arise in the rare but unfortunate cases of research misconduct), while investigators are shepherds or custodians of the data.  Both have their own responsibilities and rights.  Some of those responsibilities are inherent in good science and engineering (e.g., the duty to do your best to make sure that the published results are accurate and correct, as much as possible), and others are imposed externally (e.g., federal funding agencies require preservation of data for some number of years beyond the end of an award).
  • Back everything up.  In multiple ways.  With the advent of scanners, digital cameras, cheap external hard drives, laptops, thumbdrives, "the cloud" (as long as it's better than this), etc., there is absolutely no excuse for not properly backing up data.  To repeat, back everything up.  No, seriously.  Have a backup copy at an off-site location, as a sensible precaution against disaster (fire, hurricane, earthquake, zombie apocalypse).  A minimal scripted example follows this list.
  • Good habits are habits, and must be habituated.  It took me more than 25 years to get in the habit of really flossing.  Do yourself a favor, and get in the habit of properly caring for your data.  Please.
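
Here's the minimal scripted example promised above - a dated archive of a data directory (the paths are placeholders; adapt them to your own machines, and keep at least one copy somewhere off-site):

```python
import shutil
from datetime import date
from pathlib import Path

# Minimal sketch of an automated, dated backup of a data directory.
# DATA_DIR and BACKUP_DIR are placeholders - point them at your own locations.
DATA_DIR = Path("~/lab_data").expanduser()
BACKUP_DIR = Path("/mnt/backup_drive")

stamp = date.today().isoformat()
archive = BACKUP_DIR / f"lab_data_{stamp}"
# creates lab_data_YYYY-MM-DD.zip containing everything under DATA_DIR
shutil.make_archive(str(archive), "zip", root_dir=str(DATA_DIR))
print(f"wrote {archive}.zip")
```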

Monday, September 12, 2011

Help finding a Sidney Harris cartoon

I am trying to find a particular Sidney Harris physics cartoon, and Google has let me down. The one I'm picturing has an obvious experimentalist at a workbench strewn with lab equipment. There's an angel on one shoulder, and a devil on the other. Anyone who has this cartoon, I'd be very grateful for a link to a scanned version! Thanks.

Wednesday, September 07, 2011

Single-molecule electric motor

As a nano person, I feel like I'm practically obligated to comment on this paper, which has gotten a good deal of media attention. In this experiment, the authors have anchored a single small molecule down to a single-crystal copper surface, in such a way that the molecule can pivot about the single anchoring atom, rotating in the plane of the copper surface. Because of the surface atom arrangement and its interactions with the molecule, the molecule has six energetically equivalent ways that it can be oriented on the metal surface. It's experimentally impressive that the authors came up with a way to track the rotation of the molecule one discrete hop between orientations at a time. This is only do-able when the temperature is sufficiently low that thermally driven orientational diffusion is suppressed. When a current of electrons is properly directed at the molecule, the electrons can dump enough energy into the molecule (inelastically) to kick the molecule around rotationally. In that sense, this is an electric motor. (Of course, while the rotor is a single small molecule, the metal substrate and scanning tunneling microscope tip are macroscopic in size.) The requirements for this particular scheme to work include cryogenic temperatures, ultrahigh vacuum, and ultraclean surfaces. In that sense, talk in the press release about how this will be useful for pushing things around and so forth in, e.g., medical devices is a bit ridiculous. Still a nice experiment, though.  I continue to find the whole problem of nanoscale systems driven out of thermal equilibrium (e.g., by the flow of "hot" electrons) to be fascinating - how is a steady state established, where does the energy go, where does irreversibility come into play, etc.
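
To see why cryogenic temperatures are needed, here's a quick Arrhenius estimate (the barrier height and attempt frequency below are hypothetical round numbers, just to show the scaling): thermally activated hopping that is essentially instantaneous at room temperature becomes utterly frozen out at a few kelvin.

```python
import numpy as np

# Arrhenius estimate of the thermally activated hopping rate between
# molecular orientations; barrier and attempt frequency are hypothetical.
kB = 8.617e-5            # Boltzmann constant, eV/K
barrier = 0.02           # assumed rotational barrier, eV
attempt = 1e12           # assumed attempt frequency, Hz

for T in (300, 77, 5):
    rate = attempt * np.exp(-barrier / (kB * T))
    print(f"T = {T:3d} K: ~{rate:.2e} hops/s")
```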

Friday, September 02, 2011

Playing with interfaces for optical fun and profit

A team at Harvard has published in Science a fun and interesting result.  When light passes from one medium to another, there are boundary conditions that have to be obeyed by the electromagnetic field (that is, light still has to obey Maxwell's equations, even when there's a discontinuity in the dielectric function somewhere).  Because of those boundary conditions, we end up with the familiar rules of reflection and refraction.  Going up a level in sophistication and worrying about multiple interfaces, we are used to having to keep track of the phase of the electromagnetic waves and how those phases are affected by the interfaces.  In fact, we have gotten good at manipulating those phases, to produce gadgets like antireflection coatings and dielectric mirrors (and on a more sophisticated level, photonic band gap materials).  What the Harvard team does is use plasmonic metal structures to pattern phase effects at a single interface.  The result is that they can engineer some bizarre reflection and refraction properties when they properly stack the deck in terms of phases.  Very cute.  I must confess, though, that since Federico Capasso was once my boss's boss at Bell Labs, I'm more than a little disturbed by the photo accompanying the physorg article.
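
As I understand it, the key result can be summarized as a generalized refraction law, n_t sin(theta_t) - n_i sin(theta_i) = (lambda_0/2*pi) dPhi/dx, where dPhi/dx is the phase gradient imposed along the interface by the antenna array. Here's a quick sketch with made-up illustrative numbers showing that such a phase gradient bends light even at normal incidence:

```python
import numpy as np

# Sketch of the generalized refraction law with an interfacial phase gradient.
# The wavelength and phase-gradient values below are illustrative, not from the paper.
n_i, n_t = 1.0, 1.5          # incident and transmission media
lam0 = 8e-6                  # free-space wavelength, m (mid-infrared, illustrative)
dphi_dx = 2 * np.pi / 15e-6  # assumed phase gradient along the interface, rad/m

theta_i = np.deg2rad(0.0)    # normal incidence
sin_theta_t = (n_i * np.sin(theta_i) + (lam0 / (2 * np.pi)) * dphi_dx) / n_t
theta_t = np.degrees(np.arcsin(sin_theta_t))
print(f"refracted angle at normal incidence: {theta_t:.1f} degrees (nonzero!)")
```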

Tuesday, August 30, 2011

Supersymmetry, the Higgs boson, the LHC, and all that

Lately there has been a big kerfuffle (technical term of art, there) in the blog-o-sphere about what the high energy physics experimentalists are finding, or not finding, at the LHC. See, for example, posts here and here, which reference newspaper articles and the like. Someone asked me what I thought about this the other day, and I thought it might be worth a post.

For non-experts (and in high energy matters, that's about the right level for me to be talking anyway), the main issues can be summarized as follows. There is a theoretical picture, the Standard Model of particle physics, that does an extremely good job (perhaps an unreasonably good job) of describing what appear to be the fundamental building blocks of matter (the quarks and leptons) and their interactions. Unfortunately, the Standard Model has several problems. First, it's not at all clear why many of the parameters in the model (e.g., the masses of the particles) have the values that they do. This may only be a problem with our world view, meaning the precise values of parameters may come essentially from random chance, in which case we'll just have to deal with it. However, it's hard to know that for sure. Moreover, there is an elegant (to some) theoretical idea called the Higgs mechanism that is thought to explain at the same time why particles have mass at all, and how the electroweak interaction has the strength and symmetry that it does. Unfortunately, that mechanism predicts at least one particle which hasn't been seen yet, the Higgs boson. Second, we know that the Standard Model is incomplete, because it doesn't cover gravitational interactions. Attempts to develop a truly complete "theory of everything" have, over the last couple of decades, become increasingly exotic, encompassing ideas like supersymmetry (which would require every particle to have a "superpartner" with the other kind of quantum statistics), extra dimensions (perhaps the universe really has more than 3 spatial dimensions), and flavors of string theory, multiverses, and whatnot. There is zero experimental evidence for any of those concepts so far, and a number of people are concerned that some of the ideas aren't even testable (or falsifiable) in the conventional science sense.

So, the LHC has been running for a while now, the detectors are working well, and data is coming in, and so far, no exotic stuff has been seen. No supersymmetric partners, no Higgs boson over the range of parameters examined, etc. Now, this is not scientifically unreasonable or worrisome. There are many possible scales for supersymmetric partners and we've only looked at a small fraction (though this verges into the issue of falsifiability - will theorists always claim that the superpartners are hiding out there just beyond the edge of what's measurable?). The experts running the LHC experiments knew ahead of time that the most likely mass range for the Higgs would require a *lot* of data before any strong statement can be made. Fine.

So what's the big deal? Why all the attention? It's partly because the LHC is expensive, but mostly it's because the hype surrounding the LHC and the proposed physics exotica has been absolutely out of control for years. If the CERN press office hadn't put out a steady stream of news releases promising that extra dimensions and superpartners and mini black holes and so forth were just around the corner, the reaction out there wouldn't be nearly so strong. The news backlash isn't rational scientifically, but it makes complete sense sociologically. In the meantime, the right thing to do is to sit back and wait patiently while the data come in and are analyzed. The truth will out - that's the point of science. What will really be interesting, from the history and philosophy of science perspective, will be the reactions down the line to what is found.

Wednesday, August 24, 2011

great post by ZZ

Before I go to teach class this morning, I wanted to link to this great post by ZapperZ about the grad student/research adviser relationship.  Excellent.

Saturday, August 20, 2011

Gating and "real" metals.

Orientation week has kept me very busy - hence the paucity of posts.  I did see something intriguing on the arxiv recently (several things, actually, but time is limited at the moment), though.

Suppose I want to make a capacitor out of two metal plates separated by empty space.  If I apply a voltage, V, across the capacitor using a battery, the electrons in the two plates shift their positions slightly, producing a bit of excess charge density at the plate surfaces.  One electrode ends up with an excess of electrons at the surface, so that it has a negative surface charge density.  The other electrode ends up with a deficit of electrons at the surface, and the ion cores of the metal atoms lead to a positive surface charge density.  The net charge on one plate is Q, and the capacitance is defined as C = Q/V.

So, how deep into the metal surfaces is the charge density altered from that in the bulk metal?  The relevant distance is called the screening length, and it's set in large part by the density of mobile electrons.  In a normal metal like copper or gold, which has a high density of mobile (conduction) electrons, on the order of 10^22 per cm^3, the screening length is comparable to an atomic diameter!  That's very short, and it tells you that it's extremely hard to alter the electronic properties of a piece of normal metal by capacitively messing about with its surface - you just don't mess with the electronic density in most of the material.  (This is in contrast to the situation in semiconductors or graphene, by the way, where a capacitive "gate" electrode can change the number of mobile electrons by orders of magnitude.)
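If you want to check that "atomic diameter" claim, here is the standard free-electron (Thomas-Fermi) estimate, with generic gold-like numbers that are my own ballpark inputs rather than anything tied to a particular measurement.

```python
# Textbook Thomas-Fermi screening: 1/lambda_TF^2 = e^2 g(E_F) / eps0, with
# g(E_F) = 3n/(2 E_F) for a free electron gas. Generic gold-like numbers.
import math

e    = 1.602e-19     # electron charge, C
eps0 = 8.854e-12     # vacuum permittivity, F/m
n    = 5.9e28        # conduction electron density, m^-3 (~6 x 10^22 per cm^3)
E_F  = 5.5 * e       # Fermi energy, J (~5.5 eV)

g_EF      = 3 * n / (2 * E_F)                 # density of states at E_F, J^-1 m^-3
lambda_TF = math.sqrt(eps0 / (e**2 * g_EF))   # Thomas-Fermi screening length, m
print(f"Thomas-Fermi screening length ~ {lambda_TF*1e9:.2f} nm")   # ~0.06 nm
```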

That's why this paper was surprising.  The authors use ionic liquids (essentially a kind of salt that's molten at room temperature) to modulate the surface charge density of gold films by something like 10^15 electrons per cm^2.  The surprising thing is that they claim to see large (e.g., 10%) changes in the conductance of quite thick (40 nm) gold films as a result of this.  This is weird.  For example, the total number of electrons per cm^2 already in such a film is something like (6 x 10^22 per cm^3) x (4 x 10^-6 cm) = 2.4 x 10^17 per cm^2.  That means that the gating should only be changing the 2d electron density by a few tenths of a percent.  Moreover, only the top 0.1 nm of the Au should really be affected.  The data are what they are, but boy this is odd.  There's no doubt that these ionic liquids are an amazing enabling tool for pushing the frontiers of high charge densities in CM physics....
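Here is the same back-of-the-envelope arithmetic in code form, using the rough numbers from the text (not the paper's measured values).

```python
# Fractional change in the sheet carrier density of a gated 40 nm Au film.
n_3d      = 6e22        # conduction electrons per cm^3 in Au (rough estimate)
thickness = 40e-7       # 40 nm expressed in cm
n_sheet   = n_3d * thickness          # ~2.4e17 electrons per cm^2 in the film
delta_n   = 1e15                      # ionic-liquid-induced change, per cm^2

print(f"sheet density     ~ {n_sheet:.1e} per cm^2")
print(f"fractional change ~ {delta_n / n_sheet:.1e}")   # a few tenths of a percent
```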

Sunday, August 14, 2011

Topological insulator question

I have a question, and I'm hoping one of my reader experts might be able to answer it for me.  Let me set the stage.  One reason 3d topological insulators are a hot topic these days is the idea that they have special 2d states that live at their surfaces.  These surface states are supposed to be "topologically protected" - in lay terms, this means that they are very robust; something deep about their character means that true back-scattering is forbidden.  What this means is, if an electron is in such a state traveling to the right, it is forbidden by symmetry for simple disorder (like a missing atom in the lattice) to scatter the electron into a state traveling to the left.  Now, these surface states are also supposed to have some unusual properties when particle positions are swapped around.  These unconventional statistics are supposed to be of great potential use for quantum computation.  Of course, to do any experiments that are sensitive to these statistics, one needs to do quantum interference measurements using these states.   The lore goes that since the states are topologically protected and therefore robust, this should be not too bad.

Here's my question.  While topological protection suppresses 180 degree backscattering, it does not suppress (as far as I can tell) small angle scattering, and in the case of quantum decoherence, it's the small angle scattering that actually dominates.  It looks to me like the coherence of these surface states shouldn't necessarily be any better than that in conventional materials.  Am I wrong about this?  If so, how?  I've now seen multiple papers in the literature (here, here, and here, for example) that show weak antilocalization physics at work in such materials.  In the last one in particular, it looks like the coherence lengths in these systems (a few hundred nanometers at 1 K) are not even as good as what one would see in a conventional metal film (e.g., high purity Ag or Au) at the same temperatures.  That doesn't seem too protected or robust to me....  I know that the situation is likely to be much more exciting if superconductivity is induced in these systems.  Are the normal state coherence properties just not that important?
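To put a number on the small-angle point: for spin-momentum-locked Dirac-like surface states scattering off a spin-independent impurity, the textbook result is a scattering weight proportional to cos^2(theta/2), which vanishes only at exactly 180 degrees. A two-line check (my own illustrative script, nothing material-specific):

```python
# Relative scattering weight vs. angle for helical (spin-momentum-locked) states
# and a spin-independent scatterer: |<theta|0>|^2 = cos^2(theta/2).
import math

for theta_deg in (0, 10, 30, 90, 150, 170, 180):
    weight = math.cos(math.radians(theta_deg) / 2) ** 2
    print(f"scattering angle {theta_deg:3d} deg: relative weight {weight:.3f}")
# The weight vanishes only at 180 degrees; at 10 or 30 degrees it is essentially
# unity, which is the point about small-angle scattering and dephasing above.
```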

Tuesday, August 09, 2011

DOE BES CMX PI mtg

Went for the cryptic headline.  I'm off to a Department of Energy Basic Energy Sciences Condensed Matter Experiment principal investigator meeting (the first of its kind, I believe) in the DC area.  This should be really interesting - a chance to get a perspective on the variety of condensed matter and materials physics being done out there.  It looks like it will be much more useful than a dog-and-pony show that I went to for one part of another agency a few years ago....

Monday, August 08, 2011

Evolution of blogger spam

Over the last couple of weeks, new forms of spam comments have been appearing on blogger. One type takes a sentence or two from the post itself, and feeds them through a parser reminiscent of ELIZA, to produce a vaguely coherent statement in a comment. Another type that I've noticed grabs a sentence or two from an article that was linked in the original post. A third type combines these two, taking a sentence from a linked article, and chewing on it with the ELIZA-like parser. A few more years of this, and we'll have the spontaneous evolutionary development of generalized natural-language artificial intelligence from blogger spam....
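In case the mechanism isn't clear, here is a toy version of the kind of sentence-mangling I mean - a crude ELIZA-style substitution pass wrapped in a stock comment template. This is just a caricature for illustration, not an attempt to reproduce any actual spam bot.

```python
# Toy ELIZA-style rewriter: reflect a few pronouns and drop the result into a
# canned comment template. Purely illustrative.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "your": "my", "you": "I"}
TEMPLATES = ["Very interesting that {}.", "I was wondering why {}.", "So true, {}!"]

def mangle(sentence: str, i: int = 0) -> str:
    words = [REFLECTIONS.get(w.lower(), w) for w in re.findall(r"[\w']+", sentence)]
    return TEMPLATES[i % len(TEMPLATES)].format(" ".join(words).lower())

print(mangle("I wanted to link to this great post about the adviser relationship"))
# -> "Very interesting that you wanted to link to this great post about the adviser relationship."
```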

Friday, August 05, 2011

Summer colloquium

Every year at Rice in early August, the Rice Quantum Institute (old website) (shorthand: people who care about interdisciplinary science and engineering involving hbar) has its annual Summer Colloquium. Today is the twenty-fifth such event. It's a day-long miniconference, featuring oral presentations by grad students, and posters by both grad students and undergrad researchers from a couple of REU programs (this year, the RQI REU and the NanoJapan REU). It's a full day, with many talks. It's a friendly way for students to get more presentation experience, and a good way for faculty to learn what their colleagues are doing. I'd be curious to know if other institutions have similar events - my impression has been that this is comparatively rare, particularly in its very broad interdisciplinary scope (e.g., talks on spectroscopy for pollution monitoring, topological insulators, plasmons, carbon nanotube composites, batteries) and its combination of undergrads and grad students.

Thursday, July 28, 2011

Plutonium: a case study in why CM physics is rich

At the heart of condensed matter physics are two key concepts: the emergence of rich phenomena (including spontaneously occurring order - structural, magnetic, or otherwise) in the many-particle limit; and the critical role played by quantum mechanics in describing the many-body states of the system. I've tried to explain this before to lay persons by pointing out that while complicated electronic structure techniques can do an adequate job of describing the electronic and vibrational properties of a single water molecule at zero temperature, we still have a difficult time predicting really emergent properties, such as the phase diagram of liquid, solid, and vapor water, or the viscosity or surface tension of liquid water.

Plutonium is an even more striking example, given that we cannot even understand its properties from first principles when we only have a single type of atom to worry about. The thermodynamic phase diagram of plutonium is very complicated, with seven different crystal structures known, depending on temperature and pressure. Moreover, as a resident of the actinide row of the periodic table, Pu has unpaired 5f electrons, though it is not magnetically ordered. At the same time, Pu is very heavy, with 94 total electrons, so that relativistic spin-orbit effects can't be neglected in trying to understand its structure. The most sophisticated electronic structure techniques out there can't handle this combination of circumstances. It's rather humbling that more than 70 years after its discovery/synthesis, we still can't understand this material, despite the many thousands of person-hours spent on it via various nations' nuclear weapons programs.

Sunday, July 24, 2011

Einstein, thermodynamics, and elegance

Recently, in the course of other writing I've been doing, I came again to the topic of what are called the Einstein A and B coefficients, and it struck me that this has to be one of the most elegant, clever physics arguments ever made.  It's also conceptually simple enough that I think it can be explained to nonexperts, so I'm going to give it a shot.

Ninety-four years ago, one of the most shocking ideas in physics was the concept of the spontaneous, apparently random, breakdown of an atomic system.  Radioactive decay is one example, but even light emission from an atom in an excited state will serve.  Take ten hydrogen atoms, all in their first electronically excited state (electron kicked up into a 2p orbital from the 1s orbital).  These will decay back into the 1s ground state (spitting out a photon) at some average rate, but each one will decay independently of the others, and most likely at a different moment in time.  To people brought up in the Newtonian clockwork universe, this was shocking.  How could truly identical atoms have individually differing emission times?  Where does the randomness come from, and can we ever hope to calculate the rate of spontaneous emission?

Around this time (1917), Einstein made a typically brilliant argument:  While we do not yet know [in 1917] how to calculate the rate at which the atoms transition from the ground state "a" to the excited state "b" when we shine light on them (the absorption rate), we can reason that the rate of atoms going from a to b should be proportional to the number of atoms in the ground state (N_a) and the amount of energy density in the light available at the right frequency (u(f)).  That is, the rate of transitions "up" = B_ab N_a u(f), where B_ab is some number that can at least be measured in experiments.  [It turns out that people figured out how to calculate B using perturbation theory in quantum mechanics about ten years later.]  Einstein also figured that there should be an inverse process (stimulated emission) that causes transitions downward from b to a, with a rate = B_ba N_b u(f).  However, there is also the spontaneous emission rate = A_ba N_b, where he introduced the A coefficient.

Here is the brilliance.  Einstein considered the case of thermal equilibrium between atoms and radiation in some cavity.  In steady state, the rate of transitions from a to b must equal the rate of transitions from b to a - no atoms are piling up in the ground or excited states.  Moreover, from thermodynamics, in thermal equilibrium the ratio of N_b to N_a should just be a Boltzmann factor, exp(-E_ab/k_B T), where E_ab is the energy difference between the two states, k_B is Boltzmann's constant, and T is the temperature.  From this, Einstein showed that the two Bs must be equal, solved for the unknown A in terms of B (which can be measured and nowadays calculated), and showed that the energy density of the radiation, u(f,T), is Planck's blackbody formula.
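For those who want the bookkeeping spelled out, here it is in compact form (standard textbook steps, in the notation above, with E_ab = hf):

```latex
% Detailed-balance bookkeeping in the notation above (E_ab = h f); standard
% textbook steps, written out for completeness.
\begin{align*}
  B_{ab} N_a\, u(f) &= A_{ba} N_b + B_{ba} N_b\, u(f)
      && \text{(steady state: up rate = down rate)} \\
  \frac{N_b}{N_a} &= e^{-h f / k_B T}
      && \text{(thermal equilibrium)} \\
  \Rightarrow\quad u(f) &= \frac{A_{ba}}{B_{ab}\, e^{h f / k_B T} - B_{ba}}
\end{align*}
% Demanding that this equal Planck's blackbody formula at every temperature forces
% B_{ab} = B_{ba} \equiv B and A_{ba}/B = 8\pi h f^3 / c^3.
```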

My feeble writing here doesn't do this justice.  The point is, from basic thermodynamic reasoning, Einstein made it possible to derive an expression for the spontaneous emission rate of atoms, many years in advance of the theory (quantum electrodynamics) that allows one to calculate it directly.  This is what people mean by the elegance of physics - in a few pages, from proper reasoning on fundamental grounds, Einstein was able to deduce relationships that had to exist between different physical parameters; and these parameters could be measured and tested experimentally.  For more on this, here is a page at MIT that links to a great Physics Today article about the topic, and an English translation of Einstein's 1917 paper.  

Thursday, July 21, 2011

Slackers, coasters, and sherpas, oh my.

This is mostly for my American readers - be forewarned.

I wrote last year about a plan put forward by Rick O'Donnell, a controversial "consultant" hired by the state of Texas (hint: Gov. Rick Perry, apparent 2012 presidential hopeful, wanted this guy) to study the way public universities work in Texas. Specifically, O'Donnell came from a think tank with a very firm, predetermined concept of higher education: faculty are overpaid slackers who are ripping off students, and research is not of value in the educational environment. O'Donnell has written a report (pdf) about this topic, and he's shocked, shocked to find that he was absolutely right. By his metrics of number of students taught and research dollars brought in, he grouped faculty at UT and Texas A&M into "Dodgers, Coasters, Sherpas, Pioneers, and Stars". Pioneers are the people who bring in big grants and buy out of teaching. Stars are the people who bring in grants and teach large lecture classes. Sherpas are mostly instructors (he doesn't seem to differentiate between instructors and faculty) who lecture to large classes but don't bring in grants. Dodgers teach small classes and don't bring in grant money. Coasters teach small classes and bring in some grant money.

This is the exact incarnation of what I warned about in comments on my old post. This analysis basically declares that all social science and humanities faculty that teach upper division classes are worthless leeches (small classes, no grants) sponging off the university. People in the sciences and engineering who teach upper level classes aren't any better, unless they're bringing in multiple large research grants. Oh, and apparently the only metric for research and scholarship is money.

Nice. Perry, by the way, also appointed Barbara Cargill to run the state board of education. She's a biologist who wants evolution's perceived weaknesses to be emphasized in public schools, and she also was upset because the school board only has "six true conservative Christians" as members. I guess Jews, Muslims, Buddhists, Hindus, and atheists need not apply.  Update:  It looks like Texas has dodged creationism for another couple of years.  Whew.

Wednesday, July 20, 2011

What is so hard about understanding high temperature superconductivity?

As ZZ has pointed out, Nature is running a feature article on the history of high temperature superconductivity over the last 25 years. I remember blogging about this topic five years ago when Nature Physics ran an excellent special issue on the subject. At the time, I wrote a brief summary of the field, and I've touched on this topic a few times in the intervening years. Over that time, it's pretty clear that the most important event was the discovery of the iron-based high temperature superconductors. It showed that there are additional whole families of high temperature superconducting materials that are not all copper oxides.

Now is a reasonable time to ask again, what is so hard about this problem? Why don't we have a general theory of high temperature superconductivity?  Here are my opinions, and I'd be happy for more from the readers.
  • First, be patient.  Low-T superconductivity was discovered in 1911, and we didn't have a decent theory until 1957.  By that metric, we shouldn't start getting annoyed until 2032.  I'm not just being flippant here.  The high-Tc materials are generally complicated (with a few exceptions) structurally, with large unit cells, and lots of disorder associated with chemical doping.  This is very different than the situation in, e.g., lead or niobium.
  • Electron-electron interactions seem to be very important in describing the normal state of these materials.  In the low-Tc superconductors, we really can get quite far in understanding the normal-state starting point.  Aluminum is a classic metal, and you can do a pretty good job getting quantitative accuracy on its properties from the theory side even in single-particle, non-interacting treatments (basic band theory).  In contrast, the normal states of the high-Tc materials are tricky.  Heck, the copper oxide parent compound is a Mott insulator - a system that single-particle band structure tells you should be a metal, but is in fact insulating because of the electron-electron repulsion!  
  • Spin seems to be important, too.  In the low-Tc systems, spin is unimportant in the normal state, and in the superconducting state the electrons simply pair up with opposite spins so that the net spin of each pair is zero - but that's about it.  In high-Tc systems, on the other hand, the normal state very often involves magnetic order of some sort, and spin-spin interactions may well be important.
  • Sample quality has been a persistent challenge (particularly in the early days).
  • The analytical techniques that exist tend to be indirect or invasive, at least compared to the desired thought experiments.  This is a persistent challenge in condensed matter physics.  You can't just go and yank on a particular electron to see what else moves, in an effort to unravel the "glue" that holds pairs together (though the photoemission community might disagree).  While the order parameter (describing the superconducting state) may vary microscopically in magnitude, sign, and phase, you can't just order up a gadget to measure, e.g., phase as a function of position within a sample.  Instead, experimentalists are forced to be more baroque and more clever.
  • Computational methods are good, but not that good.  Exact solutions of systems of large numbers of interacting electrons remain elusive and computationally extremely expensive (see the rough sketch of the scaling problem just below this list).  Properly dealing with strong electronic correlations, finite temperature, etc., remains a challenge.
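As a crude illustration of that last point, consider how fast the Hilbert space of even a minimal lattice model of interacting electrons grows with system size. The numbers below are generic, not tied to any particular material or calculation.

```python
# Hilbert-space size of a single-band Hubbard model (4 states per site), with and
# without using particle-number and spin conservation. Generic illustrative numbers.
from math import comb

for n_sites in (8, 12, 16, 20, 24):
    full_dim = 4 ** n_sites
    # restrict to half filling with S_z = 0, i.e. N_up = N_dn = n_sites / 2
    sector_dim = comb(n_sites, n_sites // 2) ** 2
    bytes_per_vector = 16 * sector_dim   # one complex double per amplitude
    print(f"{n_sites:2d} sites: full dim ~ {full_dim:.1e}, "
          f"symmetry sector ~ {sector_dim:.1e}, "
          f"one state vector ~ {bytes_per_vector:.1e} bytes")
# Already at 20 sites a single state vector needs ~0.5 TB, before one even thinks
# about finite temperature or realistic multi-band models.
```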
Still, it's a beguiling problem, and now is an exciting time - because of the iron compounds, there are probably more people working on novel superconductors than at any time since the heady days of the late '80s, and they're working with the benefit of all that experience and hindsight.  Maybe I won't have to write something like this for the 30th high-Tc anniversary in 2016....