Friday 28 May 2010

3 Quarks Daily Best Blog Entry Prize

3 Quarks Daily is having Richard Dawkins choose the best science blog entry of the year. Nominations end in 3 days, on the 31st of May, and the prize will be announced on the 21st of June, after the public has had the chance to vote on the nominated entries.

If you liked any of the entries here, I would like to encourage you to nominate it for the prize. All you have to do is click here and post the link to the blog entry you liked in the comments section.

Well, the competition is tough, but there is always some hope. :)

Thursday 27 May 2010

Scientific Art



Art of Science 2010 from Princeton Art of Science on Vimeo.

I have wanted to post this for some time, so let me do it before it becomes old news. For the fourth time, Princeton University (the legendary one of Einstein and many others) held the Art of Science competition, with this year's theme being Energy. The video above is a slideshow of the competition's works (don't blame me for the music, I wasn't the one who chose it!). These are the first three prizes:

First Prize
Xenon Plasma Accelerator
by Jerry Ross

Second Prize
Therapeutic Illumination
by David Nagib

Third Prize
Neutron Star Scattering off a Super Massive Black Hole
by Tim Koby

There were many more quite beautiful pictures in the contest, and you can see the rest of them, as well as the past galleries, on the competition's website. You can also read some extra information in the physicsworld.com blog post:

Tuesday 25 May 2010

A New Dark Age

At the university we are always discussing the present problems with the general politics of science funding. I cannot say the discussion has changed much in the last years: ever since I started to worry about it, funding has been directed to more "practical" areas rather than to fundamental research, "practical" meaning with a higher probability of generating money faster. Of course this is expected. The monetary return from more applied research carries less risk and comes more quickly than from fundamental research, and we all know that funding is not really about knowledge, but about money. People will invest in you if they can get some profit from it, and as they do not live forever, they will not be willing to wait decades for the results, no matter how fantastic those may be.

I will not try to preach about how important fundamental research is, how only fundamental research brings revolutions and so on, although I will not resist writing a bit about it in a moment. Nor will I try to convince anyone that the LHC generates as many jobs as the construction of a bridge over a river (actually probably many more), with the added advantages that many of those jobs will be long-term ones, that many technological advances and solutions to problems will come from it, and that a huge amount of knowledge will be generated. Instead, I will take advantage of this post by Sabine Hossenfelder:
to express my deep concern about the direction we are heading in. Read the post and you will understand her position, although the title is clear enough. Our main point is that knowledge does not need to be useful in the mundane sense of the word, which obviously means that it does not need to generate money right away.

The most standard argument here is that the advancement of pure knowledge now can bring surprising revolutions in the future. The consequences can be huge and unpredictable. Again, the usual example is quantum mechanics. It is not difficult to argue that solving the problem of the black-body spectrum was just an academic issue around 1900, but it turns out that that little detail, which would surely struggle to get funding today, brought us computers, the laser, magnetic resonance and pretty much all of our present technology.
But I am not going to use this argument, because it is still not exactly the message I want to leave, and after all any person with some vision can understand that. Obviously, if you are worried about being re-elected in the next 4 years, it does not matter to you what science will bring to the world in 10, right? Do you think I am exaggerating? Then read these two articles from Times Higher Education:
And these are just some of the news. Physics, philosophy and other fundamental courses are being closed all around the world! It is not a localised phenomenon. You may step up and say: Oh, come on! As if philosophy were useful in some way. First of all, I have a deep respect for philosophy. Physics came from it. And philosophers do what many scientists don't dare: they think beyond the limits of what we know and of what we may know. It is an exploration more than anything else, and it is exactly here that I want to make my point.

What is being relegated to second place everywhere is the act of thinking. The most powerful product of knowledge is not the money, the technology, the health improvements. These may be wonderful by-products, but I will dare to say that they are not the main ones. The really important aspect, the one most feared by the minority at the upper levels of the food chain, is that knowledge modifies a person. Each time an individual acquires more knowledge, that person sees the world in a different way. The person starts to question what was unquestionable before, becomes aware of what is going on around her and learns to change prejudices, to identify injustices and to fight for what really matters. Knowledge changes people. That is the most important and powerful result of knowledge. Not generating money, not even generating jobs, but improving people!

Think about that, and think hard. Everyone knows the story of the frog that jumps out if you drop it into a bowl of boiling water, but stays there until it dies if you put it in cold water and raise the temperature steadily until it is cooked alive. The former Dark Ages also arose slowly. Back then religion was the enemy to blame, but what lies behind it is always power. Today it may be money, oil or whatever else is important to keep control of the world, but we are slowly being boiled in a pan, and people must realise it before we end up cooked like the frog. It is obviously more convenient just to continue living our lives and doing our part, doing what we are told to do. Much simpler. But I believe there are still people who can see beyond that. Hope is not over yet. As we say in Brazil, hope is the last one to die.

I will finish with a phrase I read on some blog, although I do not remember exactly where (so if you can identify it, I will happily put the link here): Think. It is not illegal yet.

Sunday 23 May 2010

Apollo 11 Saturn V Launch (HD) Camera E-8


Apollo 11 Saturn V Launch (HD) Camera E-8 from Mark Gray on Vimeo.

They also have a website named Spacecraft Films with a very nice collection from virtually the entire US space program. You can watch the movies and buy the DVDs if you are a real fan. Via Open Culture

Friday 21 May 2010

About Life

Mycoplasma mycoides - AP Photo/J. Craig Venter Institute

I usually don't like repeating other people's posts, especially because people have probably read about it everywhere else before reading it here. :( But I had been thinking about writing a post about life for some time, so I will use the opportunity to do so.

As news travels fast, everybody by now must know about the creation of an "artificial" cell by the J. Craig Venter Institute. The paper was published in Science Express:

[1] Gibson et al., Creation of a Bacterial Cell Controlled by a Chemically Synthesized Genome, Science Express, 20 May 2010, doi:10.1126/science.1190719

As is well explained in this discussion in Nature (you can also find another nice discussion at the Edge that includes the opinion of Freeman Dyson), what they did was construct a pre-designed DNA from parts extracted from a unicellular organism called Mycoplasma mycoides (the one in the photo above), reassemble it and implant it into another of these organisms from which the original DNA had previously been extracted. There are no synthetic components in the usual sense of the word; what was "synthetic" was the design of the new DNA, which did not exist before. They temporarily gave the new organism the beautiful and creative name Mycoplasma mycoides JCVI-syn1.0 (the guy is already going to win a Nobel and still wanted to immortalise his initials in the poor cell).

The feat is not small, though. It has important implications and opens many doors. One of the important things is that it serves to test some of our present knowledge about how DNA works. DNA is a program, and what they are doing is writing test programs to check how much we really know about the programming language. Given that the created cell seems to function and reproduce normally, it shows that we at least understand the basics of writing these programs.

Think about the cell as a very delicate computer that will interpret a program encoded in the form of DNA. Mathematically, it's okay to see the cell as a Turing machine with the DNA being the input tape (to see what I am talking about, look at the first YouTube video in the sidebar to your left). Consequently, one of the doors their work opens is that it is possible to vary the program and see what happens.
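Just to make the analogy a bit more concrete, here is a toy Turing machine written in Python. Everything in it is my own invention for illustration (the rule table below has nothing to do with how a real cell reads DNA); the point is only that a fixed "interpreter", the machine, plus an input tape is already enough to process information.

def run_turing_machine(tape, rules, state="scan", blank="_", max_steps=1000):
    """Run a one-tape Turing machine and return the final tape as a string."""
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in rules:   # no rule for this situation: halt
            break
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Toy rule table: scan the "DNA" left to right, swapping A<->T and C<->G
# (a crude nod to base pairing), and halt at the first blank.
rules = {
    ("scan", "A"): ("scan", "T", "R"),
    ("scan", "T"): ("scan", "A", "R"),
    ("scan", "C"): ("scan", "G", "R"),
    ("scan", "G"): ("scan", "C", "R"),
}

print(run_turing_machine("ATGCCGTA", rules))   # prints TACGGCAT

The same machinery with a different rule table would compute something completely different from the same tape, which is more or less what varying either the program or the interpreter means here.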

Before I continue to talk about life, let me just list here some articles and blog posts about the JCVI team's feat:


I have been to some conferences on complex systems lately, and many people have been trying to create artificial organisms really from scratch. It is a different kind of strategy: instead of using existing DNA, the idea is to discover exactly what the minimal assemblage of parts is that is capable of creating a life-form when put together. Well, everybody knows that the definition of life is problematic, but in this case the idea is to create a self-replicating thing that is autonomous, in the sense that once created, it does not need our assistance to survive and reproduce anymore. I believe many groups are working on that, but I cannot find the references right now. If somebody would like to point some of them out, I would be glad to include them here.

In any case, you see that life may have different meanings and definitions. You may have been convinced by someone that in order for an organism to be alive it must reproduce, right? Wrong. This is a species-based definition. Reproduction is important for natural selection, but a sterile man is still alive, even though he is not able to reproduce. An individual-based definition of life would be one that lets you take any "individual" in the universe and, just by looking at it, decide whether it is alive or not. Not an easy task, of course.

When a definition is difficult to attain, the way around it is to identify some properties that this definition must include. This is equivalent to saying that if we cannot find sufficient and necessary conditions for something, we can concentrate either on sufficient or on necessary ones. I have been discussing this with some colleagues, and one condition I really consider necessary for any individual to be alive is the capacity to process information. In a more mathematical way, I would say that any living organism must be a Turing machine, of course not necessarily a universal one. Just to make it clear, this is a necessary, not a sufficient, condition. Not all Turing machines are alive, although I would never consider anything that cannot change its state according to some input information to be alive.

So, the one point I want to make here is that life requires information processing. Unfortunately, this is the only necessary condition on my list, although I think something can already be explored by using it. For instance, I have read many comments on Cosmic Variance about life and the second law. This is actually quite an interesting discussion: does life depend on, or is it invariably linked to, the increase of entropy? One way to attack this problem is to look at the necessary conditions for life and see whether they require it. Now, I must draw attention to something called reversible computation. Rolf Landauer, some time ago, proposed the famous idea that the erasure of one bit of information increases the entropy of the environment by an amount $k_B \ln 2$, where $k_B$ is Boltzmann's constant. Usual computing gates, like the AND or OR gates, take two bits and give one in return. Modern computation is based on the use of these gates and, according to the so-called Landauer Principle, they are necessarily dissipative and increase the entropy of the environment.
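Just to have an idea of the numbers involved, here is the back-of-the-envelope arithmetic behind Landauer's bound in Python. The temperature is an assumption of mine (room temperature); only the formula $k_B T \ln 2$ comes from the principle itself.

import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # an assumed room temperature, K

q_per_bit = k_B * T * math.log(2)       # minimum heat dissipated per erased bit
print(f"Landauer limit at {T:.0f} K: {q_per_bit:.3e} J per bit")
print(f"Erasing 8e9 bits (about 1 GB): {8e9 * q_per_bit:.3e} J")

The numbers are tiny compared with what real electronics dissipates, which is exactly why the bound is interesting as a matter of principle rather than of engineering.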

However, there is another kind of gate which takes 2 inputs and gives back another 2 in such a way that there is a one-to-one correspondence between inputs and outputs, by which I mean that given the outputs and the knowledge of which gate was applied, you can recover the inputs. An example is the CNOT gate. CNOT is reversible and does not erase information. According to Landauer's Principle, CNOT does not necessarily increase entropy. Now, if there is any way, at least in principle, in which an organism could process information using reversible instead of dissipative gates, it would not need to increase entropy, at least for those processes.
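A two-line Python sketch makes the reversibility explicit. The classical CNOT flips the target bit if and only if the control bit is 1, and applying it twice gives the original pair back, so nothing is erased:

def cnot(control, target):
    return control, target ^ control   # XOR the control into the target

for a in (0, 1):
    for b in (0, 1):
        out = cnot(a, b)
        print((a, b), "->", out, "-> back to", cnot(*out))

Every output pair appears exactly once (the map is a bijection), unlike AND or OR, which squeeze four input pairs into a single output bit.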

Although this is not such a big result, it demonstrates that formalising and analysing some fundamental aspects of life is possible. You may say I am biased, and I completely agree, but these concepts seem to be nicely described in the framework of information theory (as well as thermodynamics/statistical physics, which in the end are all related). For instance, note that even reproduction is related to transmitting information from one individual to another. But I will deal with that in another post.

Thursday 20 May 2010

StatPhys and ECCs #2: Statistical Physics Overview

For those who still remember, I have decided to talk about my work on statistical mechanics (statphys) applied to error-correcting codes (ECCs). Yes, that is the work that kept me away from writing this blog for the last three years... In the first part of this "series" I wrote about the basic concepts involved in linear error-correcting codes. The discussion was all about information theory. Today I am going to write a bit about the other side of the coin: statistical physics.

Probably many of you have already studied statistical physics and know the fundamental concepts. It is interesting, however, that statistical physics is not as popular with the broader non-specialist audience as other areas of physics. I cannot blame anyone, as I ended up studying statistical physics by chance and only realised how interesting it is afterwards.

Statistical physics was born with the theory of gases. It was a very logical step. The idea was that, if matter was made of atoms obeying the laws of Newtonian mechanics, then it should somehow be possible to derive the laws of thermodynamics from a microscopic description of the system based on these premises. Well, the truth is that many people did not believe in atoms at that time; Mach, for instance, was one of them.

It is well known that Boltzmann (the guy on the right) was the great name of statistical mechanics, although many other great physicists, like Maxwell and Einstein, also contributed largely to the field. It is also known that Boltzmann killed himself after dedicating his life to statistical physics, in part because of the widespread disbelief in the notion of the atom, a disbelief that lasted until, of course, Einstein published his paper on Brownian motion.

Statistical physics is the area of physics (at least it started in physics...) that deals with the behaviour of a large number of interacting units and the laws that govern it. It started with gases, but the most famous model of statistical physics is called the Ising model (see here the curious story of Ernst Ising). The Ising model is a mathematical model for many units interacting among themselves. It is strikingly simple, but surprisingly general and powerful. It is defined in terms of a Hamiltonian, the function defining the energy of a system. It was devised to explain magnetic materials and so is a function of the spins of the electrons in those materials, which are symbolised by $\sigma_i$, where $i$ runs from 1 to N, the total number of spins. In terms of spins, the important interaction is called the exchange interaction, and it favours spins being aligned with each other. As a spin can take two values, +1 and -1, the interaction favours the case where the product of two spins is 1. As all systems in nature want to minimise their energy, we can then write the Hamiltonian as

\[H=-\sum_{\left\{i,j\right\}} \sigma_i \sigma_j,\]

where the symbol under the summation sign says that we must sum over all the pairs that are interacting.
It turns out that this simple model of interaction, properly generalised, can describe not only the interaction between two spins, but also between two or more persons, robots in a swarm, molecules in a gas, bits in a codeword and practically EVERY interaction between ANYTHING. And I really mean it. Of course there is no free lunch, and the calculations become more difficult as you increase the sophistication of the system, but the fundamental idea is the same. For example, the usual way the Ising model is defined is with spins on a regular lattice, which in one dimension is a straight line with spins located at equally spaced points, in two dimensions is a square grid, in three a cubic one (as the one at the side) and so on.
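A minimal sketch in Python may help to see how little is needed. This is just the energy formula above evaluated on a one-dimensional chain with nearest-neighbour pairs; the chain length and the configurations are arbitrary choices of mine.

import random

def ising_energy(spins):
    """H = - sum over neighbouring pairs of s_i * s_j (open chain)."""
    return -sum(s1 * s2 for s1, s2 in zip(spins, spins[1:]))

aligned = [+1] * 10
disordered = [random.choice([-1, +1]) for _ in range(10)]

print("all aligned :", ising_energy(aligned))      # -9, the minimum for 10 spins
print("random spins:", ising_energy(disordered))   # higher (less negative) in general

Aligning all the spins minimises the energy, which is the whole content of the ferromagnetic interaction described above.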

The simplest choice is also to consider that only first neighbours interact, which means that, for instance, in the cubic lattice above spins will interact only if there is an edge linking them. The one-dimensional model is exactly solvable, and so is the two-dimensional one, though it took many years and a huge mathematical effort by Lars Onsager (a Nobel Prize winner in Chemistry) to solve it. The three-dimensional one is already NP-complete and therefore practically hopeless (until some alien comes to Earth and prints the proof that P=NP in the countryside with crop circles).

Then things got more complicated when people started to wonder what happens when the interaction between the spins is different for each edge of the lattice. The most interesting case turned out to be when it is randomly distributed over the lattice. The conveniently generalised Ising model is written as

\[H=-\sum_{\left\{i,j\right\}} J_{ij} \sigma_i \sigma_j,\]

where now the $J_{ij}$ control the characteristics of the interaction. Note that if $J$ is positive the interaction, as before, favours alignment and is called a ferromagnetic interaction, while if $J$ is negative, it favours anti-alignment and is correspondingly called anti-ferromagnetic. If we restrict ourselves to a simple situation where $J$ can be either +1 or -1 at random on the lattice, we discover that this is a highly complicated case which gives rise to a kind of behaviour of our system called a spin glass state.

This kind of randomness is called disorder. In particular, if for every measurement we make on our system we fix the interactions to one configuration, do our measurements, then change it, fix it and measure again, and so on, the disorder receives the name of quenched disorder. I will write a longer post about disorder soon, as it is a huge and interesting topic. There are two other places in the Ising model where disorder can appear. One is in the form of the lattice. It does not need to be a regular one; it can be any configuration of points linked by as many lines as you can imagine, what we call a random graph. You can also imagine that the interaction involves more than two spins: it can involve three or, more generally, $p$ spins multiplied together like $\sigma_{i_1}\sigma_{i_2}\cdots\sigma_{i_p}$. Finally, it can appear in the form of random local fields, a field being a variable $h_i$ multiplying the $i$-th spin, something that can be written for $p=2$ (our usual two-spin interaction) as

\[H=-\sum_{\left\{i,j\right\}} J_{ij}\sigma_i\sigma_j -\sum_i h_i \sigma_i \]
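Extending the little sketch above to this disordered Hamiltonian is straightforward. Here one sample of quenched disorder ($J_{ij}=\pm 1$ bonds and Gaussian fields $h_i$, both arbitrary choices of mine) is drawn once and kept fixed while the spins are free to change:

import random

def disordered_energy(spins, couplings, fields):
    bond_term = -sum(J * s1 * s2
                     for J, (s1, s2) in zip(couplings, zip(spins, spins[1:])))
    field_term = -sum(h * s for h, s in zip(fields, spins))
    return bond_term + field_term

N = 10
couplings = [random.choice([-1, +1]) for _ in range(N - 1)]   # one quenched sample
fields = [random.gauss(0.0, 1.0) for _ in range(N)]
spins = [random.choice([-1, +1]) for _ in range(N)]

print("energy of this configuration:", disordered_energy(spins, couplings, fields))

With competing +1 and -1 bonds, no configuration can satisfy every interaction at once, and that frustration is part of what makes the spin glass state so complicated.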

Now, there are two important facts about statistical mechanics that must be kept in mind. Statistical mechanics is concerned with something called the typical behaviour of a system, meaning its most common realisation. Also, it is interested in knowing what happens when the number of units N is really huge, which can be appreciated by remembering that the number of atoms in any macroscopic sample of a material is of the order of $10^{23}$. The nice thing is that in this large-N limit, appropriately called the thermodynamic limit, typical and average coincide and we can always look at averages over our systems.
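The simplest possible illustration of "typical equals average", in Python, uses non-interacting spins (so it is only a caricature of the real statement): the magnetisation per spin of N independent $\pm 1$ spins concentrates around its average, zero, as N grows.

import random

def magnetisation_per_spin(N):
    return sum(random.choice([-1, +1]) for _ in range(N)) / N

for N in (10, 1000, 100000):
    samples = [abs(magnetisation_per_spin(N)) for _ in range(20)]
    print(f"N = {N:>6}: typical |m| is about {sum(samples) / len(samples):.4f}")

The typical deviation shrinks like $1/\sqrt{N}$, so for $10^{23}$ units the difference between a typical sample and the average is completely negligible.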

Statistical mechanics is a giant topic and I will not attempt to cover it in just one post. I will write other posts explaining parts of it over time, but the main idea of many interacting units will be our link with information theory and codes. Don't give up, we are almost there.

Tuesday 18 May 2010

Goodbye Butantan...


There are some things you take for granted. Things that you think will exist forever and that you will always have time to see. The above building, adjacent to the main campus of the University of Sao Paulo, Brazil, is called "Instituto do Butantan", or Butantan Institute. Many buildings in this institute are old. In particular, the one that used to contain the whole biological collection was designed in the 1960s, which means that for most people of my age it had always existed, but the institute itself was founded in 1901. Here is a Wikipedia article in English about it. Until this Saturday, it held one of the largest collections of venomous animals preserved in vitro in the world, with approximately 450 000 specimens of spiders and scorpions and 85 000 snakes.

As I have seen many ignorant comments, to say the least, about this on the internet, let me just clarify that people from the institute did not go around the country hunting snakes to put them in jars. Although this may have happened sometimes, as I cannot say anything about how science was done in 1901, I know that in Brazil we used to send to the institute any snake that was found dead, or that had to be killed because it was attacking or had attacked someone, and I believe that many of the specimens were acquired in that way. And before anyone says it is some kind of horror show, I bet that every child who ever laid eyes on that collection was amazed, not scared.

That said, the institute was also the largest producer of antivenom and vaccines in Latin America. To be honest, it is thanks to it that practically nobody dies of poisoning by a venomous animal in Brazil. Those who live in Brazil know especially well how common it is to find snakes, spiders or scorpions in the gardens, and not that far from the big centres either. And some of them are really venomous.

Everything I am saying is in the past tense because it took only an hour and a half for a fire to destroy more than a century of research. The fire started at 7 o'clock this Saturday morning and was merciless to the building that was used to store the collection of dead animals (neither the one in the above photo, nor the one in the photo to your left, but the one in the photos below). The building was not prepared for a fire, and most specimens were preserved in ethanol, which contributed to the rapid spread of the flames. The specimens that were alive could fortunately be saved before the fire reached them.

Science everywhere in the world survives thanks to the efforts of those who love it. Politicians usually do not give a damn unless it is useful to fulfil their needs for money and power. The public in general may even like it, but considers most of it nothing more than a kind of entertainment, and scientists as people who want to have a fun life using public funds. In Brazil it is even more difficult, and the Butantan was one of those places that became a legend thanks to the efforts of the researchers, and certainly not of the politicians. It seems that last year the researchers asked for a grant of 1 million reais (the Brazilian currency), which is about US$ 500 000 or £ 380 000, to install fire protection equipment in the building. I do not need to say it was not granted.

When I was a kid, almost every child in Sao Paulo had gone to visit the Butantan at some point to see the amazing collections. What kid doesn't want to see snakes, spiders and scorpions? They also had live animals there, and it seems that at least these were saved, and everyone has a fantastic story of seeing huge snakes being fed with small mice. I never went there myself, and now I regret it. Although the institute was more than just one building, that one in particular could be considered the most important. You can reconstruct everything, but the knowledge is lost forever.

The amount of money needed to avoid the loss of something Brazil should be proud of was less than what it costs to pay for an advertisement on a TV channel. Less than what the president the rest of the world loves (Lula) would spend on one of his "receptions". Actually, in modern terms, it was almost nothing. Since that Times article about Brazil, everyone here keeps asking me why I am not coming back, as Brazil is meant to be one of the powers of the future. The answer is simple. The only thing that is getting better in Brazil is the economic indices, and unfortunately I do not have US$ 1 000 000 to invest to take advantage of it.


I must admit that while I am writing this, I am really trying to hold back the tears that are forming in my eyes. Butantan was part of Sao Paulo's, and of course Brazil's, history. One of those few things that we could say was working there and that we were proud of. It is really a pity. A great loss. I doubt I can continue to write about it without indeed crying, so I leave everyone with some more photos and news:




Goodbye...

Monday 17 May 2010

X-Ray Trail


This was taken in Wuerzburg, Germany. It says, more or less: In this house, W.C. Roentgen discovered the X-rays in the year 1895. Roentgen's Nobel Prize medal and much memorabilia, like letters, notebooks and pieces of equipment, can still be seen in the Physics Institute of the University of Wuerzburg. I should have taken some pictures of that as well. Well, the above building is actually close to the city centre, far from the main campus of the university where the physics building is located, but it is still part of the university anyway. You can find more about Roentgen's life in this Wikipedia article.


And then I was coming back from a day out at a historical house with some friends here in the UK when we decided to stop in this small town halfway between there and Birmingham. Imagine my surprise when, after we started walking around, I saw the above plaque. No need to say that my friends didn't get as excited as I did about the find. Here are the bios of William Henry (the father) and William Lawrence (the son). For those who may not know, X-ray crystallography is probably the most important method for probing the structure not only of crystals but of many kinds of proteins, and it was through X-ray measurements that Rosalind Franklin collected the data that allowed the discovery of the DNA double-helix structure.

Yes, I know it's not exactly a trail... but still, I cannot resist taking these pictures when I pass through these places. By the way, if you have any additional suggestions for the X-ray Trail, that would be interesting. Maybe I can even spend some time collecting them (if there are any) in a document. :)

Sunday 16 May 2010

Oscillating Chemical Reactions




I remember learning about these reactions when I was in high school, but I had never seen them. I found this video on an interesting blog named Materials Science. It is written in Greek, but you can get a fairly good translation with Google Translate. The site has many other interesting videos; it is worth taking a look at it.

Saturday 15 May 2010

The Supersolid Mystery

Supersolidity was proposed a long time ago by Andreev and Lifshitz in a Russian paper in 1969, which unfortunately I could never get my hands on, and by Chester and Leggett (the 2003 Nobel Prize winner, precisely for his work on superfluidity and superconductivity) in two different papers in 1970:

[1] Andreev and Lifshitz, Sov. Phys. JETP 29, 1107 (1969)
[2] Chester, Speculations on Bose-Einstein Condensation and Quantum Crystals, Phys. Rev. A 2, 256 (1970)
[3] Leggett, Can a Solid be "Superfluid"?, Phys. Rev. Lett. 25, 1543 (1970)

Actually, Chester does not cite Andreev and Lifshitz's work, which is considered the first one with the idea, but Leggett explicitly cites Chester's work. As I said, I don't have access to the Russian paper, but I could read the other two. They are very short papers and easily readable.

The idea of a supersolid is a very interesting one, derived from the phenomenon of superfluidity. Superfluidity is a very low temperature state of a fluid, the most common example being helium, in which the viscosity disappears entirely. If a superfluid is put inside a rotating vessel, then as long as the rotational velocity stays below a critical velocity it will stay at rest in the laboratory frame and will not rotate, as there is no friction between the fluid and the walls of the container. Superfluidity is due to a Bose-Einstein condensation of the atoms in the fluid. They all condense into the same quantum state, creating all the interesting effects that you can partially see in this video:





Superfluidity is also related to superconductivity, with the difference that superfluidity is a phenomenon that occurs for bosons, while superconductivity occurs for the "electron fluid" that lives in every metal. The idea of supersolidity expressed by Chester was based on the form of the wave function for the system, which he speculated could support both crystalline order and superfluidity, with the most probable system being helium-4 (2 protons and 2 neutrons). At the end of the paper he speculates that disorder must play a crucial role, with vacancies being the key to the superfluid properties. Vacancies are places in the crystal lattice which were supposed to be occupied by an atom but in fact are empty. It is quite interesting that at this early point disorder was already proposed to play a major role, something that recent experiments seem to corroborate.

Then, in his 1970 paper, Leggett gives a more detailed description of the supersolid, although he still doesn't use this name. In the paper he describes a superfluid solid as one with a fraction of its mass condensed in the form of a superfluid and proposes to use what he called the non-classical rotational inertia (NCRI) effect to test it. NCRI is a direct result of the zero viscosity in a superfluid. If you enclose the solid in an annulus and rotate it, the superfluid fraction, by its own superfluid nature, is expected not to rotate with the container. This results in a decrease in the moment of inertia of the solid, which gives the effect its name.

In 2004, Kim and Chan used Leggett's basic idea to test for NCRI in a torsional oscillator (TO). A TO is an annulus with the wannabe supersolid inside, which oscillates at a natural resonant frequency that is basically inversely proportional to the square root of the moment of inertia of the annulus when the damping is small. If the material inside the TO is a supersolid, the experiment is supposed to measure a higher oscillation frequency than for the normal material, as the moment of inertia should be lower. This change in the resonant frequency was claimed to be measured by Kim and Chan in bulk helium-4 in two papers, in Nature and Science (everybody's dream, I know...), listed below; right after them I sketch a toy estimate of how big such a frequency shift would be:

[4] Kim and Chan, Probable observation of a supersolid helium phase, Nature 427, 225 (2004).
[5] Kim and Chan, Observation of Superflow in Solid Helium, Science 305, 1941 (2004).
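Just to get a feeling for the numbers, here is the toy estimate in Python. The moments of inertia and the supersolid fraction are completely made up by me; the only physics in it is the scaling $f \propto 1/\sqrt{I}$ mentioned above.

import math

I_cell = 1.0          # moment of inertia of the empty cell (arbitrary units)
I_helium = 0.05       # contribution of the solid helium sample
ncri_fraction = 0.01  # assumed fraction of the helium that decouples

I_normal = I_cell + I_helium
I_super = I_cell + (1.0 - ncri_fraction) * I_helium

f_ratio = math.sqrt(I_normal / I_super)   # f_supersolid / f_normal
print(f"relative frequency increase: {(f_ratio - 1.0) * 1e6:.0f} ppm")

Even with these generous numbers the shift is only a few hundred parts per million, which gives an idea of the precision these torsional oscillator experiments need.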

I did not have access to these papers, so everything I am saying about them is based on what is written in the papers that cite them. Kim and Chan observed a possible transition to a supersolid state at a temperature of about 200 mK. These observations were followed by a huge number of other experiments with He-4, with some conflicting results. Although there was some doubt in the beginning about whether the measured NCRI was really due to supersolidity, it seems now that other explanations are not very probable. These subsequent experiments also seem to corroborate Chester's intuition that the higher the disorder in the sample, the higher the supersolid signal. Usually, disorder appears in the form of vacancies, which was Chester's original idea, or dislocations, which are a kind of defect where there is some mismatch in the form of the crystalline lattice. A dislocation induces a mismatched growth of the crystal around it. The picture is an example of a screw (for obvious reasons) dislocation in a silicon carbide crystal


and was taken from a 2007 article about supersolidity named Supersolid, with a Twist, commenting on a paper published in Physical Review Letters. I have noticed that many blogs on the internet promote the above picture as being of a supersolid just because it appeared in that article. THIS IS NOT A SUPERSOLID AND CERTAINLY NOT HELIUM. It's a regular crystal with dislocations. The article seems to indicate that dislocations, rather than vacancies, are involved in the supersolidity of He-4 by forming a kind of pathway along which atoms can move as a superfluid. Actually, experiments have observed that the dependence of the stiffening of solid helium on the macroscopic parameters (temperature, pressure) has the same shape as the dependence of the NCRI. Stiffening (the effect of a solid becoming "harder" in some sense) is related to the mobility of the dislocations in the crystal. These defects can move around the crystal, and the more mobile they are, the more malleable the solid is. When the dislocations are somehow pinned in place, which can occur in the presence of impurities to which they attach, the solid becomes stiffer. Therefore, the relation between stiffness and NCRI indicates that dislocations, rather than vacancies, are the important kind of defect involved in the onset of supersolidity.

I am writing all this stuff about supersolidity because of a recent article about it to which I have already posted a link (see Physics - Spotlight of Exceptional Research)

[6] Sébastien Balibar, Is there a true supersolid phase transition?, Physics 3, 39 (2010)

where the author comments on the recent Physical Review Letters paper

[7] Oleksandr Syshchenko, James Day, and John Beamish, Frequency Dependence and Dissipation in the Dynamics of Solid Helium, Phys. Rev. Lett. 104, 195301 (2010)

The paper is temporarily free for download, so if you hurry you can still get it at no charge. What Syshchenko et al. measured was the dependence of the stiffness on temperature for a range of different oscillation frequencies. That is not possible with the TO technique, as it relies on taking measurements exactly at the resonant frequencies of the oscillator. In the experiment, Syshchenko et al. used a sample of solid helium inside a container in which one of the surfaces was a piezoelectric material that applies a strain to the sample according to the electric current applied to it.

The discussion by Balibar is a good summary of the work and also offers some opinions about its results. It seems that the experiment indicates that what was thought to be a phase transition to a supersolid state at 200 mK may not be one, being instead what they call a crossover. It is not completely clear to me what that means in this case. Usually a crossover is a change of universal exponents, the technical quantities that characterise phase transitions, from one universality class to another. In other places I have seen crossover used as a synonym for a second-order phase transition, something that cannot be the case here, as the authors say explicitly that their crossover is not a phase transition, which I think should include second-order ones, although I may be wrong. I confess I am a bit confused here, and if someone can clarify the issue I would be glad.

In any case, they argued that the true phase transition, according to their definition, must be located somewhere below 55 mK and not around 200 mK. I suppose this must mean that there is no divergence in the susceptibility, and that is probably also related to the failure to fit power laws close to the 200 mK temperature. However, technically, this may also be called a phase transition, but of a higher order...

Supersolidity is still a very ill-understood phenomenon, and there is no good theoretical explanation that can account for all the observed experiments. Each time a new experimental piece of this puzzle appears, it shows that our theoretical understanding is still poor. I guess it is a bit tacky to say this, as it has already been said many times, but it is great to see that there are still so many things that we do not understand.

Friday 14 May 2010

Physics - Spotlight of Exceptional Research

[Picture taken from the article: Is there a true supersolid phase transition?]

The American Physical Society (APS) is well known for probably the most traditional series of physics journals, but recently (well, more or less recently) they made available on their website a feature that I have been enjoying a lot, which goes by the name Physics - Spotlight of Exceptional Research.

This webpage is regularly updated with headlines about notable papers published in the Physical Review journals. The best thing is that the papers are then commented on by a specialist in the area and made available for free download! The accompanying commentary is also downloadable in PDF format if you want. This is a fantastic feature, especially for those who for any reason don't have access to a paid subscription to the journals. The four headlines currently highlighted on the front page are:
  • Blurry vision belongs to history, Hans Blom and Jerker Widengren: Making simple modifications to laser-scanning microscopes—like those found in many laboratories—can beat the classical diffraction limit by a factor of 2.
  • Ultrafast computing with molecules, Ian Walmsley: Vibrations of the atoms in a molecule are used to implement a Fourier transform orders of magnitude faster than possible with devices based on conventional electronics.
  • Is there a true supersolid phase transition?,  Sébastien Balibar: New measurements of the rigidity of solid helium show that the emergence of supersolidity is actually a crossover, rather than a true phase transition.
  • The tetrahedral dice are cast … and pack densely, Daan Frenkel: Magnetic resonance images of tetrahedral dice show a density of random close packing, in agreement with recent calculations.
And yesterday I received an email from them about some videos they have made available on this webpage as well. They can be watched here. Currently they show three videos from a meeting that happened on the 17th of March:
  • Optomechanical Devices, Florian Marquardt: The interplay of light and mechanical motion on the nanoscale has emerged as a very fruitful research topic during the past few years. Optomechanical systems are now explored as ultrasensitive force and displacement sensors. By using light to cool a mechanical system to its quantum ground state, researchers hope to explore the foundations of quantum mechanics in a new regime.
  • Spintronics, David Awschalom: The spin-orbit interaction in the solid state offers several versatile all-electrical routes for generating, manipulating, and routing spin-polarized charge currents in semiconductors. Recent experiments have explored several guises of this effect for the nascent field of spintronics. These include new opportunities for making the transition from fundamental studies to a spin-based technology for classical and quantum information processing.
  • Iron Age Superconductors, Michael Norman: A new class of high-temperature superconductors has been discovered in layered iron arsenides. In these materials, magnetism and superconductivity appear to be intimately related. Results in this rapidly moving field may shed light on the still unsolved problem of high-temperature cuprate superconductivity.  
I haven't had the time to watch any of the videos yet, but I intend to do that over the weekend. They seem quite interesting. If someone has watched them and wants to comment a bit, you will be very welcome.

Thursday 13 May 2010

Jay Walker's Library of Human Imagination


Jay S. Walker is an American internet entrepreneur. This is a video of the library he constructed for his amusement. The items in the library are originals. Isn't it good being rich? Via Open Culture.


Read more about it here:

Wednesday 12 May 2010

Statistical Physics and Error-Correcting Codes #1

As I have been working on the statistical physics of error-correcting codes since 2006, I guess it is only fair to write a bit about that. Although it may not seem as appealing as cosmology, strings or the LHC, there is something fundamental and interesting behind it. Besides, since I started to talk about quantum codes, I thought I should write something more basic about classical error correction and information theory, as they are the basis for their quantum versions.

Since I will have to introduce some information theory first, I thought it would be a better idea to break this post down into two or three parts, depending on how it goes. The standard reference for information theory is Cover and Thomas's book:

[1] Elements of Information Theory, Cover and Thomas

The basic idea of error-correction theory is very simple. The so-called sender wants to transmit some information, be it a word, a song or a picture, to someone usually called the receiver. However, some part of that information may be lost during the transmission. We say that the information is being transmitted through a noisy channel. To give an unorthodox example, if you burn a CD (like the one in the figure above) with some songs to give to a friend but accidentally scratch the surface of the disc, some of the data will not be read correctly by the laser head. What do you do to avoid losing the information?

Every human language already has the basic feature required for error correction, something we call redundancy. Can you r**d th*s phr**e withou* some of t*e le**ers? You can, because every language has some degree of redundancy, which means that information is encoded with many more symbols than are really needed. Because of this, if you lose some of the symbols, you can still manage to understand the original message. In the case of a language, redundancy is a very intricate thing, for even if entire words are missing from a document, you can still retrieve them.

Erasing a symbol (or several), like I did above, is not the only way of corrupting a message. Letters can be scrambled or exchanged for others and, unlike in the example I gave, they may be corrupted without you even knowing exactly where it happened.

Error correction is the driving force behind the digital revolution, simply because binary messages are much easier to correct, and so error correction works much better for them. For digital messages, which are simply composed of 0s and 1s, errors can be reduced to two types: erasures, when you simply cannot read which symbol is there, and flips, when a 0 becomes a 1 and vice versa.

The simplest, most naive technique of error correction is called a repetition code. If your message is 1, then you just repeat it many times, like 111. If the amount of errors is small, for instance at most 1 in each block of 3 symbols is corrupted in the example above, you can retrieve the original message by looking at the majority of the bits. It means that if you receive 110 instead of 111, you know what the correct message is just by looking at which symbol is in the majority. If the noise (the amount of errors) is larger, you need to add more redundancy. For example, if up to two of the transmitted copies of a bit can be flipped, you need to use at least 5 copies of the symbol to be able to retrieve the original message. The result of the original message plus the redundancy is what we call the codeword.
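Here is a minimal Python sketch of exactly this scheme, with 5 copies per bit so that up to 2 flipped copies per block can be tolerated (the message and the flipped positions are arbitrary choices of mine):

def encode(bits, n=5):
    return [b for bit in bits for b in [bit] * n]

def decode(received, n=5):
    blocks = [received[i:i + n] for i in range(0, len(received), n)]
    return [1 if sum(block) > n // 2 else 0 for block in blocks]

message = [1, 0, 1]
codeword = encode(message)           # 15 bits: 5 copies of each of the 3 message bits
corrupted = codeword[:]
corrupted[0] ^= 1                    # flip two of the five copies of the first bit
corrupted[3] ^= 1
print(decode(corrupted) == message)  # True: the majority vote still wins

Flip three copies of the same bit, however, and the majority vote decodes it wrongly, which is the whole limitation of the scheme.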

Of course you have already noticed that to protect against many errors using the repetition code we need to repeat the symbol many times. The ratio R=N/M of the size N of the original message to the size M of the codeword is called the code rate. The code rate is always between zero and one, and we obviously want to make it as close to one as possible. For example, in our repetition codes the first code rate was 1/3 and the second was 1/5. It is obvious that the higher the redundancy, the lower the code rate. This is the main dichotomy of codes: we want to add as much redundancy as possible but keep the codeword as small as we can.

It's easy to see that repetition codes give code rates that are too low; they're not very efficient. The next level of the game is linear codes. A codeword in a linear code is simply the result of multiplying the original N-sized (binary) message m by a binary matrix G of size M×N to get the M-sized codeword t. Mathematically speaking, we just have t=Gm. G is called the generator matrix. Okay, but how does it help? Well, there is another piece missing, and that is the star of the play: a matrix A called the parity-check matrix. A is also binary and is chosen in such a way that AG=0. Now, if you apply the parity-check matrix to the codeword, you can easily see that the result is zero.

I need to make an observation here. All the operations with these matrices and vectors are supposed to be over the binary field, a.k.a. the Galois field of order two, or GF(2). I am saying this because things can be more or less generalised to higher-order finite fields (another name for Galois fields), but that is too far ahead of us now.

The parity-check matrix has this name because it can be seen as a constraint on t that forces some chosen combination of its entries to add up to zero (addition mod 2, for we are still talking about the binary field). A binary sum of this type has only two possible results, 0 or 1. As we can associate them with a number being even or odd, we end up calling this value the parity of the sum, and then we say that we need to check that the parity is 0.
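To see the machinery at work, here is the 3-fold repetition code from before rewritten as a linear code in Python (with numpy just for the matrix products; the choice of code is mine, picked because it is the smallest possible example):

import numpy as np

G = np.array([[1],
              [1],
              [1]])          # generator matrix: 1-bit message -> 3-bit codeword
A = np.array([[1, 1, 0],
              [0, 1, 1]])    # parity-check matrix, chosen so that A G = 0 mod 2

print("A G mod 2 =\n", A.dot(G) % 2)     # the zero matrix, as required

m = np.array([1])                        # a 1-bit message
t = G.dot(m) % 2                         # codeword t = G m, here [1, 1, 1]
print("codeword t =", t)
print("parity checks A t =", A.dot(t) % 2)   # [0, 0]: every check is satisfied

Each row of A just says that two neighbouring bits of the codeword must be equal, which for this tiny code is all the parity checking there is.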

Let us now model the noise in the channel, where by channel I just mean the medium through which the information is being transmitted. Consider the simplest possible model, which is called the binary symmetric channel, where each bit of your message can be flipped independently with probability p. To flip a binary number you just need to add 1 to it, for in the binary field addition is defined by 1+1=0, 1+0=1, 0+0=0 and 0+1=1. So we can model the noise in our transmitted codeword by adding to it a vector of zeros and ones, a binary vector, that we call the noise vector and in which each bit has probability p of being one. Let us now consider the received message, which we will call r. In our notation, we then have

r=t+n, (1)

One thing you should notice is that if we add the noise vector to the received vector, we get back the transmitted codeword, clean and uncorrupted. Why? Because of the addition rule I gave above. Note that any number added to itself results in zero. That means that n+n=0 and r+n=t+n+n=t, and therefore that if we can discover n we can recover our codeword. Now, you have probably already noticed that this does not help, because I am just restating the problem in a different way. Just that. We still have nothing from which we can recover the message. But now, apply A to both sides of equation (1). Then we end up with what we call the error syndrome:

z=Ar=An. (2)

You may or may not have noticed it already, but now we are talking! The parity-check matrix is something that both sender and receiver know. The received message is what the receiver has, after all. Therefore the receiver can do the above operation and calculate the syndrome. With that, equation (2) gives you a set of linear equations that can in principle be solved to find n and therefore the original codeword t. Of course there are many subtleties; I will talk about them a little in the next posts. The bottom line, however, is that parity-check codes are much more efficient than repetition codes. Actually, they seem to be at the moment among the most efficient, if not the most efficient, error-correcting codes.
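As a slightly less trivial illustration of syndrome decoding, here is the classic [7,4] Hamming code in Python. Only its parity-check matrix is needed: its j-th column is the number j written in binary, so the syndrome of a single flipped bit literally spells out the position of the error. (The specific codeword and the flipped position are arbitrary choices of mine.)

import itertools
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column j (1-based) is j in binary.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome(vec):
    return tuple(H.dot(vec) % 2)

# Any vector with zero syndrome is a codeword; pick a non-trivial one by brute force.
codeword = next(np.array(v) for v in itertools.product([0, 1], repeat=7)
                if any(v) and syndrome(np.array(v)) == (0, 0, 0))

noise = np.zeros(7, dtype=int)
noise[4] = 1                            # the channel flips bit 5 (counting from 1)
received = (codeword + noise) % 2

z = syndrome(received)                  # equals H n, the binary representation of 5
position = int("".join(map(str, z)), 2)
print("syndrome", z, "-> error at bit", position)

received[position - 1] ^= 1             # flip it back
print("recovered the codeword:", np.array_equal(received, codeword))

Seven transmitted bits protect four message bits against any single flip, a code rate of 4/7 instead of the 1/3 of the repetition code that corrects the same single error.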

What is the relationship with physics? We are not ready to appreciate it yet, but just notice that we have here a system where each variable can take two values, more or less like an electron spin, and the variables must obey constraints. Each constraint relates one variable to the others, or you may say that these variables "interact" in some sense. I will leave you to think a bit about that before spoiling the surprise completely.

Tuesday 11 May 2010

Errors as Anyons




[Picture taken from: songshuhui.net]

As promised, I will explain here how anyons are related to the errors in the toric code. I will rely heavily on what is in chapter 9 of Preskill's online lecture notes on quantum computation that can be found here:

[1] Chapter 9 - Topological Quantum Computation, Preskill, J.


Actually, I found the introduction to anyons in Preskill's lectures much more pedagogical than the one in the paper by Rao that I linked to previously. If you do not remember what the heck I am talking about, I suggest you go back to the posts Kitaev's Toric Code and Anyons to refresh your mind. In the first of these posts, I explained that the toric code is defined by a set of stabilizers, namely plaquette and star operators on a square lattice on the surface of a torus. The spins, each one being just the mental image of a two-level quantum system, live on the edges of the lattice and are responsible for storing the quantum state we want to preserve.

Due to the characteristics of the set of stabilizers, we saw that the whole torus can only store 2 qubits, no matter what size it is. These 2 qubits correspond to the 2 non-trivial topological cycles of the torus, i.e., closed loops that cannot be continuously shrunk to points.

Actually, the way I described it, our torus is what is called a quantum memory. Its aim is to keep the two qubits stored for as long as we want, as if it were a quantum hard disc. It turns out that errors in the encoded message occur when the spins are changed from their original configuration. As the encoded message is defined as the ground state of the Hamiltonian, which itself is defined as minus the sum of all star and plaquette operators, when one qubit suffers an error the energy increases by two for each operator that changes its measured value. This is because, being products of spins, the operators have only two possible eigenvalues, which are also the possible measured values, +1 and -1.

When an error occurs, be it a Z or an X error, the measured value of each of the two operators of the relevant kind (A or B) that contain that edge becomes -1. Since each flipped operator takes its contribution to the energy from -(+1) to -(-1), a change of 2, the total energy increases by 2×2=4. Well, many times the Hamiltonian is rescaled so that the energy difference has a specific value, but that is not important now.

Before I continue, there is one more piece of physics that I need to talk about: the Aharonov-Bohm effect. This is a quantum effect in which a charged particle's wavefunction gains a phase when the particle goes around an infinite solenoid in which an electric current is flowing. This infinite solenoid is just the physical way of constructing what is called a flux tube, meaning that inside the solenoid there is a magnetic field that never leaks outside. It has an effect on the particle because nowadays we know that the vector potential, which is the spatial part of the gauge field associated with electromagnetism, is more fundamental than the electromagnetic field, and the vector potential does exist outside the flux tube.

Back to the toric code, the beautiful thing is that whenever a Z error occurs, which changes the sign of the relative phase between |0> and |1> in the quantum state of the qubit, this can be seen as a pair of "charged" particles being created at the two vertices connected by the link. Similarly, when an X error occurs, which turns the |0> in the state of the qubit into |1> and the |1> into |0>, we can see this as the creation of flux tubes in the two plaquettes which have this link as a common boundary. Flux tubes and charges are not anyons if considered in isolation, but as in the Aharonov-Bohm effect, when a charge goes around a flux tube in a closed path, a loop, it turns out that its wave function is multiplied by -1. Bosons' and fermions' wavefunctions would not gain a phase if we moved the particle in a closed loop taking it back to its initial place. Conclusion: the errors are actually anyons (when one kind is moved around the other).
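For those who like to see the bookkeeping, here is a toy Python sketch of how a short string of Z errors creates a pair of charges at its endpoints. The lattice size, the labelling of the edges and the particular error string are all my own arbitrary choices; the only rule used is that a vertex is "charged" when an odd number of its four edges carry an error.

import numpy as np

L = 4
h_err = np.zeros((L, L), dtype=int)   # Z errors on horizontal edges h[i, j]
v_err = np.zeros((L, L), dtype=int)   # Z errors on vertical edges v[i, j]

# A short open string of Z errors: two consecutive horizontal edges in row 1.
h_err[1, 1] = 1
h_err[1, 2] = 1

def vertex_charges(h_err, v_err):
    charges = np.zeros((L, L), dtype=int)
    for i in range(L):
        for j in range(L):
            # the four edges touching vertex (i, j): right, left, down, up (periodic)
            parity = (h_err[i, j] + h_err[i, (j - 1) % L]
                      + v_err[i, j] + v_err[(i - 1) % L, j]) % 2
            charges[i, j] = parity
    return charges

print(vertex_charges(h_err, v_err))

Only the two endpoints of the error string, vertices (1, 1) and (1, 3), show up as charges; the vertex in the middle touches two error edges and stays neutral. Growing the string just moves the endpoints further apart, which is exactly the picture of a pair of particles being dragged across the torus.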

Of course I am cheating a lot because I am not explaining the details, but for those who want to follow them, Preskill's notes and Kitaev's original paper are the recommended readings.

But the story does not end here. What about error-correction? That is the central aim, right? How does it work?

Error correction is accomplished in the toric code by bringing two anyons together. What happens is that when they encounter each other they annihilate and, depending on how this is done, the error is corrected. The important thing here is that the anyons need to be annihilated in such a way that the path traced by them is a trivial cycle on the torus. If they annihilate after going around a non-trivial cycle, then the error remains! You can even annihilate anyons from different initial pairs, as long as their total paths do not involve non-trivial loops.

You may quickly perceive that the solution to controlling errors in our code is: do not let the anyons get too far away from each other once they are formed! This way, they will not have gone around dangerous cycles of the torus and everything will be alright. So, you only need to increase the size of the torus, right? Then the anyons would take too long to travel around a non-trivial loop.

It turns out that it's not so simple (in quantum mechanics, it never is...). In the following very nice (but articleless) paper

[2] Can one build a quantum hard drive? A no-go theorem for storing quantum information in equilibrium systems, Alicki and Horodecki [arXiv:quant-ph/0603260v1]

the authors argue that at non-zero temperature, no matter the size of your system, you will not be able to get rid of the errors. They give many arguments; in particular, they say that if this were possible, the system would violate the second law of thermodynamics. In essence, the only states that could store some information would be classical states, which cannot store quantum superpositions.

Their conclusion is that you cannot have a quantum hard drive, store information in it and just leave it there. The information will always leak away unless you spend energy to keep it in place. This is dismaying, as it shatters the dream of quantum information storage. However, we know that, being mortals, we only need to store information for a reasonable amount of time. For instance, I am pretty sure I don't care about what is going to happen to my files after the next 300 years. So the idea is to try to increase the storage time reasonably. I've been talking to the quantum information group in Leeds about the subject and we have had some nice ideas, but that is something I will write about on another occasion.

Monday 10 May 2010

Condensed Matter Blogs

[Picture taken from the San Diego State University Website]

If you follow physics blogs on the internet, you probably read Cosmic Variance, Backreaction, Not Even Wrong, The Reference Frame etc. That's okay, I read them too. They are basically Cosmology and High Energy Physics blogs, though, and there is a part of physics that is probably not well represented there: Condensed Matter Physics.

For those to whom this term may seem unfamiliar, condensed matter is the part of physics that deals with how matter behaves when you have a lot of it. Superconductivity, superfluids, metals, crystals... they are all subjects of condensed matter. These are thrilling topics as well, and there are many wonderful things in this area that do not appear in the above-cited (and side-listed) blogs. That's why I am listing two very interesting blogs about the subject below, and I myself will try to write more articles in this area. The two blogs are:
  1. Nanoscale Views (http://nanoscale.blogspot.com/), by Douglas Natelson
  2. Condensed Concepts (http://condensedconcepts.blogspot.com/), by Ross H. Mackenzie
Take a look at them. Browsing around these blogs you will understand how exciting condensed matter can be. Many important concepts in physics have their origins in condensed matter, like symmetry breaking for example, and the field is full of unsolved and intriguing problems. I will finish this flash post with a list of subjects that have been around for some time but are still hot today, and that you will probably see in those blogs and in mine as well:
Look around and have fun!

Saturday 8 May 2010

Symphony of Science





The above video, which I found while reading the Portuguese blog BioTerra, is one of a series from a website named Symphony of Science, which according to their own description
is a musical project headed by John Boswell designed to deliver scientific knowledge and philosophy in musical form. Here you can watch music videos, download songs, read lyrics and find links relating to the messages conveyed by the music.

The project owes its existence in large measure to the wonderful work of Carl Sagan, Ann Druyan, and Steve Soter, of Druyan-Sagan Associates, and their production of the classic PBS Series Cosmos, as well as all the other featured figures and visuals.
Enjoy.

Friday 7 May 2010

50 Years of Laser




1960, exactly 50 years ago, is officially the year the laser was born. In that year, Theodore Harold Maiman constructed the first working ruby laser at Hughes Research Laboratories in the USA. Although the historical paper

[1] Maiman, T.H., "Stimulated Optical Radiation in Ruby", Nature 187, 493-494 (1960)

published on the 6th of August has only one author, I would think that he probably had a team to construct it and did not do everything alone. About the paper, you can find a nice account, named The First Laser, written by the other laser pioneer Charles H. Townes, who actually won a Nobel Prize for his own work on masers and lasers.

In Townes's account, you can read the famous quote about the laser being a "solution looking for a problem". The meaning is not that nobody was thinking about possible applications, but that the applications were not the objective of the research. This idea seems heretical today (research without a practical objective?), but that is only a consequence of the New Dark Age mentality that is becoming stronger each day. I will write a post about that soon, but just look around today and see how your life would be without the laser before criticising pure research, although I know that this advice will unfortunately just be ignored by most people...

Back to better things, the festivities include Physics World giving away a sample copy of its commemorative issue on the 50 years of the laser on their webpage, Physics World magazine: May 2010 special issue, and New Scientist publishing a cool picture gallery about the subject, whose first photo is the one I put above, a picture of the first laser taken by Kathleen Maiman (who surely must be a relative of Theodore, but I am not sure how they are related). The Nobel Prize foundation also has special pages about the laser called Laser Facts. And finally, there is also an editorial in Nature about the laser: Laser-guided impact.

Thursday 6 May 2010

Albert Fert Answers




The above YouTube video is one of a series of videos uploaded by the Nobel Prize Foundation to their YouTube channel, in which Albert Fert answers questions asked by many different people (I have even seen a Second Life avatar asking one in one of the videos). The series can be watched here: Answers from Albert Fert.

Albert Fert won the Nobel Prize in Physics in 2007, together with Peter Gruenberg, for the discovery of the Giant Magnetoresistance (GMR) effect in 1988, which he explains in a very simplified way in the video embedded above. The Nobel Prize website is quite good, and you can find not only a lot of information about him on the 2007 Physics Prize page but also a downloadable video of his Nobel Prize Lecture.

GMR is the effect whereby the electrical resistance of a material decreases in the presence of a magnetic field. It is "giant" because the resistivity can drop by up to 80%. The effect is very well explained in the Wikipedia article linked above and by Albert Fert himself in the videos (in particular the one I put in this post). It is used commercially in the read heads of hard disk drives, which is actually the application that made this discovery so important.