Tuesday, 12 October 2010

Graphene




Via physicsworld.com

Sunday, 10 October 2010

About Testing String Theory by Analogy


I like string theory, but as sad as it may seem, we have to face the fact that there is still no experimental test of it. And, as desperate as some people may be not to have wasted their lives (which is actually an unjustifiable fear), if string theory turns out not to be falsifiable, it is not science, but just a bookkeeping device. That's true. Without any falsifiable prediction, string theory becomes an extremely elegant and compact way to express our knowledge of nature to date. If you are fine with that, no problem, but sincerely, I prefer not to assume that there will never again be, even in principle, experiments whose explanation requires new physics. But I may be wrong.

But what this post is really about is alleged tests of string theory based on mathematical analogies. I can't deny that supersymmetry is a non-trivial prediction. If it turns out to be true, that's a point for string theory. But other theories can be supersymmetric too. The other day, I heard about a paper applying string theory to quantum computing, and these days I have heard a lot about holographic superconductors and AdS/CFT applied to condensed matter.

However, people must remember that applying the methods developed in one theory to another does not provide a proof of the former. The fact that you can use Feynman diagrams in condensed matter and they work for explaining superconductivity does not mean that QFT is proved by it; experiments prove it. There is a difference between the mathematical methods developed to deal with a theory and the theory itself. Some people will say that there isn't, but that is dead wrong! The power of mathematics comes from abstraction, and this allows the same tools to be applied to different problems. But physics is not only mathematics; it depends on principles that are derived from and tested by experiments.

I am not saying that string theory is not science. On the contrary. It is a possible hypothesis which is being explored. However, it is not a proved theory no matter what the most intelligent people in the world say. Nature usually cares very little about what intelligent people think. There are many examples of it in human history. And the bottom line is that, even if the mathematics of string theory helps other theories, that does not count as a verification of string theory.

Saturday, 9 October 2010

Nobel Prize of Physics for Graphene


I know news runs fast through the web and everyone knows by now that this year's Nobel Prize in physics went to Andre Geim and Konstantin Novoselov, from the University of Manchester here in the UK, for the discovery/invention of graphene. As usual, it was a busy week and the only thing I had time to do about it was to put together a gallery of graphene pictures on my other blog, Sciencescapes. And probably everyone also knows by now that Andre Geim won the Ig Nobel Prize in physics in 2000 for levitating a frog in a strong magnetic field. The frog paper is free to read: Of Flying Frogs and Levitrons, by M.V. Berry and A.K. Geim, European Journal of Physics 18, 307 (1997).

Graphene is a very interesting material. It is the closest you can get to a two-dimensional sheet, for it is a carbon sheet just one atom thick. The picture above is an artistic rendering you can find on Wikipedia. It shows that the carbon atoms in graphene form what we call a honeycomb lattice. I should have written about that in this blog before, because I always thought these guys would win a Nobel soon, but now I cannot prove it. It was a fairly logical guess: if you check the condensed matter part of the arXiv daily, you will see that it is hard to find a day without a paper about graphene.
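Since the honeycomb structure is just a triangular lattice with a two-atom basis, the atom positions are easy to generate numerically. A minimal sketch (the lattice vectors and helper names below are my own illustrative choices, with numpy assumed):

```python
import numpy as np

# Honeycomb (graphene) lattice: a triangular Bravais lattice with a
# two-atom basis (A and B sublattices). With nearest-neighbour
# distance a, the vectors below are one standard choice.
a = 1.0
a1 = np.array([1.5 * a, np.sqrt(3) / 2 * a])   # lattice vector 1
a2 = np.array([1.5 * a, -np.sqrt(3) / 2 * a])  # lattice vector 2
delta = np.array([a, 0.0])                     # offset of B relative to A

def honeycomb_sites(n):
    """Positions of the A and B sublattice atoms of an n x n patch."""
    cells = np.array([i * a1 + j * a2 for i in range(n) for j in range(n)])
    return cells, cells + delta

A, B = honeycomb_sites(3)
# Each B atom sits exactly one bond length away from its A partner.
print(np.allclose(np.linalg.norm(B - A, axis=1), a))  # True
```

Plotting the two arrays as scatter points reproduces the familiar chicken-wire pattern of the picture above.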

Because it is practically two-dimensional, graphene has many interesting physical properties. In particular, interesting at least for physicists, it exhibits an anomalous quantum Hall effect. Also, being 2D, graphene can support anyonic quasi-particles, elementary excitations whose statistics are neither bosonic nor fermionic (see the previous post Anyons). As an extra bonus, graphene appears to be one of the strongest materials that exist, with a breaking strength 200 times greater than that of steel.

Geim, Novoselov and others wrote a nice review on graphene: The electronic properties of graphene, Castro Neto et al., Reviews of Modern Physics 81, 109 (2009). There is also this other paper by Peres: The transport properties of graphene: An introduction, Reviews of Modern Physics 82, 2673 (2010). Unfortunately, you need a subscription to access them.

As I lost the opportunity to predict the graphene Nobel, this time I will take the risk of making the (again fairly obvious) prediction that soon the Nobel will be given to the people who discovered that the expansion of the universe is accelerating. One of the groups was called the High-z Supernova Search Team, and the discovery came in 1998. Adam Riess was the leader of the team, so he is probably one of those who will win the prize. That discovery was a complete surprise at the time, as everyone was expecting a decelerating universe. It also led to many famous hypotheses that try to explain it, like dark energy and quintessence.

Tuesday, 5 October 2010

Viscosity




Biological Physics (Updated Edition)
I have just started reading Chapter 5 of the book Biological Physics by Philip Nelson, which is called Life in the Slow Lane: The Low Reynolds-Number World. The book is an undergraduate introduction to biophysics which is extremely well written and very pedagogical. The word undergraduate, however, just means that the mathematics in the book is not very advanced, for it contains a lot of physical insights that are extremely interesting and valuable for any physicist.


In this chapter, Nelson writes about the difference in the relative importance of viscosity for macroscopic and microscopic objects and its effect on the world of cells. At the very beginning, he explains the experiment in the video above, the only difference being that in the book there is only one coloured drop.

In the experiment, the container is composed of two concentric cylinders with corn syrup, a very viscous fluid, filling the space between them. As you can see, drops of coloured syrup are put in this space. You can tell how viscous the fluid is from the fact that the drops don't even move once they are there. Then the handle is turned and the internal cylinder is rotated a number of times. The fluid is dragged by the rotation and the drops apparently mix. The magic happens when the cylinder is rotated in the opposite direction and, miraculously, the drops unmix and reappear almost intact.

The explanation of how this can happen is quite interesting and is given in Nelson's book. What happens is that the drops never really get mixed, because the fluid is so viscous that there is no turbulence. Without turbulence, there is only a very organised laminar movement of the fluid, not the disordered wandering of molecules that causes mixing. The molecules actually stop moving (or almost) when the rotation stops. When the rotation is performed in the opposite direction, the molecules simply retrace their previous steps and come back to where they were in the beginning. Of course, this is not perfect, and you can see that the drops end up with fuzzy boundaries where some diffusion and mixing did happen, but that is negligible.
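This kinematic reversibility is easy to mimic with a toy calculation: if each fluid shell between the cylinders rotates by an angle that depends only on its radius, reversing the cranking retraces the motion exactly. A sketch (the 1/r² angular profile below is an arbitrary illustrative choice, not the real Couette profile, and I ignore the small diffusion that blurs the real drops):

```python
import numpy as np

rng = np.random.default_rng(42)

# Dye particles sitting in the gap between two concentric cylinders.
n = 500
r = rng.uniform(1.2, 1.8, n)             # radial positions (never change)
theta = rng.uniform(0.0, 2 * np.pi, n)   # angular positions
start = np.column_stack([r * np.cos(theta), r * np.sin(theta)])

def shear(theta, r, turns):
    # In perfectly laminar circular flow, each fluid shell rotates by an
    # angle depending only on its radius, so particles keep their radius
    # and there is no turbulent mixing at all.
    return theta + turns * 2 * np.pi / r**2

theta_mixed = shear(theta, r, turns=5)        # crank the handle: drops smear out
theta_back = shear(theta_mixed, r, turns=-5)  # crank it back: motion retraces

end = np.column_stack([r * np.cos(theta_back), r * np.sin(theta_back)])
print(np.allclose(start, end))  # True: the smeared dye returns to its drop
```

In the intermediate state the particles are spread all around the annulus, yet the reversed shear brings every one of them home.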

The most interesting part of the discussion in Nelson's book comes afterwards, where he explains that water is very viscous from the point of view of bacteria, so this kind of effect happens in the microworld. This makes moving a problem for them: if they just swing some kind of structure back and forth, they will never get anywhere, because the fluid will simply retrace the previous movement. I stopped somewhere around there. If you are interested, I highly recommend Nelson's book.
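For the record, "life in the slow lane" can be quantified with the Reynolds number Re = ρvL/μ, the ratio of inertial to viscous forces. The figures below are my own rough order-of-magnitude estimates, not Nelson's numbers:

```python
def reynolds(rho, v, L, mu):
    """Reynolds number Re = rho*v*L/mu: inertial over viscous forces."""
    return rho * v * L / mu

RHO_WATER = 1000.0  # kg/m^3
MU_WATER = 1e-3     # Pa*s (dynamic viscosity of water)

# A swimming human: about 2 m long, moving at about 1 m/s.
re_human = reynolds(RHO_WATER, 1.0, 2.0, MU_WATER)
# A bacterium: about 1 micrometre long, swimming at about 30 micrometres/s.
re_bacterium = reynolds(RHO_WATER, 30e-6, 1e-6, MU_WATER)

print(f"human:     Re ~ {re_human:.0e}")      # ~2e+06: inertia dominates
print(f"bacterium: Re ~ {re_bacterium:.0e}")  # ~3e-05: viscosity dominates
```

Eleven orders of magnitude apart: for the bacterium, water behaves much like corn syrup does for us.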

 

Tuesday, 14 September 2010

Values


"An investment in knowledge always pays the best interest"
Benjamin Franklin

It was a beautiful morning and the singer, who was one of the most popular on his planet, sat in his living room to read a magazine. He was feeling well and happy. He suddenly felt like doing something good, something honourable. Then his eyes passed over a small note on one of the magazine's pages. Archaeologists were trying to raise 500 000 pieces to complete a project not far from where he lived. They had found two truly beautiful floor mosaics of glass and metal, almost complete. Both dated from around two thousand years before, said the magazine. It was one of the brightest periods in Jau history. The amount would be spent on the construction of a museum over the artefacts, mimicking the structure that should have been there before. The mosaics were not the only things they found. Many other pieces were there: parts of the original construction, daily utensils, artwork. The project would allow people to walk around the whole reconstructed structure on suspended glass platforms.

The singer smiled. 500 000 was not too much. Actually, he himself had spent more than that on a flying vehicle three months ago. And last week's holiday in the south continent with his friends had cost as much as ten times that. He felt sorry for those archaeologists. Such a small amount, and still they could not raise it. At some point, the article said, the government was going to give them funding, but then decided to cut it because the project would not have a big impact on society. So he decided that he would give them the amount.

But he was the most popular singer on his planet and it was too difficult for him to do anything in secret. Besides, his public relations team decided that it would be good for his image to turn the occasion into a big event. However, when the people of the city discovered his plans, they wondered if that was the best use for that money. Soon, a campaign led by the citizens was urging him to give the money to a more useful cause. What a waste of money it was to donate 500 000 pieces to restore some old bricks while people were suffering every day with more immediate problems. Little time passed before they agreed on the cause. Scientists had been searching for a cure for Jora's disease for more than sixty years, and that amount could help them. Obviously, those scientists agreed.

The singer was puzzled. All he wanted was to do a good deed. There had been no campaign against his new car or his previous week's holiday, so he could not understand why people were making such a big deal of this. Lots of people were holding huge signs in front of his house. His advisers suggested that he give up the idea and give the money to the disease research instead. It would be better for his image. He was tired, but he was determined to do a good deed. So he went to the bank, collected the amount, finished what he had begun and did what he thought was right.

One kid stopped on the highest platform and stared at the mosaic in awe, her tentacles holding firmly onto the coloured plastic bars. She had never seen anything like that before. The fantastic bipedal creatures depicted in the artwork gave her chills. She never forgot that. The whole structure filled her dreams. It inspired her like nothing else before. She looked for the creatures on the network and read all the stories. When there were no more stories about the creatures, she started to read about the people who created them. And then she read about why they created those creatures. What inspired them? And she learned that those people, who lived long before she was born, asked the same questions about the world that she used to ask herself and her parents. She wanted to know the answers and she studied hard for that. She became a biologist and she ended up knowing many of the answers, but in the process she also found many other questions. She was seeking the answer to one of them: the cure for Jora's disease. Late one night, she remembered the mosaic. And when she looked at the pattern of the molecules on her computer screen, she saw something wonderful. The answer had always been in the mosaic. Jora's disease was eradicated from her planet within fourteen years.

But, in fact, the child never saw the mosaics, for the singer was convinced by the citizens that giving the money to the disease research was much more important than giving it to maintain some old stones. The archaeologists could never raise the money and the mosaics were lost to the action of the planet's harsh weather. The kid was inspired by that, so she became a singer. A famous one. The best on her planet. She always had enough money, but she also remembered what had happened to the old singer in her childhood, and always kept her own possessions secret. She died of Jora's disease, as did many other people. Eventually, the cure was found some centuries after that. But that was okay, for the people were happy. They never stopped singing.

Friday, 10 September 2010

Science Stamps

I probably haven't said yet that I am a collector. Actually, I collect collections. One of them happens to be a stamp collection and, thanks to a friend who wanted some Royal Mail stamps some time ago, I ended up signing up for their newsletter. I have already bought two nice sets of stamps, and today I received an email with one of their new sets:
"Medical Breakthroughs - The Mint Stamps have been created using medical imagery that best illustrates each of our six specially selected breakthroughs: beta-blockers, penicillin, hip replacements, artificial lens implants, malaria parasites and CT scanners."
And here is a picture of the presentation pack:

They will be issued on the 16th of September and
"The Presentation Pack also takes an in-depth look at how each of these significant breakthroughs came about, courtesy of Dr Richard Barnett from the Department of History and Philosophy of Science, University of Cambridge."
Although I really feel that magnetic resonance should be among the breakthroughs, admittedly as a biased fan of it, I reckon that it must be difficult to choose just six among the many achievements of (real) medicine. By the way, check out the link to the CT scanner to see the connection between it and The Beatles.

While browsing the stamps, I also found this other pack:

The Royal Society Presentation Pack - A beautifully illustrated Presentation Pack containing all ten Royal Society stamps, together with authoritative text by Eugene Byrne telling the stories of those featured.
If you live in the UK, the stamps plus the postage end up costing just around £6.00, which is fair. As I said, I bought two other packs and they were worth the price.

Thursday, 9 September 2010

Standards


The man in the photo above is the UK Business Secretary Vincent Cable (I hope I have sufficiently highlighted the word Business in this statement). In a radio interview on the 8th of September, the above politician said that presently in the UK
"45% of grants were not of excellent standard"
It is a very worrisome figure, isn't it? Now, the bare truth is that politicians obviously know very little, usually nothing, about science. I don't mean popular science, I mean scientific research, although I believe that many of them do not know the former either. Therefore, to make this kind of strong assertion, politicians rely on certain "objective" quantities. In the case of science, they have pushed the idea that the number of publications, weighted by the so-called impact factor of the journals, and citations are good numbers. The number of patents as well.

Another criterion, which is more subjective, is called impact. Impact means how much society will benefit from the research. Sounds pretty noble but, again, after throwing away all the nice words, the true meaning is how much money the research can generate in the short term. Money and short term are really the key terms here. And not only politicians are to blame, but we as well. Everybody has a relatively short lifespan in historical terms. We are all interested in our own lives, which is obviously okay. So we want results from our invested money and we want them fast. Everything boils down to that. Therefore, a good translation of Mr. Cable's assertion would be
"45% of grants do not generate enough money in the short term"
Although this would be the honest thing to say, it obviously does not sound nice. Now, what if I said that 99% of all politicians are not of excellent standard and we should cut their budget? I guess it is not very difficult to come up with much more reasonable numbers to support this, right?

Let me just finish with another statement. There is one and only one measure of excellent research: seriousness. It does not matter what the subject is. It may be the least money-generating of all subjects. It may be a completely abstract mathematical theorem. It will be excellent when the researchers involved really care about the subject and explore the area with seriousness and professionalism. That is excellence. No single set of numbers can measure it, and that is the problem. To evaluate true excellence requires true excellence as well. It requires effort and is time consuming. It requires hard and deep thinking and analysis. Even if a study does not get published, even if it is not cited by anyone at present, if it was done seriously, it is excellent.

It's easy to agree with the above arguments, isn't it? But are you really keen to stick to them in your life? Think seriously.

Friday, 27 August 2010

Consensus

Long time no writing, I know... That's because my present contract was finishing and our project turned out to be more complicated than we thought... Anyway, let me start by suggesting a game. Watch this video:





Now, how many logical fallacies can you spot in this talk? (Hint: List of Fallacies) I will point to four:

  1. Argumentum ad populum
  2. Negative proof fallacy
  3. Proof by example
  4. Appeal to accomplishment
I won't tell you where they are; let us leave it as an exercise for the reader to find them in the video. Be careful, for some of the fallacies in the list are almost the same, although they are listed under different names in different categories. Actually, I would group them together anyway; they are just special cases of more general concepts.


Before anyone gets angry, I have no intention of talking about global warming. What I want to talk about is the speaker's idea that science is made by consensus. To make a long story short, it is not. And if you were about to argue that he is a scientist, so he must know what he is talking about, I suggest you review the list of logical fallacies above. In any case, I'm a scientist too.

Consensus has nothing to do with science, actually. It has to do with social interactions. Now, I would be lying if I said that scientists do not, in general, develop an emotional attachment to some ideas, to the point that a whole generation decides to adopt a point of view. After all, scientists are humans and share all the emotional states that other humans do. They want to be accepted in their group, they want to be listened to by others, they do not want to look like fools, they have a need to survive (meaning, to earn money) and all that. Many are religious people. And many have the need to be sure that they are not wasting their lives on something that won't turn out to be true. So, they appeal to consensus.

However, and I will probably repeat it many times in this blog, science is not a political campaign, even with the governments' incentive for that kind of behaviour. Science is a kind of adventure where we try to understand (whatever that means) how the world really works. How nature really works (not how we would like it to work...). And we know that the opinion of the majority IS NOT the best way to do it. Any kind of idea that is based on consensus has the risk of not even being a scientific idea! And at this point I am being really broad about what I would call a scientific idea.

You may have noticed that the speaker said that the Earth being flat was science at that time because it was the consensus. Nothing could be more false. The Earth was thought to be flat in the Dark Ages because the world decided to forget what the Greeks had learned about nature. That can hardly be qualified as science. Besides, even if it were considered science at that time, I find it quite disappointing that some part of the scientific community hasn't updated its understanding of what science is in the last 500 years. As far as we know, even what we think is science today may turn out to be wrong upon further thinking! That is the beauty of science: it evolves. Even its own foundations evolve, by accumulating experience, by rethinking, by not being afraid of admitting that a previous statement was wrong and that now we, at least we think, know better.

Next time you hear someone telling you that something is probably true because it's the consensus, think hard about that. Try to challenge it inside your mind. If it is really true, it will survive the challenge. But above all, you must understand why it is or is not true. Don't commit the (logical) error of accepting something just because most people believe it. THAT IS NOT SCIENCE. And also, that is not wise.

Sunday, 13 June 2010

Free Lecture Notes #1


It's been a while since I wrote the last post. It was a very busy week. My contract at Aston finishes in September and, as the life of a post-doc dictates, I have been looking for a job. I also have to prepare for some upcoming conferences, try to finish my projects and publish (or perish, of course), worry about paying bills, visas and many other things. Science is tough to do on days like these.

Therefore, to keep the ball rolling while I don't have the time to write more elaborate things, I decided to start listing the lecture notes I have been accumulating from the arXiv. That's a good trick if you do not have time. :) So, I will start with these first five:


  1. Lectures on holographic methods for condensed matter physics, Sean A. Hartnoll
  2. Lecture notes on the physics of cosmic microwave background anisotropies, Anthony Challinor & Hiranya Peiris
  3. Les Houches Lectures on Black Holes, Andy Strominger
  4. Three lectures on Newton's laws, Sergey S. Kokarev
  5. Gravity & Hydrodynamics: Lectures on the fluid-gravity correspondence, Mukund Rangamani
In particular, the first one seems to be quite interesting. If you have comments about them, that would be nice as well. Have fun.

Monday, 7 June 2010

Spin Liquids


[Simulation of a quantum spin-liquid performed on a flat honeycomb structure - by The University of Stuttgart]

As I have been reading many posts, especially via Condensed Concepts, about spin liquids, I decided to learn a bit about them. So, following the new plan for this blog of using it to help me understand things, I am writing up what I have found in the following arXiv article, which is a compilation of lectures given at the famous Les Houches school in France:
There are some minor typos, so just be sensible when reading. Nothing serious. It seems that although the definition of a spin liquid may make sense in the classical setup, it would not be realisable there, so the concept is usually only applied to quantum spin models. Anyway, let me describe what I understood from the above paper. If anyone has more interesting things to add or corrections to make, I would be very happy to hear them and learn more about the subject!

The term (quantum) spin liquid refers to the ground state of a spin model where no symmetry of the model Hamiltonian is broken.

Let me include at this point two paragraphs for the less technical audience that has been brave enough to continue reading up to here. Physicists may feel free to skip them, as I am going to explain the basic concepts for a broader audience; I will get back to a more technical description afterwards. First, a spin model is a mathematical model describing particles with spin (the quantum version of a magnetic moment), usually, but not necessarily, on a lattice (a collection of points linked by lines). These models are defined through a so-called Hamiltonian function, which is just a formula that gives the energy of the model for each configuration of the spins on the lattice. The ground state is the minimum energy configuration, which is favoured at zero temperature, as all physical systems like to minimise their energy and there are no thermal fluctuations at T=0. You can think of the energy as a cost function that you always try to keep at a minimum. Finally, a symmetry of the Hamiltonian is some kind of modification that you make to the spins, or to any other variable in the Hamiltonian, such that when you put the modified variables back into the formula, the Hamiltonian does not change.

The symmetry part requires some more explanation, I know. Consider a Euclidean vector with two coordinates, v=(x,y), and let us assume that in some system there is a Hamiltonian depending on it given by H=xy, i.e., the product of its two coordinates. If we multiply the vector v by -1, then each coordinate is multiplied by -1 and the Hamiltonian becomes H=(-x)(-y)=xy. It doesn't change. Therefore, the multiplication of v by -1 is a symmetry of the Hamiltonian. Of course, physically meaningful symmetries are more interesting, although the one I have just given you may be considered a very special case of a more general local gauge symmetry, but we are not going to talk about that now. What matters is the idea.
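For the sceptical reader, the toy example can be checked numerically in a few lines (a throwaway sketch of mine, nothing from the lecture notes):

```python
import numpy as np

rng = np.random.default_rng(0)

def H(v):
    # The toy Hamiltonian from the text: the product of the coordinates.
    x, y = v
    return x * y

v = rng.normal(size=2)

# v -> -v is a symmetry: the Hamiltonian value is unchanged.
print(np.isclose(H(v), H(-v)))  # True

# By contrast, flipping only one coordinate is NOT a symmetry:
# it flips the sign of H.
w = np.array([-v[0], v[1]])
print(np.isclose(H(w), -H(v)))  # True
```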

Consider now, for instance, the ground state of the (ferromagnetic) Heisenberg model. This is the classic example of symmetry breaking. The Hamiltonian is rotationally invariant, as it is given by the scalar products of the spin vectors, but the ground state has magnetic order with all spins pointing in the same direction and obviously changes if they are rotated (although a rotation takes it to another ground state). The word liquid in spin liquid is an analogy with the transition from the liquid to the solid state. A liquid is homogeneous and looks the same everywhere, so it has a continuous translational symmetry. On the other hand, a crystalline solid (let us not talk about glasses at this point...) breaks that symmetry in the sense that it is not invariant under arbitrary continuous translations, but only under very specific ones. The same works for rotations, but the basic idea is what matters. This can also be stated as the fact that in a spin liquid the spins do not develop any long range order (LRO) at zero temperature; they are completely disordered even then.
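The Heisenberg example can be made concrete with classical unit-vector spins on a chain: the energy only involves scalar products, so a global rotation leaves it unchanged, while the ferromagnetic ground state itself is visibly changed by the same rotation. A sketch (my own toy setup, classical spins rather than quantum ones):

```python
import numpy as np

rng = np.random.default_rng(1)

def heisenberg_energy(spins, J=1.0):
    """Classical ferromagnetic Heisenberg chain: H = -J * sum_i S_i . S_{i+1}."""
    return -J * np.sum(spins[:-1] * spins[1:])

def rotate_z(spins, phi):
    """Apply the same rotation about the z axis to every spin."""
    R = np.array([[np.cos(phi), -np.sin(phi), 0.0],
                  [np.sin(phi),  np.cos(phi), 0.0],
                  [0.0,          0.0,         1.0]])
    return spins @ R.T

# A random configuration: the energy is invariant under a global rotation,
# because the Hamiltonian only involves scalar products of the spins.
S = rng.normal(size=(10, 3))
S /= np.linalg.norm(S, axis=1, keepdims=True)
print(np.isclose(heisenberg_energy(S), heisenberg_energy(rotate_z(S, 0.7))))  # True

# The ferromagnetic ground state (all spins aligned) minimises the energy,
# but it is NOT invariant: rotating it gives a *different* configuration
# with the same energy. That is spontaneous symmetry breaking.
ground = np.tile([1.0, 0.0, 0.0], (10, 1))
rotated = rotate_z(ground, 0.7)
print(np.allclose(ground, rotated))                                       # False
print(np.isclose(heisenberg_energy(ground), heisenberg_energy(rotated)))  # True
```

The degenerate rotated copies of the ground state are exactly the degeneracy discussed below for broken symmetries.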

That is the reason why these ground states can only be realised in the quantum setup. Classically, there are no fluctuations at zero temperature to destroy an ordered state. However, at the quantum level there exist zero-point fluctuations that can do the job. They can disorder the spins away from their ordered states, guaranteeing that no symmetry is broken.

An interesting characteristic of ground states with broken symmetries is that they are degenerate. By applying the symmetry operation that is itself broken by these states, you get another ground state. This kind of degeneracy should not happen in the spin liquids, but their ground state can still be degenerate, the degeneracy coming from another kind of order called topological order, about which I am certainly going to write a more detailed post in the future.

It seems that there has been no observation to date of a spin liquid phase in a real system, although many simulations of almost realistic models seem to observe it. For instance, the article Exotic Quantum Spin-Liquid Simulated: A Starting Point for Superconductivity?, from which I took the picture for this post, describes one of them. As I said, Ross McKenzie from the blog Condensed Concepts has been writing a lot about that recently, so let me just list some of his posts
Just to summarise things: spin liquids are ground states that break no symmetry of the original Hamiltonian; they have no long range order and can only appear in quantum systems, because these have fluctuations even at zero temperature and these quantum fluctuations can destroy the order. Although from the experimental side these states have not been realised so far, it seems that theoretically they are reasonably well understood, though from the article at the beginning of this post it seems to me that this understanding is only at the mean field level. The detailed description of these states, even theoretically, is still lacking and seems to be an interesting topic of research.

As a finishing note, Misguich at the end of the paper gives an interesting connection of spin liquids with Kitaev's toric code (which the reader may already know from previous posts). The ground state of this topological quantum code is a spin liquid. Although the ground state has a degeneracy, this degeneracy is topological and has nothing to do with the breaking of a symmetry by the ground state, so we are still fine. This analogy is only a final observation in the article, but given that the search for an experimental realisation of a topological code is also a hot topic, this gives another path through which spin liquids may be observable in real systems, although in this case they would be engineered instead of natural.

Friday, 4 June 2010

JoP: Condensed Matter - Highlights 2009

SEM micrograph of a strongly crumpled graphene sheet on a Si wafer - Condensed Matter Physics Group, The University of Manchester

The Journal of Physics: Condensed Matter published here a list of 35 papers that appeared in it in 2009, chosen "on the basis of a range of criteria including referee endorsements, citations and download levels, and simple broad appeal". These papers will be free to read until 31-Dec-2010.

An interesting thing to do is to compare the subjects of the selected works. Graphene is the theme of 8 articles, about 23% of the total. Then comes superconductivity with 4 articles, about 11%, multiferroics with 3, about 9%, and then come all the other subjects with 2 or 1 articles each.

The above list gives supporting evidence to the statement that graphene has been the biggest star of recent years. There are many reasons, and funding is probably one of them. We can argue that graphene has not only many interesting properties but also great potential for technological applications. The same can be said about superconductors, but their moment of impact seems to have passed, at least for now.

Thursday, 3 June 2010

StatPhys and ECCs #3: From bits to spins

[O(3) Spin Model - Credit: Paul Coddington, University of Adelaide]

This was meant to be the final post in the series, but while writing it I realised that I had too much to say and decided to split it into more parts (I still don't know how many...). If you feel lost while reading, try to review the basic concepts from the previous posts:
• Statistical Physics of Error-Correcting Codes #1:
• Statistical Physics of Error-Correcting Codes #2: Statistical Physics Overview
Also, take a look at the references therein. Now, at the end of post #2 on this subject I said that when working with statistical physics we are always interested in what happens in the thermodynamic limit, by which we understand the limit where the number of units N in our system is very large. Actually, this is not always true. There is a lot of interest in what are called finite size effects in statphys, which are the effects that appear when N is not large. Although the difference between large and not large may seem ill defined, mathematically "large" can be translated as taking the limit of N going to infinity.

Taking the system size to infinity may seem radical and even unnatural, but a bit of practice with calculations is enough for anyone to see that one mole of atoms is for all practical purposes (fapp, according to John Bell's expression) infinite. Even smaller numbers, like one billion or one million, may be close enough to infinity to render any deviation completely negligible in most practical situations. Now think about the last film you downloaded (I know you did it, but I will not tell anyone). It was probably of the order of 1 GB, right? This means that it has roughly 10 billion bits. This may not be as big as a mole, but it is still big enough to allow us to consider the system to be in the thermodynamic limit fapp.
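The "fapp infinite" claim is easy to illustrate numerically: the relative fluctuation of the mean of N independent ±1 variables shrinks like 1/√N, so by a million units it is already at the 0.1% level. A quick sketch (my own toy, not from the posts):

```python
import numpy as np

rng = np.random.default_rng(0)

def magnetisation_spread(N, trials=100):
    """Std of the mean magnetisation of N independent +/-1 'spins' (or bits)."""
    # Draw bits in {0,1} and map them to spins via m = 2*<x> - 1.
    means = [rng.integers(0, 2, N, dtype=np.int8).mean() for _ in range(trials)]
    return np.std([2 * m - 1 for m in means])

# Fluctuations of the mean go as 1/sqrt(N): larger samples look ever
# more "thermodynamic", deviations becoming negligible fapp.
for N in (100, 10_000, 1_000_000):
    print(N, magnetisation_spread(N))  # roughly 0.1, 0.01, 0.001
```

At a mole-sized N ≈ 6×10²³ the spread would be about 10⁻¹², which is why the infinite-N idealisation is harmless in practice.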

    Consider then some very large file. To be consistent with post #1, let us call it a message t. Like the electron spin, a bit can assume only two values: the spin can be +1 or -1, while the bit can be 0 or 1. For calculational convenience, it is usual to work with the spin representation (especially because statistical physicists were working that way long before they started to look at bits). We can translate between the two representations in two ways. The first is the linear relation

    \[\sigma=2x-1,\]
    where $\sigma\in\{\pm1\}$ is the spin variable and $x\in\{0,1\}$ is the binary variable. You can see that 0 is mapped to -1 and 1 to +1, as seems most natural. However, there is a second way to map these variables which seems less natural but is actually much more beautiful, namely

    \[\sigma=(-1)^x.\]
    "And where is the beauty?", you may ask, since the map is not only non-linear but also sends 0 to +1 and 1 to -1, which seems a bit weird. In fact, the above mapping is a homomorphism between the Galois field of order 2 and the square roots of unity!

    Let me explain this better. Isolated bits are usually summed using addition mod 2, also known as exclusive OR or simply XOR: $0\oplus0=0$, $0\oplus1=1\oplus0=1$ and $1\oplus1=0$. It is often written with the symbol $\oplus$ to distinguish it from the ordinary + sign, and that is the notation I am going to use. By also defining the product of two bits to be the usual product of two numbers, the bits form a mathematical structure called a finite field, or Galois field, whose order is the number of elements, in this case 2. In particular, under this addition mod 2 the binary field is a group. The two square roots of 1 also form a group under the usual multiplication, and the non-linear mapping defined above maps one group to the other while preserving the group operation. This is easily seen by writing the mapping more formally as $\sigma(x) = (-1)^x$ and observing that

    \[\sigma(x_1\oplus x_2) = (-1)^{x_1\oplus x_2} = (-1)^{x_1}(-1)^{x_2}=\sigma(x_1)\sigma(x_2).\]
    Homomorphisms are always beautiful. This one in particular can even be generalised to higher orders. When the Galois field has prime order p, the addition operation of the field can be taken to be addition mod p (be careful: this does not work for a general, non-prime order!), so we can represent the elements by the integers from 0 to p-1. The mapping then generalises to

    \[\sigma(x)=\exp\left(\frac{2\pi i}{p} x\right).\]
    I will leave it to you to check that this is indeed a homomorphism. The formula can be interpreted as a mapping from the Galois field to spins pointing in various directions, as the exponentials are nothing more than the p-th complex roots of unity. I think all this beautiful mathematical structure is enough to justify using the non-linear mapping instead of the linear one, and that is what I am going to do from now on.
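    For the sceptical reader, the homomorphism property is easy to check numerically. A minimal sketch (the function name is my own choice):

```python
# Check that sigma(x) = exp(2*pi*i*x/p) maps addition mod p to multiplication,
# i.e. sigma((x1 + x2) mod p) == sigma(x1) * sigma(x2).
# For p = 2 this reduces to sigma(x) = (-1)**x.
import cmath
import itertools

def sigma(x, p):
    return cmath.exp(2j * cmath.pi * x / p)

for p in (2, 3, 5, 7):
    for x1, x2 in itertools.product(range(p), repeat=2):
        lhs = sigma((x1 + x2) % p, p)
        rhs = sigma(x1, p) * sigma(x2, p)
        assert abs(lhs - rhs) < 1e-12
print("homomorphism verified for p = 2, 3, 5, 7")
```

    Note that the check runs over every pair of elements, so for these small primes it is an exhaustive verification, not a spot check.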

    Keeping the potential for generalisation in the back of our minds, let's come back to the case of bits, the binary field. There are three classic ensembles in statistical physics. The word ensemble refers to a particular situation of study where some microscopic variables are allowed to vary around a mean value while others are kept fixed. When we fix the number of particles (or units; I will often say particles, as I am a physicist and some habits are hard to lose) and the average value of the energy, we call this the canonical ensemble, and the temperature of the system will then be constant. There are many ways to show (my favourite being the maximum entropy argument you can find in Jaynes' book) that when the system is in equilibrium the canonical ensemble is described by the Gibbs distribution

    \[P(\sigma)=\frac{e^{-\beta H(\sigma)}}{Z},\]
    where $H$ is the Hamiltonian (energy, cost function etc.) of the system and $Z$ is the partition function, one of the stars of the show. From the partition function we can derive all the important properties of the system.
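    As a concrete illustration (not part of the derivation that follows), here is a brute-force computation of the Gibbs distribution and partition function for a tiny toy Hamiltonian of my own choosing, a short Ising chain:

```python
# Brute-force Gibbs distribution for n spins with Hamiltonian
# H = -sum over chain neighbours of s_i * s_{i+1} (a toy choice).
import itertools
import math

def hamiltonian(spins):
    return -sum(s1 * s2 for s1, s2 in zip(spins, spins[1:]))

def gibbs(n, beta):
    states = list(itertools.product((-1, 1), repeat=n))
    weights = [math.exp(-beta * hamiltonian(s)) for s in states]
    Z = sum(weights)                      # the partition function
    return {s: w / Z for s, w in zip(states, weights)}

P = gibbs(4, beta=1.0)
assert abs(sum(P.values()) - 1.0) < 1e-12  # the distribution is normalised
# The two fully aligned states have the lowest energy, hence the largest weight:
assert max(P, key=P.get) in {(1, 1, 1, 1), (-1, -1, -1, -1)}
```

    Enumeration like this only works for a handful of spins, of course; the whole point of statistical physics is to extract the same information analytically when N is huge.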

    Our first task will then be to find the analogue of the Hamiltonian in our case. Let's first understand what we want when analysing error-correcting codes. The interesting question is how much noise the code can stand before we can no longer perfectly recover the message. We will not accept errors in the recovered message, because if we did, the message would degrade each time we decoded it. Imagine that on your hard drive: if you accessed a document too often, you would lose it. Just to give the right credits here, the first person to notice the mapping between codes and spin systems was Nicolas Sourlas, back in 1989:
    However, we will deal with a slightly different model from his. The basic idea is to use Bayes' rule to infer the original codeword from the corrupted one. Once we have the codeword, the original message is automatically retrieved, as the correspondence is one-to-one. Let us call the dynamical variable representing the possible codewords $\tau$ and the received codeword, already corrupted by noise, $r$. Then, according to Bayes' rule, we have
    \[P(\tau|r)=\frac{P(r|\tau)P(\tau)}{Z}.\]

    The first term in the numerator is the so-called likelihood of $\tau$ and represents the action of the noise in the codeword. Therefore, this term contains the noise model of the channel. The second term is a prior distribution over $\tau$, where we include any information we have that may help in the decoding. Finally, $Z$ is just the normalisation factor, although you probably already noticed that I am using the same notation as for the partition function because that is exactly what it will be.

    In our case of parity-check codes, we know that the codeword is in the kernel of the parity-check matrix. That is the only prior information we have about the codeword, and we will encode it in the indicator function $\chi(A,\tau)$, which is 1 when $\tau$ is in the kernel of $A$ and zero otherwise. There are many ways to include this term in the calculation, but the most physical one is to see it as an interaction term among the coordinates of $\tau$. We can then write the probability of $\tau$ as

    \[P(\tau)=\frac{e^{-\beta H}}{Z},\]
    where

    \[H=-\ln P(r|\tau)-\ln \chi(A,\tau).\]
    This finally has the form of a Gibbs distribution. The inverse temperature $\beta$ was introduced for convenience and can be set to 1 to recover the original problem. The first term of the Hamiltonian usually factorises over the spins, with each component of $r$ acting on a different component of $\tau$, which gives them the role of local fields.
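    To see the Bayesian decoding at work, here is a sketch with a toy parity-check matrix and noise level of my own choosing (a simple repetition code over a binary symmetric channel, small enough to maximise the posterior by exhaustive enumeration):

```python
# Exhaustive Bayesian decoder for a toy parity-check code over a
# binary symmetric channel (BSC). A and f are illustrative choices.
import itertools

A = [[1, 1, 0],     # parity-check matrix: codewords satisfy A tau = 0 mod 2,
     [0, 1, 1]]     # here forcing all bits equal (a length-3 repetition code)
f = 0.1             # BSC flip probability -> defines the likelihood P(r|tau)

def in_kernel(tau):  # the prior chi(A, tau): 1 if A tau = 0 mod 2
    return all(sum(a * t for a, t in zip(row, tau)) % 2 == 0 for row in A)

def likelihood(r, tau):  # P(r | tau) for the BSC: each bit flips with prob f
    d = sum(ri != ti for ri, ti in zip(r, tau))
    return f**d * (1 - f)**(len(r) - d)

def decode(r):
    # Posterior P(tau | r) is proportional to likelihood * prior;
    # restrict to the kernel and pick the most probable tau.
    words = [t for t in itertools.product((0, 1), repeat=len(r)) if in_kernel(t)]
    return max(words, key=lambda t: likelihood(r, t))

sent = (1, 1, 1)
received = (1, 0, 1)          # middle bit flipped by the channel
assert decode(received) == sent
```

    The normalisation $Z$ never needs to be computed here, since it is the same for every candidate $\tau$; it only matters when we start treating the posterior as a genuine Gibbs distribution.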

    There are many subtleties in what I wrote above, but that is the general idea: Bayes' rule allows us to make the connection with the statistical physics of the problem. In the next post I will go into the details of this formulation and show how the methods of statistical physics let us extract what we want from the problem.

      Friday, 28 May 2010

      3 Quarks Daily Best Blog Entry Prize

      3 Quarks Daily has invited Richard Dawkins to choose the best science blog entry of the year. Nominations end in 3 days, on the 31st of May, and the prize will be announced on the 21st of June, after the public has had the chance to vote on the nominated entries.

      If you liked any of the entries here, I would like to encourage you to nominate it for the prize. All you have to do is click here and post the link to the blog entry you liked in the comments section.

      Well, the competition is tough, but there is always some hope. :)

      Thursday, 27 May 2010

      Scientific Art



      Art of Science 2010 from Princeton Art of Science on Vimeo.

      I have wanted to post this for some time, so let me do it before it becomes too old news. For the fourth time, Princeton University (the legendary one of Einstein and many others) held the Art of Science competition, with this year's theme being Energy. The video above is a slideshow of the competition's works (don't blame me for the music, it wasn't me who chose it!). These are the first three prizes:

      First Prize
      Xenon Plasma Accelerator
      by Jerry Ross

      Second Prize
      Therapeutic Illumination
      by David Nagib

      Third Prize
      Neutron Star Scattering off a Super Massive Black Hole
      by Tim Koby

      There were many more quite beautiful pictures in the contest; you can see the rest of them, and the past galleries, on the competition's website. You can also read some extra information in the physicsworld.com blog post:

      Tuesday, 25 May 2010

      A New Dark Age

      At the university we are always discussing the present problems with science funding policy. I cannot say the discussion has changed much over the years: ever since I started worrying about this, funding has been directed to more "practical" areas rather than to fundamental research, "practical" meaning with a higher probability of generating money faster. Of course this is expected. The monetary return of applied research carries less risk and comes more quickly than that of fundamental research, and we all know that funding is not really about knowledge, but about money. People will invest in you if they can profit from it, and since they do not live forever, they will not be willing to wait decades for the results, no matter how fantastic those may be.

      I will not try to preach about how important fundamental research is, how only fundamental research brings revolutions and so on, although I will not resist writing a bit about it in a moment. Nor will I try to convince anyone that the LHC generates as many jobs as the construction of a bridge over a river (actually probably many more), with the added advantages that many of the jobs are long-term ones, that many technological advances and solutions to problems will come from it, and that a huge amount of knowledge will be generated. Instead, I will take advantage of this post by Sabine Hossenfelder:
      to express my deep concern about the direction we are headed in. Read the post and you will understand her position, although the title is clear enough. Our main point is that knowledge does not need to be useful in the mundane sense of the word, which obviously means that it does not need to generate money right away.

      The most standard argument is that the advancement of pure knowledge now can bring surprising revolutions in the future. The consequences can be huge and unpredictable. Again, the usual example is quantum mechanics. It is easy to say that solving the problem of the black-body spectrum was just an academic issue around 1900, but it turns out that that little detail, which would surely struggle to get funding today, brought us computers, the laser, magnetic resonance and virtually all of our present technology.
      But I am not going to use this argument, because it is still not exactly the message I want to leave, and after all any person with some vision can understand it. Obviously, if you are worried about being re-elected in the next 4 years, it does not matter to you what science will bring to the world in 10, right? Do you think I am exaggerating? Then read these two articles from the Times Higher Education:
      And these are just some of the news. Physics, philosophy and other fundamental courses are being closed all around the world! It is not a localised phenomenon. You may step up and say: Oh, come on! As if philosophy were useful in some way. First of all, I have a deep respect for philosophy. Physics came from it. And philosophers do what many scientists don't dare: they think beyond the limits of what we know and of what we may know. It is an exploration more than anything else, and it is exactly here that I want to make my point.

      What is being pushed into the background everywhere is the act of thinking. The most powerful product of knowledge is not money, technology or health improvements. These may be wonderful by-products, but I dare say they are not the main ones. The really important effect, the one most feared by the minority at the upper levels of the food chain, is that knowledge changes a person. Each time individuals acquire more knowledge, they see the world in a different way. They start to question what was unquestionable before, become aware of what is going on around them and learn to change prejudices, to identify injustices and to fight for what really matters. Knowledge changes people. That is its most important and powerful result. Not generating money, not even generating jobs, but improving people!

      Think about that, and think hard. Everyone knows the story of the frog that jumps out if you drop it into boiling water, but stays until it dies if you put it in cold water and raise the temperature steadily until it is cooked alive. The former Dark Ages also arose slowly. Back then religion was the enemy to blame, but what lies behind it is always power. Today it may be money, oil or whatever else matters for keeping control of the world, but we are slowly being boiled in a pan, and people must realise it before we end up cooked like the frog. It is obviously more convenient just to carry on living our lives and doing our part, doing what we are told. Much simpler. But I believe there are still people who can see beyond that. Hope is not over yet. As we say in Brazil, hope is the last one to die.

      I will finish with a phrase I read on some blog, although I do not remember exactly where (if you can identify it, I will happily put the link here): Think. It is not illegal yet.

      Sunday, 23 May 2010

      Apollo 11 Saturn V Launch (HD) Camera E-8


      Apollo 11 Saturn V Launch (HD) Camera E-8 from Mark Gray on Vimeo.

      They also have a website named Spacecraft Films with a very nice collection covering virtually the entire US space program. You can watch the movies and buy the DVDs if you are a real fan. Via Open Culture

      Friday, 21 May 2010

      About Life

      Mycoplasma mycoides - AP Photo/J. Craig Venter Institute

      I usually don't like repeating other people's posts, especially because people have probably read about it everywhere else before reading it here. :( But I had been meaning to write a post about life for some time, so let me use the opportunity to do so.

      As news travels fast, everybody by now must know about the creation of an "artificial" cell by the J. Craig Venter Institute. The paper was published in Science Express:

      [1] Gibson et al., Creation of a Bacterial Cell Controlled by a Chemically Synthesized Genome, www.sciencexpress.org / 20 May 2010 / Page 4 / 10.1126/science.1190719

      As is well explained in this discussion in Nature (you can also find another nice discussion in the Edge that includes the opinion of Freeman Dyson), what they did was construct a pre-designed DNA from parts extracted from a unicellular organism called Mycoplasma mycoides (the one in the photo above), reassemble it and implant it into another of these organisms from which the original DNA had previously been extracted. There are no synthetic components in the usual sense of the word; what was "synthetic" was the design of the new DNA, which did not exist before. They temporarily gave the new organism the beautiful and creative name Mycoplasma mycoides JCVI-syn1.0 (the guy is already going to win a Nobel and still wanted to immortalise his initials in the poor cell).

      The feat is not small, though. It has important implications and opens many doors. One important aspect is that it tests some of our present knowledge of how DNA works. DNA is a program, and what they are doing is writing test programs to check how much we really know about the programming language. Given that the created cell seems to function and reproduce normally, it shows that we understand at least the basics of writing these programs.

      Think of the cell as a very delicate computer that interprets a program encoded in the form of DNA. Mathematically, it's okay to see the cell as a Turing machine with the DNA as the input tape (to see what I am talking about, look at the first YouTube video in the sidebar to your left). Consequently, one of the doors their work opens is the possibility of varying the program and seeing what happens.

      Before I continue to talk about life, let me just list here some articles and blog posts about JCVI's team feat:


      I have been to some conferences on complex systems lately, and many people have been trying to create artificial organisms truly from scratch. It is a different kind of strategy: instead of using existing DNA, the idea is to discover the minimum possible assemblage of parts that is capable of creating a life-form when put together. Everybody knows that the definition of life is problematic, but in this case the idea is to create a self-replicating thing that is autonomous, in the sense that once created, it no longer needs our assistance to survive and reproduce. I believe many groups are working on this, but I cannot find the references right now. If somebody would like to point out some of them, I would be glad to include them here.

      In any case, you see that life may have different meanings and definitions. You may have been convinced by someone that in order for an organism to be alive it must reproduce, right? Wrong. That is a species-based definition. Reproduction is important for natural selection, but a sterile man is still alive, even though he cannot reproduce. An individual-based definition of life would be one that lets us take any "individual" in the universe and, just by looking at it, decide whether it is alive or not. Not an easy task, of course.

      When a definition is difficult to attain, the way around it is to identify some properties the definition must include. This is equivalent to saying that if we cannot find sufficient and necessary conditions for something, we can concentrate on either sufficient or necessary ones. I have been discussing this with some colleagues, and one condition I really consider necessary for any individual to be alive is the capacity to process information. In more mathematical terms, I would say that any living organism must be a Turing machine, though not necessarily a universal one. Just to be clear, this is a necessary, not a sufficient, condition. Not all Turing machines are alive, but I would never consider alive anything that cannot change its state according to some input information.

      So the one point I want to make here is that life requires information processing. Unfortunately, this is the only necessary condition on my list, although I think something can already be explored by using it. For instance, I have read many comments on Cosmic Variance about life and the second law. This is a quite interesting discussion: is life invariably linked to the increase of entropy? One way to attack this problem is to look at the necessary conditions for life and see whether they require it. Here I must draw attention to something called reversible computation. Rolf Landauer, some time ago, proposed the famous idea that the erasure of one bit of information increases the entropy of the environment by an amount $k_B \ln 2$, where $k_B$ is Boltzmann's constant. Usual computing gates, like the AND or OR gates, take two bits and give one in return. Modern computation is based on the use of these gates, and according to the so-called Landauer principle, they are necessarily dissipative and increase the entropy of the environment.

      However, there is another kind of gate which takes 2 inputs and gives back 2 outputs in such a way that there is a one-to-one correspondence between inputs and outputs; by this I mean that given the outputs and the knowledge of which gate was applied, you can recover the inputs. An example is the CNOT gate. CNOT is reversible and does not erase information, so according to the Landauer principle it does not necessarily increase entropy. Now, if there is any way, at least in principle, in which an organism could process information using reversible instead of dissipative gates, it would not need to increase entropy, at least for those processes.
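      The difference between the two kinds of gate is easy to see by enumerating their truth tables; a minimal sketch of my own:

```python
# CNOT is reversible (a bijection on pairs of bits), while AND is not:
# applying CNOT twice recovers the input, so no information is erased.
import itertools

def cnot(a, b):
    return a, a ^ b      # control a unchanged; target flipped when a = 1

# Bijectivity: the four input pairs map to four distinct output pairs.
outputs = {cnot(a, b) for a, b in itertools.product((0, 1), repeat=2)}
assert len(outputs) == 4

# CNOT is its own inverse, so the inputs are always recoverable:
for a, b in itertools.product((0, 1), repeat=2):
    assert cnot(*cnot(a, b)) == (a, b)

# AND collapses three input pairs onto the single output 0,
# so the inputs cannot be recovered from the output alone:
assert len({a & b for a, b in itertools.product((0, 1), repeat=2)}) == 2
```

      This is exactly the sense in which, by the Landauer principle, CNOT carries no obligatory entropy cost while AND does: the loss of one output bit is what forces dissipation.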

      Although this is not such a big result, it demonstrates that formalising and analysing some fundamental aspects of life is possible. You may say I am biased, and I completely agree, but these concepts seem to be nicely described in the framework of information theory (as well as thermodynamics/statistical physics, which in the end are all related). For instance, note that even reproduction amounts to transmitting information from one individual to another. But I will deal with that in another post.

      Thursday, 20 May 2010

      StatPhys and ECCs #2: Statistical Physics Overview

      For those who still remember, I decided to talk about my work on statistical mechanics (statphys) applied to error-correcting codes (ECCs). Yes, that is the work that kept me away from writing this blog for the last three years... In the first part of this "series" I wrote about the basic concepts of linear error-correcting codes. That discussion was all about information theory. Today I am going to write a bit about the other side of the coin: statistical physics.

      Probably many of you have already studied statistical physics and know the fundamental concepts. It is interesting, however, that statistical physics is not as popular with a wider non-specialist audience as other areas of physics. I cannot blame anyone, as I ended up studying statistical physics by chance and only realised how interesting it is afterwards.

      Statistical physics was born with the theory of gases. It was a very logical step: if matter is made of atoms obeying the laws of Newtonian mechanics, then it should somehow be possible to derive the laws of thermodynamics from a microscopic description of the system based on these premises. The truth is that many people did not believe in atoms at that time; Mach, for instance, was one of them.

      It is well known that Boltzmann (the guy on the right) was the great name of statistical mechanics, although many other great physicists, like Maxwell and Einstein, also contributed largely to the field. It is also known that Boltzmann, after dedicating his life to statistical physics, killed himself amid the widespread disbelief in the notion of the atom, a disbelief that only faded after Einstein published his paper on Brownian motion.

      Statistical physics is the area of physics (at least it started in physics...) that deals with the behaviour of a large number of interacting units and the laws that govern them. It started with gases, but the most famous model of statistical physics is the Ising model (see here the curious story of Ernst Ising). The Ising model is a mathematical model of many units interacting among themselves. It is strikingly simple, but surprisingly general and powerful. It is defined in terms of a Hamiltonian, the function giving the energy of a system. It was devised to explain magnetic materials, and so it is a function of the spins of the electrons in those materials, symbolised by $\sigma_i$, where $i$ runs from 1 to N, the total number of spins. For spins, the important interaction is called the exchange interaction, and it favours spins being aligned with each other. As a spin can take two values, which we choose as +1 and -1, the interaction favours the case where the product of two spins is 1. As all systems in nature want to minimise their energies, we can then write the Hamiltonian as

      \[H=-\sum_{\left\{i,j\right\}} \sigma_i \sigma_j,\]

      where the symbol under the summation sign says that we must sum over all the pairs that are interacting.
      It turns out that this simple model of interaction, properly generalised, can describe not only the interaction between two spins, but also between two or more people, robots in a swarm, molecules in a gas, bits in a codeword and practically EVERY interaction between ANYTHING. And I really mean it. Of course there is no free lunch, and the calculations become more difficult as you increase the sophistication of the system, but the fundamental idea is the same. For example, the Ising model is usually defined with spins on a regular lattice, which in one dimension is a straight line with spins located at equally spaced points, in two dimensions a square grid, in three a cubic one (like the one at the side), and so on.

      The simplest choice is also to consider that only first neighbours interact, which means that, for instance, in the cubic lattice above, spins interact only if there is an edge linking them. The one-dimensional model is exactly solvable; the two-dimensional one also, though it took many years and a huge mathematical effort by Lars Onsager (a Nobel Prize winner in Chemistry) to solve it. Finding the ground state of the three-dimensional case is already NP-complete and therefore practically hopeless (until some alien comes to Earth and prints the proof that P=NP in crop circles in the countryside).
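      To give a flavour of what "exactly solvable" means in one dimension, here is a small sketch checking the known closed-form partition function of the zero-field chain with periodic boundaries against brute-force enumeration (the function names are mine):

```python
# The 1D Ising chain with periodic boundaries and zero field is exactly
# solvable: Z = (2 cosh beta)^N + (2 sinh beta)^N (the transfer-matrix
# result). We check it against direct enumeration for small N.
import itertools
import math

def Z_brute(n, beta):
    total = 0.0
    for s in itertools.product((-1, 1), repeat=n):
        # periodic chain: spin n-1 also interacts with spin 0
        H = -sum(s[i] * s[(i + 1) % n] for i in range(n))
        total += math.exp(-beta * H)
    return total

def Z_exact(n, beta):
    return (2 * math.cosh(beta))**n + (2 * math.sinh(beta))**n

for n in (3, 4, 6):
    assert abs(Z_brute(n, 1.3) - Z_exact(n, 1.3)) < 1e-9 * Z_exact(n, 1.3)
```

      The closed form comes from writing Z as the trace of the N-th power of a 2x2 transfer matrix, whose eigenvalues are 2 cosh(beta) and 2 sinh(beta); no such trick is known in three dimensions.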

      Then things got complicated when people started to wonder what happens when the interaction between the spins is different for each edge in the lattice. The most interesting case turned out to be when it is randomly distributed on the lattice. The Ising model conveniently generalised is written as

      \[H=-\sum_{\left\{i,j\right\}} J_{ij} \sigma_i \sigma_j,\]

      where now $J_{ij}$ controls the characteristics of the interaction. Note that if $J$ is positive the interaction, as before, favours alignment and is called ferromagnetic, while if $J$ is negative it favours anti-alignment and is correspondingly called anti-ferromagnetic. If we restrict ourselves to the simple situation where $J$ can be either +1 or -1 at random on the lattice, we discover that this is a highly complicated case, which gives rise to a kind of behaviour called a spin-glass state.

      This kind of randomness is called disorder. In particular, if for every measurement we fix the interactions to one configuration, do our measurements, then change the configuration, fix it and measure again, and so on, the disorder is called quenched disorder. I will write a longer post about disorder soon, as it is a huge and interesting topic. There are two other places in the Ising model where disorder can appear. One is in the form of the lattice: it does not need to be regular, it can be any configuration of points linked by as many lines as you can imagine, what we call a random graph. You can also imagine that the interaction involves more than two spins: it can involve three or, more generally, $p$ spins multiplied together, like $\sigma_{i_1}\sigma_{i_2}\cdots\sigma_{i_p}$. Finally, disorder can appear in the form of random local fields, a field being a variable $h_i$ multiplying the $i$-th spin, which for $p=2$ (our usual two-spin interaction) can be written as

      \[H=-\sum_{\left\{i,j\right\}} J_{ij}\sigma_i\sigma_j -\sum_i h_i \sigma_i \]
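      As a small illustration of this generalised Hamiltonian, the sketch below evaluates it on a random instance with $\pm1$ couplings and random fields (all numerical choices are mine, purely for illustration); the couplings are drawn once and then held fixed, which is exactly the quenched disorder described above:

```python
# Evaluate H = -sum_{i<j} J_ij s_i s_j - sum_i h_i s_i on a small random
# instance. J and h are drawn once and kept fixed (quenched disorder).
import itertools
import random

random.seed(0)
n = 6
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
J = {e: random.choice((-1, 1)) for e in edges}   # +-1 couplings: a spin glass
h = [random.uniform(-0.5, 0.5) for _ in range(n)]  # random local fields

def energy(s):
    return (-sum(J[i, j] * s[i] * s[j] for i, j in edges)
            - sum(hi * si for hi, si in zip(h, s)))

# Ground state by exhaustive search (feasible only for tiny n):
ground = min(itertools.product((-1, 1), repeat=n), key=energy)
assert energy(ground) <= energy((1,) * n)   # at least as good as all-up
```

      With competing $\pm1$ couplings the all-up and all-down configurations are generally no longer optimal; that frustration is what makes the spin-glass state so hard to analyse.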

      Now, there are two important facts about statistical mechanics that must be kept in mind. First, statistical mechanics is concerned with the typical behaviour of a system, meaning its most common realisation. Second, it is interested in what happens when the number of units N is really huge, which makes sense once you recall that the number of atoms in any macroscopic sample of a material is of the order of $10^{23}$. The nice thing is that in this large-N limit, appropriately called the thermodynamic limit, typical and average behaviour coincide, and we can always look at averages over our systems.

      Statistical mechanics is a giant topic and I will not attempt to cover it in just one post. I will write other posts explaining parts of it in time, but the main idea, that of many interacting units, will be our link with information theory and codes. Don't give up, we are almost there.

      Tuesday, 18 May 2010

      Goodbye Butantan...


      There are some things you take for granted, things you think will exist forever and that you will always have time to see. The building above, adjacent to the main campus of the University of Sao Paulo, Brazil, is called "Instituto do Butantan", the Butantan Institute. Many buildings in this institute are old. In particular, the one that used to house the whole biological collection was designed in the 1960s, which means that for most people of my age it had always existed, though the institute itself was founded in 1901. Here is a Wikipedia article in English about it. Until this Saturday, it held one of the largest collections of venomous animals preserved in vitro in the world, with approximately 450,000 specimens of spiders and scorpions and 85,000 snakes.

      As I have seen many ignorant, to say the least, comments about this on the internet, let me just clarify that people from the institute did not go around the country hunting snakes to put them in jars. Although this may have happened at times (I cannot speak for how science was done in 1901), I know that in Brazil we used to send any snake we found dead, or that had to be killed because it was attacking or had attacked someone, to the institute, and I believe that many of the specimens were acquired in that way. And before anyone calls it some kind of horror show, I bet that every child who ever laid eyes on that collection was amazed, not scared.

      That said, the institute was also the largest producer of antivenom and vaccines in Latin America. Honestly, hardly anybody dies of envenomation in Brazil thanks to it. Those who live in Brazil know how common it is to find snakes, spiders or scorpions in gardens, and not too far from the big cities either. And some of them are really venomous.

      Everything I am saying is in the past, for it took only an hour and a half for a fire to destroy more than a century of research. The fire started at 7 o'clock on Saturday morning and was merciless to the building used to store the collection of preserved animals (neither the one in the photo above nor the one in the photo to your left, but the one in the photos below). The building was not prepared for a fire, and most specimens were preserved in ethanol, which contributed to its rapid spread. The live specimens fortunately could be saved before the fire reached them.

      Science everywhere in the world survives thanks to the efforts of those who love it. Politicians usually do not give a damn unless it is useful for fulfilling their needs for money and power. The general public may even like it, but most consider it nothing more than a kind of entertainment, and scientists some kind of people who want to have a fun life on public funds. In Brazil it is even more difficult, and the Butantan was one of those places that became a legend thanks to the efforts of its researchers, and certainly not of the politicians. It seems that last year the researchers asked for a grant of 1 million reais (the Brazilian currency), about US$500,000 or £380,000, to install fire-protection equipment in the building. I do not need to say it was not granted.

      When I was a kid, almost every child in Sao Paulo had gone to visit the Butantan at some point to see the amazing collections. What kid doesn't want to see snakes, spiders and scorpions? They also had live animals there, and it seems that at least these were saved; everyone has a fantastic story of seeing huge snakes being fed with small mice. I never went there myself, and now I regret it. Although the institute was more than just one building, that one in particular could be considered the most important. You can rebuild everything, but the knowledge is lost forever.

      The amount of money needed to avoid the loss of something Brazil should have been proud of was less than what it costs to pay for an advertisement on a TV channel, less than what the president the rest of the world loves (Lula) would spend on one of his "receptions". Actually, in modern terms, it was almost nothing. Since that Times article about Brazil, everyone here keeps asking me why I am not coming back, as Brazil is meant to be one of the powers of the future. The answer is simple: the only thing getting better in Brazil are the economic indices, and unfortunately I do not have US$1,000,000 to invest to take advantage of it.


      I must admit that while writing this I am really trying to hold back the tears forming in my eyes. The Butantan was part of the history of Sao Paulo, and of Brazil of course. One of those few things we could say was working there, something we were proud of. It is really a pity. A great loss. I doubt I can continue writing about it without indeed crying, so I leave you with some more photos and news:




      Goodbye...