Manipulating vacuum with mirrors

The Wikipedia definition of vacuum is “a region of space devoid of matter”. In other words, just empty space. Still, within the strange world of quantum mechanics, vacuum is not that “empty”. There are so-called quantum fluctuations (or vacuum fluctuations) which give rise to some interesting physical phenomena and can (to some extent) be controlled!

In a perfect crystal, the atoms are organised in a repeating pattern, called a crystal lattice. If one or more atoms do not follow this repeating pattern, we have a crystal defect. In everyday life, you might think of a defect as something being broken. However, in solid-state physics, crystal defects prove to be very interesting, and of high importance! One example of a crystal defect is the nitrogen-vacancy (NV) centre in diamond. As the name suggests, the NV centre consists of a substitutional nitrogen atom (i.e. a carbon atom replaced by a nitrogen atom) next to a vacant position (called a vacancy).

[Figure] The nitrogen-vacancy (NV) centre is a crystal defect in diamond, consisting of a substitutional nitrogen atom next to a vacancy. The NV centre acts like a trapped ion inside the wide bandgap of the diamond.

The NV centre has a total of five valence electrons, two from the nitrogen atom and three from the surrounding carbon atoms. The two electrons from the nitrogen atom and two of the electrons from the carbon atoms form pairs, leaving one unpaired electron. The unpaired electron can efficiently trap an electron from a nearby donor [1], leaving the NV centre with a net charge of minus one. We call these negatively charged NV centres NV⁻. For the remainder of this article, I will simply refer to NV⁻ as NV centres.

Diamond is made out of carbon atoms, of which roughly 99% are carbon-12 (12C for short). The nucleus of 12C consists of six protons and six neutrons, meaning that all the nucleons have “paired up” with another nucleon. In addition, all six electrons also “pair up”. The significance of this pairing is that the diamond lattice is more or less spin-free. Nitrogen, on the other hand, has one additional proton and one additional neutron, resulting in a net nuclear spin of 1. The spin-free lattice of diamond is a good host for the NV centre spin, resulting in a long spin coherence time [2], which is the time for which the spin keeps pointing along one axis before flipping due to interactions with the environment. When irradiated with green laser light, the NV centre fluoresces brightly in red. However, the intensity of this fluorescence depends on the spin state of the NV centre. In other words, it is possible to address the NV centre spin optically!

In a previous blog post, Matteo explained how one could use the long spin coherence time of the NV centre to create a quantum network (if you have not read it yet, I encourage you to do so). For the sake of this article, I will quickly summarise the key concepts:

A photon emitted from the NV centre can be entangled with the spin state. Photons from two remote NV centres can interact to form an entangled pair, provided the photons are indistinguishable. Since each photon is already entangled with the spin of its host NV centre, entangling the photons with each other entangles the two remote NV centre spins [3]! Each NV centre can then act as a node in a larger quantum network, with the spin–spin entanglement providing the link between nodes. Entanglement between distant NV centres has been demonstrated in a famous experiment performed by Ronald Hanson’s group at TU Delft, where two NV centres separated by 1.3 km were used to demonstrate a loophole-free Bell inequality violation [4].

One of the biggest challenges for a large-scale quantum network based on NV centres is the low flux of coherent, indistinguishable photons. Firstly, only approximately 3% of the photons emitted from the NV centre are emitted along the so-called zero-phonon line (ZPL), a purely electronic transition resulting in indistinguishable photons with a wavelength of 637 nm. The remaining emission is accompanied by the emission of a phonon (a quantised vibration of the crystal lattice). Secondly, with a large refractive index of 2.4, total internal reflection at the diamond–air interface results in the photons primarily being emitted into modes propagating laterally in the diamond. Thirdly, the NV centre possesses a relatively long excited-state lifetime of 12 ns, resulting in a low flux of photons. Luckily, all three problems can be solved by engineering the optical environment, which slowly leads us back to the properties of the vacuum. However, before we can start talking about optical engineering, we need to understand the mechanism behind the spontaneous emission of photons.

[Figure] The emission spectrum of an NV centre shows a sharp zero-phonon line (ZPL) accompanied by a wide phonon sideband. Only approximately 3% of the emitted photons are emitted along the ZPL.

The rate of spontaneous emission of photons from an NV centre (or any other quantum emitter) is governed by the coupling between the NV centre and the previously mentioned vacuum fluctuations. Therefore, by controlling the vacuum fluctuations, one can alter the rate of photon emission. One way to control the vacuum fluctuations is to use an optical microcavity [5]. An optical microcavity consists of two highly reflective mirrors facing each other, separated by a small distance. The light inside the microcavity bounces back and forth between the two mirrors, in a similar fashion to the sound wave inside a guitar. Only light that satisfies the resonance condition, namely that the cavity length equals an integer number of half wavelengths, is transmitted through the cavity. In addition, when the separation of the mirrors is comparable to the wavelength of the light, the vacuum fluctuations become confined. In other words, the vacuum fluctuations become stronger inside the cavity, resulting in a stronger coupling between the NV centre and the aforementioned vacuum fluctuations. Hence, the probability of photon emission into the cavity is enhanced. One can picture this as the photons being funnelled into one direction, with one specific wavelength. This enhancement of the emission rate by confinement of the vacuum fluctuations is known as the Purcell effect. We can use mirrors to control the vacuum!
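To get a feeling for the numbers, here is a minimal Python sketch based on the textbook Purcell formula F_P = (3/4π²)(λ/n)³(Q/V). The quality factor and mode volume plugged in below are illustrative assumptions of mine, not the parameters of the cavity in [6].

```python
import math

def purcell_factor(wavelength_m, n_medium, quality_factor, mode_volume_m3):
    """Textbook Purcell factor F_P = (3 / 4 pi^2) * (lambda / n)^3 * Q / V."""
    lam = wavelength_m / n_medium
    return (3.0 / (4.0 * math.pi**2)) * lam**3 * quality_factor / mode_volume_m3

# Illustrative numbers only (assumed, not the Basel cavity parameters):
wavelength = 637e-9                      # NV zero-phonon line, in metres
n_diamond = 2.4                          # refractive index of diamond
Q = 20_000                               # assumed cavity quality factor
V = 50 * (wavelength / n_diamond)**3     # assumed mode volume, ~50 cubic wavelengths

print(f"Purcell factor F_P ≈ {purcell_factor(wavelength, n_diamond, Q, V):.1f}")

# Resonance condition: the cavity length must equal an integer number of half wavelengths.
m = 10                                   # assumed longitudinal mode number
print(f"Cavity length for mode m = {m}: {m * wavelength / 2 * 1e6:.2f} µm")
```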

With the use of piezoelectric nano-positioners, the separation between the two mirrors can be controlled with sub-nanometre precision. By tuning the cavity length such that a cavity resonance overlaps with the NV zero-phonon line (637 nm), my colleagues at the University of Basel have experimentally demonstrated the Purcell effect from single NV centres [6]. The experimental evidence backing this claim is the reduction of the lifetime from 12 ns outside the cavity to 7 ns inside the cavity. This experiment shows that, with the use of two mirrors, one can address and solve the three problems with NV centres mentioned above: the broad emission spectrum, the low extraction efficiency and the long lifetime.
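As a rough consistency check (my own back-of-the-envelope estimate, not a number taken from [6]), the two lifetimes quoted above already tell you roughly what fraction of the emission is funnelled into the cavity mode, if we assume the free-space decay channels are left unchanged:

```python
# Rough estimate of the fraction of photons emitted into the cavity mode,
# using only the lifetimes quoted in the text (12 ns in free space, 7 ns in the cavity).
tau_free = 12e-9   # excited-state lifetime outside the cavity (s)
tau_cav = 7e-9     # excited-state lifetime inside the cavity (s)

gamma_free = 1 / tau_free                      # free-space decay rate
gamma_total = 1 / tau_cav                      # total decay rate inside the cavity
gamma_into_cavity = gamma_total - gamma_free   # extra decay rate added by the cavity

beta = gamma_into_cavity / gamma_total
print(f"Fraction of emission funnelled into the cavity mode: {beta:.0%}")   # ≈ 42%
```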

[Figure] Schematic of the tuneable microcavity used to demonstrate the Purcell effect from individual NV centres at the University of Basel. The set of xyz piezo positioners allows in situ tuning and the study of individual NV centres.

The reduction of the lifetime, and hence the increased flux of coherent photons, is an important step in the right direction for a large-scale, diamond-based quantum network. The next steps will be to make better-performing cavities, alongside the implementation of spin control inside the cavity. Then, in principle, two NV centres located in two remote microcavities can be entangled and act as two nodes in a large-scale quantum network!

Looking back on this article, the key takeaway message is that we can use two highly reflective mirrors to control the vacuum fluctuations, and hence alter the emission of photons from individual NV centres. Please have a look at my video for a visual introduction to this work: https://www.youtube.com/watch?v=xeovlntM66U&t=99s

 

Written by Sigurd Flågan, PhD student in Richard Warburton’s group at the University of Basel, Switzerland.

[1]        L. Novotny and B. Hecht, Principles of Nano-Optics, 2nd ed., Cambridge University Press (2012)

[2]        G. Balasubramanian et al., Ultralong spin coherence time in isotopically engineered diamond, Nature Materials 8, 383–387 (2009)

[3]        H. Bernien et al., Heralded entanglement between solid-state qubits separated by three metres, Nature 497, 86–90 (2013)

[4]        B. Hensen et al., Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres, Nature 526, 682–686 (2015)

[5]        R. J. Barbour et al., A tunable microcavity, J. Appl. Phys. 110, 053107 (2011)

[6]        D. Riedel et al., Deterministic Enhancement of Coherent Photon Generation from a Nitrogen-Vacancy Center in Ultrapure Diamond, Phys. Rev. X 7, 031040 (2017)


The Coldest Place in the Universe

Have you ever wondered where the coldest place in the universe is?

Well, you may be surprised to find out that it’s actually in our Milky Way galaxy, more precisely here: https://www.google.dk/maps/@-11.6368098,-83.7133284,23035101m/data=!3m1!1e3?hl=en.

Yes, it is true. The Earth contains the coldest place in the known cosmos. In fact, over the last century, scientists have been able to reach temperatures just fractions of a kelvin (K) above absolute zero, the coldest temperature physically possible, equal to 0 K or −273.15 °C. This result is astonishing if you consider that the average temperature of the universe is around 2.7 K. The coldest natural temperature is found in the Boomerang Nebula in the constellation Centaurus, and has been estimated to be around 1 K.

How is it possible to reach such low temperatures artificially?

The journey towards cryogenic temperatures started almost 150 years ago [1], when Carl von Linde patented equipment to liquefy air, which has a boiling point of around 80 K. The process was rather simple, relying on pure thermodynamics and exploiting the Joule–Thomson expansion process. By adiabatically (i.e. avoiding heat exchange with the external environment) lowering the pressure of the gas through its expansion, the increase in intermolecular distance lowers the kinetic energy of the molecules and therefore the temperature of the gas. The cooled gas was then fed back into the cycle, repeating the process until all the gas reached the liquid state. The liquefaction of air was subsequently followed by that of nitrogen and oxygen.

About 30 years later, Dewar managed to liquefy hydrogen, exploiting a process similar to the one described above. However, this turned out to be a more complex problem: the inversion temperature of hydrogen is only around 200 K (and that of helium is even lower, around 45 K), whereas for air, nitrogen and oxygen it is well above 500 K. The inversion temperature is the temperature below which the Joule–Thomson process becomes effective and leads to cooling of the gas. Therefore, the Linde cycle has to be started at an already low temperature. Dewar succeeded by using liquid nitrogen (77 K) to precool the hydrogen before starting the liquefaction process. However, in order to do that, he had to invent a new instrument which allowed him to thermally decouple the hydrogen from room temperature, the Dewar flask: a vacuum-insulated, silver-plated glass container. Sounds familiar? Yes, it is exactly what we use every day to keep our coffee warm, a Thermos!

In 1908 Kamerlingh Onnes, the “father of low-temperature physics”, managed to cool 4He down to its boiling point, reaching the record temperature of 4.2 K and opening the few-kelvin temperature range to science. He was awarded the Nobel Prize in Physics in 1913 “for his investigations on the properties of matter at low temperatures which led, inter alia, to the production of liquid helium”. Eventually, Onnes realized that by increasing the evaporation rate of the liquid, it was possible to reduce its temperature even further. In 1920 he successfully cooled a bath of 4He down to 0.8 K by pumping the helium vapour away from the chamber. However, since the vapour pressure decreases exponentially with temperature, this method becomes inefficient very quickly.
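To see why pumping on the bath stops working, here is a back-of-the-envelope Clausius–Clapeyron estimate in Python. The latent heat is an approximate textbook value and is treated as temperature-independent, so the numbers are only indicative of the trend:

```python
import math

# Rough Clausius-Clapeyron estimate of the 4He vapour pressure:
# p(T) ≈ p0 * exp(-(L/R) * (1/T - 1/T0)), with the latent heat L taken as constant.
L = 83.0              # approximate molar latent heat of vaporisation of 4He, J/mol
R = 8.314             # gas constant, J/(mol K)
p0, T0 = 101e3, 4.2   # ~1 atm at the normal boiling point (Pa, K)

def vapour_pressure(T):
    return p0 * math.exp(-(L / R) * (1 / T - 1 / T0))

for T in (4.2, 2.0, 1.0, 0.8):
    print(f"T = {T:.1f} K  ->  p ≈ {vapour_pressure(T):.3g} Pa")
# The vapour pressure, and with it the cooling power, collapses by orders of magnitude,
# which is why evaporative cooling of 4He stalls below roughly 1 K.
```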

At this point in history, the technique of gas liquefaction reached its bottleneck, and fundamentally different technologies were required. One solution was offered by adiabatic electronic demagnetization, which was proposed in the late ‘20s and developed further during the mid 20th century, culminating in the adiabatic nuclear demagnetization technique, which made it possible to reach around 100 microkelvin by the end of the ‘60s. The main idea behind this technique is to adiabatically disorder the magnetic moments in a material by removing the magnetizing field. The process of inducing disorder requires energy, which is absorbed from the surroundings as heat. However, as the name suggests, this technique requires working with magnetic materials, which are uncommon, and at the same time it cannot maintain such low temperatures over long periods.

Another approach started to be developed during the second half of the last century, dealing with the dilution of 3He (a stable isotope of helium) in 4He, which made it possible to implement continuous refrigeration down to the millikelvin regime. The first prototype was experimentally realized in 1964 in the Kamerlingh Onnes Laboratorium at Leiden University, in the Netherlands. When a mixture of 3He and 4He is cooled to a sufficiently low temperature, for example in a low-pressure liquid 4He bath, the mixture tends to separate into two liquid phases: one which is pure 3He and the other which is mostly 4He with a small amount of 3He, the proportion of which is fixed at equilibrium. By evaporating 3He from the dilute phase, the ratio of the mixture is altered, and the system tries to get back to equilibrium by moving 3He from the pure to the dilute phase. This is an endothermic process, which absorbs energy from the surroundings as heat, leading to the cooling of the system. The equipment that exploits this process is known as a dilution refrigerator, which is the workhorse of low-temperature physics experiments, such as the computation with spin qubits that Stephan and Yanick discussed in a previous blog post.
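The price to pay is that the available cooling power collapses as the temperature drops. A commonly quoted approximation for an ideal dilution refrigerator (see e.g. Pobell [1]) is Q̇ ≈ 84·ṅ₃·T², with ṅ₃ the 3He circulation rate in mol/s; the circulation rate assumed below is only a typical order of magnitude, not that of any specific machine:

```python
# Approximate cooling power of an idealized dilution refrigerator:
# Q_dot ≈ 84 * n3_dot * T^2  [W], with n3_dot in mol/s and T in K (see e.g. Pobell [1]).
n3_dot = 300e-6   # assumed 3He circulation rate: 300 µmol/s (typical order of magnitude)

for T in (0.1, 0.05, 0.01):   # 100 mK, 50 mK, 10 mK
    Q = 84 * n3_dot * T**2
    print(f"T = {T * 1e3:.0f} mK  ->  cooling power ≈ {Q * 1e6:.1f} µW")
```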

Towards the end of the 20th century, the novel method of atomic laser cooling proved successful in reaching even lower temperatures. This technique reduces the average velocity of a small cloud of atoms by making them repeatedly absorb photons from precisely tuned laser beams, each absorbed photon giving an atom a tiny momentum kick that opposes its motion. A smaller average velocity means a smaller kinetic energy, corresponding to a reduced temperature of the atoms (in reality this technique is much more subtle, so for more information I suggest this reading [3]). Applying this procedure allowed researchers in 2015 to cool a small cloud of rubidium atoms down to 50 picokelvin, that is, 50 trillionths of a degree above absolute zero! [2]
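Even the simplest variant of laser cooling already gets impressively cold: the Doppler limit T = ħΓ/(2k_B) for the rubidium D2 transition (natural linewidth Γ/2π ≈ 6 MHz) works out to roughly 146 microkelvin, and the sub-Doppler and matter-wave-lensing tricks used in [2] go far below that. A quick check in Python:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
kB = 1.380649e-23        # Boltzmann constant, J/K

gamma = 2 * math.pi * 6.07e6        # natural linewidth of the Rb D2 line, rad/s
T_doppler = hbar * gamma / (2 * kB)
print(f"Doppler cooling limit for Rb: {T_doppler * 1e6:.0f} µK")   # ≈ 146 µK
```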

[Figure] Historical development of refrigeration temperatures [1]

Eventually, after this long story, I am pretty sure you are wondering: why are scientists interested in reaching such low temperatures?

One important phenomenon that only happens at low temperature is superconductivity. In 1911 Onnes observed that the resistance of mercury dropped to zero as the temperature reached 4.2 K: the first superconducting transition, and the first superconductor. This discovery revolutionized the field of condensed matter physics and led to powerful technologies such as maglev trains and magnetic resonance imaging (MRI). However, even 100 years after the discovery of the first superconductor, scientists are still discovering new materials that turn superconducting, and no general theory able to fully describe this phenomenon exists yet.

By Fabio Ansaloni, PhD student at the Centre for quantum devices group at the University of Copenhagen, Copenhagen, Denmark.

[1] F. Pobell, Matter and Methods at Low Temperatures, Springer (2007)

[2] M. A. Kasevich et al., Phys. Rev. Lett. 114, 143004 (2015)

[3] W. D. Phillips, Laser Cooling and Trapping of Neutral Atoms, Nobel Lecture (1998)


A Quantum Internet made of Diamonds

🤯 How cool is that? A Quantum Internet. Made of Diamonds.

We are constantly connected to the Internet. With our computers, our smartphones, our cars, our fridges (mine is not, yet, but you get the idea). In its very first days the Internet was a very rudimentary, yet revolutionary, connection between computers [1]. It enabled one computer on the network to send messages to any other computer on the network, whether it was directly connected to it (that is, with a cable) or not. Some of the computers on the network acted as routing nodes for the information, so that it could be directed toward its destination. In 1969 there were four nodes on the then-called ARPANET. By 1973 there were ten times as many. In 1981 the number of connected computers was more than 200. Last year the number of devices capable of connecting to the Internet was 8.4 billion (with a b!) [2].

Computers on their own are already great, but there is a whole range of applications that without a network infrastructure would be inaccessible. Do you see where I am going?

 

[Figure] “Internet Map” by Chris Harrison. The brighter the edges, the higher the number of connections. [http://www.chrisharrison.net/index.php/Visualizations/InternetMap]

A couple of posts ago Stephan and Yanick explained what a quantum computer is, how to make one and, most importantly, why we should bother to build one. If you have not read their posts yet I strongly suggest you do so, since they are two really nice reads. But if I had to shrink them to a handful of words: quantum computers will exploit the weird laws of micro(nano!)scopic objects to solve some problems way faster than any future normal computer. They will do so by encoding the information in quantum systems, which will therefore be quantum information. A Quantum Internet is a network capable of routing this quantum information between quantum computers. We can already foresee some nice applications for this quantum network, like establishing an inviolably secure communication link between any two nodes, connecting far-apart telescopes to take ultra-sharp images of stars and galaxies, or linking small quantum computers into a huge powerful one (a bit like cloud computing, but much stronger) [3]. In all likelihood the best ways to use these new technologies will come over time, with applications we cannot anticipate. For example, in the early 1960s the invention of the laser was welcomed as “a solution looking for a problem” [4]. Now we have lasers everywhere! I think that the same thing will happen for quantum computers and their internet.

One of the quirks of quantum information is that it cannot be copied. When you try to copy it, you irreversibly destroy the original information. This is, under the hood, what makes quantum communication so secure; but on the other hand, it could also make sharing quantum information excessively hard! If we had to rely on a single quantum system (a photon for example, a particle of light) travelling undisturbed across the globe, we might as well stop now.

Fortunately quantum mechanics offers us a solution to this problem: teleportation. It’s not like Star Trek, we can’t teleport people, but we can teleport quantum information. We can transfer the information stored in a single atom in Amsterdam to an electron in London, without reading the atom, without knowing what the information is and, crucially, without making it “physically” travel the distance. Of course this does not come for free, we have to pay a price, and that price is entanglement.

 

[Figure] “Quantum Teleportation” by xkcd [https://xkcd.com/465/]

Quantum mechanics predicts the possibility of a rather weird phenomenon. If you take two separate quantum objects, say two nanometre-sized M&M’s, and you make them interact in some particular way, you can make them entangled: the two M&M’s lose their individuality and can only be described as parts of a single two-M&M system. Let’s say that our nano-M&M’s can only be one of two colours: red or blue. A non-entangled scenario would be, for example, if the first was red and the second blue. Each M&M has its own colour, its own identity. Now let’s take the two M&M’s (one red and one blue), shuffle them a little bit, just to lose track of which one is which, and then send one to you and one to me. When you observe the colour of your M&M, you immediately get to know the colour of mine too! While this is interesting, this is not quantum entanglement. This is called correlation and it is not quantum at all. The two M&M’s had their colours all along; we simply didn’t know which one each of us had.

When you entangle the two M&M’s, they actually lose their own colour! You can make an entangled two-M&M system in which you know that the M&M’s will have different colours when you observe them, but until then their colours are not assigned. This effect is so weird that even great scientists believed it was too weird to be true [5]. Now we have the tools to prove with experiments that the effect is real [6], and we can exploit it to build new technologies, such as a quantum internet. The idea is to share entangled objects between the nodes of the network and protect their entanglement from the noisy non-quantum environment in which we live, so that when we need to send a quantum bit of information, we can spend the connection between the entangled objects to teleport the (qu)bit. But how can we share entanglement on this network?
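If you prefer numbers to M&M’s, here is a small Python sketch (my own toy illustration, not taken from [6]). The difference between classical correlation and entanglement only shows up when the two sides measure along different, rotated directions, which is exactly what a Bell test does: a classically correlated pair can never push the CHSH quantity S above 2, while a maximally entangled pair reaches 2√2 ≈ 2.83.

```python
import math

# Correlation E(a, b) between +1/-1 outcomes measured at analyser angles a and b.

def E_entangled(a, b):
    # Quantum prediction for a spin singlet: E = -cos(a - b).
    return -math.cos(a - b)

def E_classical(a, b):
    # A simple "hidden colour" model: each pair carries a preassigned direction,
    # and each side outputs the sign of the projection onto its own analyser.
    # Averaging over all preassigned directions gives the saw-tooth E = -1 + 2|a-b|/pi.
    d = (a - b) % (2 * math.pi)
    d = min(d, 2 * math.pi - d)
    return -1 + 2 * d / math.pi

def chsh(E):
    # Standard CHSH combination with the usual angle choices.
    a1, a2, b1, b2 = 0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
    return abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))

print(f"Classically correlated pair: S = {chsh(E_classical):.2f}  (never exceeds 2)")
print(f"Entangled (singlet) pair:    S = {chsh(E_entangled):.2f}  (2*sqrt(2) ≈ 2.83)")
```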

 

[Figure] Illustration of the difference between classical correlation and entanglement.

This is where our diamonds finally find a place. Diamonds are crystals made of carbon atoms arranged in a very compact way. Sometimes the crystal may have some defects, like an intruder atom (say nitrogen) or a missing carbon atom (what we call a vacancy). If we are lucky enough, these two defects happen to sit next to each other. Such a system is called an N-V center (nitrogen-vacancy).

N-V centers are one of the most promising candidates to act as nodes of a quantum network.

A node is made of three ingredients: a processor to handle information, a memory to temporarily store it and a link to the other nodes.

We use the spin of a pair of electrons localized around the N-V center as the processor of our node. We can read its state using lasers and manipulate it using microwave pulses. The information stored in this spin has a short lifetime: the system loses the information we store in it too quickly to serve as a reliable memory. Luckily enough, Nature provides us with a strong quantum memory not far from the N-V center. About 99% of the carbon atoms in nature are C12, which is spin-less: its nucleus does not have a spin. Most of the remaining carbons are C13, which does have a spin (due to the additional neutron in the nucleus). We can talk to these C13 atoms in the diamond thanks to their spin-spin interaction with the electrons in the N-V center. Since the C13 spin “feels” the electronic spin, we can manipulate the latter to perform operations on the former.

The last ingredient of the node is the ability to link it to other nodes. We do this by making the N-V center emit a photon that is entangled with the electronic spin (like the M&M’s). A second N-V center, in a second diamond, in a different place, does the same. Then, by making the two photons interact, we can transfer their entanglement to the two electronic spins, entangling the nodes [7]. This entanglement can then be used for quantum network applications.

 

[Figure] Left: Illustration of the lattice of a diamond with an N-V center [jameshedberg.com]. Right: Scanning electron microscope image of one of our diamonds. In each of these domes there is an N-V center. The electrical connections are used to deliver microwave pulses and other signals.

We are living through the second quantum revolution. We will not be just spectators of quantum mechanics, we will use it as a technology. In a couple of decades everybody will have access to quantum computers connected through a quantum internet, to design drugs, to optimize airports, to play videogames and who knows what else. Aren’t you excited? I certainly am.

[1] History of the Internet. Wikipedia. https://en.wikipedia.org/wiki/History_of_the_Internet

[2] Internet of things. Wikipedia. https://en.wikipedia.org/wiki/Internet_of_things

[3] The quantum internet has arrived (and it hasn’t). D Castelvecchi. Nature 554, 289-292 (2018)

[4] Beam: the race to make the laser. J Hecht. Optics and Photonics News 16(7), 24-29 (2005)

[5] Action at a distance. Wikipedia. https://en.wikipedia.org/wiki/Action_at_a_distance

[6] Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres. B Hensen et al. Nature 526, 682–686 (2015)

[7] Heralded entanglement between solid-state qubits separated by three meters. H Bernien et al. Nature 497, 86 (2013)

By Matteo Pompili, PhD student at the Hanson group at Delft University of Technology, Delft, The Netherlands.

Van der Waals Heterostructures: To Infinity and Beyond

Two weeks ago we witnessed an event that will probably remain in the history books; I am speaking about the successful test of the Falcon Heavy, which put SpaceX onto the front pages of pretty much every magazine that is even remotely interested in space technology.

To sum up, the test was an astonishing success, and it had a large impact on social media thanks to the eccentric personality of Elon Musk, the founder of SpaceX. In fact, the very first test of a new type of rocket is extremely risky (there is a high chance that everything will blow up on the launch platform), so no one offered to load a satellite on this run. Instead of simulating the load with concrete weights in the payload capsule, Musk decided to send into space his own cherry-red Tesla Roadster, playing David Bowie’s “Life on Mars” and with the first SpaceX spacesuit sitting in the driver’s seat as an astronaut. The images are literally astonishing and look like they were photo-edited, but apparently this is one of those times when reality overcomes imagination. From a practical point of view, this is the most powerful rocket currently in operation, it is extremely cheap (around $90 million per launch) and it has opened the door to a new space race.

 

[Figure] (a) The Tesla Roadster with the Earth in the background; (b) Falcon Heavy on the platform during the final tests; (c) the Falcon Heavy first stage, powered by 27 Merlin engines (image credit: SpaceX, http://www.spacex.com/)

 

On the wave of this excitement, I think it is interesting to focus our attention on how research on two-dimensional materials and Van der Waals heterostructures could also be useful for space exploration technologies. Two of the most pressing problems we have to solve to gain easier and broader access to our space neighbourhood are the efficiency of rockets at lifting heavy weights from the Earth’s surface, and energy harvesting once in space. The first is currently under heavy development (invigorated by the entry of private companies such as SpaceX, Blue Origin and Orbital ATK); the second is currently dominated by multi-junction inorganic solar cells, which gather sunlight to produce electricity.

Multi-junction cells hold the efficiency record, above 45%, but such high efficiency comes at the cost of increased complexity and manufacturing price: each of these elements is composed of several different solar cells, each absorbing in a specific range of wavelengths. Moreover, they need to be carefully engineered so that they can be folded during launch and fully extended and oriented once they reach their final destination. Last but not least, there is a growing demand for energy to sustain larger and more complex satellites, or even human missions.

 

[Figure] Left: schematic structure of a multi-junction solar cell; right: the solar panels on the International Space Station (ISS) (image credit: NASA, https://www.nasa.gov/)

 

This is the scenario in which two-dimensional materials, especially transition metal dichalcogenides (TMDs), could make a contribution. They have several desirable properties which make them excellent candidates for solar energy harvesting.

Light absorption: a single-layer TMD flake can absorb a remarkably large fraction of incident photons across a significant part of the visible spectrum [1]. This is a key property for a material to be employed in light-harvesting devices. Considering that these materials have a sub-nanometre thickness, their absorption-to-weight ratio is really promising for photovoltaic applications, especially in the space sector, where weight is a critical parameter.

Efficient carrier separation: putting two different TMDs in contact creates a heterojunction with a type-II band alignment. This means that the electrons will be confined in one material, while the holes (or electron vacancies) are confined in the other. In any heterojunction the recombination process (which is exactly the opposite of the photovoltaic effect) is a serious problem to overcome in order to convert the absorbed light into electrical current, and this spatial separation of the carriers helps to suppress it.

 

[Figure] Schematic sketch of the photovoltaic effect and the radiative recombination process in a heterojunction.

 

TMD heterobilayers show extremely efficient carrier separation: when a photon is absorbed, the two carriers (electron and hole) are split into the two different layers in less than 50 femtoseconds [2]. This property could be the key to massively increasing the efficiency of such devices.

Flexibility: since they are so thin and flexible, they can be integrated into many more elements of a spacecraft; instead of having big ships with huge solar panels, the cells could be built into the surface of the spacecraft modules themselves. This would optimize light collection by exploiting the entire useful surface. Also, thanks to their flexibility, new solutions can be developed to fold the solar panels compactly during launch.

Engineering materials and devices: adopting the same strategy used for multi-junction solar cells, many isolated heterobilayers can be stacked together to form a similar structure, in which each pair can be tailored at will to absorb most efficiently in a specific spectral region. This would maximize the energy collected and therefore the power gained; moreover, the overall thickness of the device remains negligible thanks to the two-dimensional nature of these materials.

To conclude, we are at the beginning of a new era of space exploration that can offer unprecedented benefits in countless fields, from global marketing to fundamental research, from resource access to aerospace engineering. In this space race, two-dimensional materials can also express their full potential and lead to a major breakthrough which will bring us a step closer to the stars.

 

Per aspera ad astra

 

By Alessandro Catanzaro, PhD student at the LDSD group at University of Sheffield, Sheffield, United Kingdom.

 

  1. Bernardi M, Palummo M, Grossman J.C, Nano Lett. 2013; 13(8):3664-3670. doi:10.1021/nl401544y.
  2. Yu Y, Hu S, Su L, et al. 2014. doi:10.1021/nl5038177.

Acknowledging and dealing with mental health issues as a PhD

The exotic idiosyncrasies of topological insulators are fascinating and would no doubt prove fertile ground for this blog post. Nevertheless, I want to write about something more important and urgent for our generation of young scientists: the mental health issues that many PhD students struggle with but seldom mention. Graduate school is an exciting time as we, young scientists, work on the cutting edge of science, testing new and exciting ideas. In striving for excellence, however, many of us struggle with balancing our academic and personal lives, as I have since I embarked on my academic career.

 

Constant high stress levels can severely affect our work, our physical and emotional health and our social life. There is a tacit understanding that a PhD is difficult, but, much like for a boastful sleep-deprived marine or a medical intern on their 30th hour in the emergency ward, there is also an unhealthy culture discouraging many of us from speaking openly about our “weaknesses”. As a PhD student, job anxiety and high stress levels are seen as part of being an adult and having a responsible job. We do not want the colleagues with whom we are racing for publications to find out that we may not be doing so well personally. We meet them with confident, bright faces, hiding our ever-looming fears of uncertain futures and failures inside.

 

In the rare studies conducted at US and Belgian universities (see the reference links at the end), up to 50% of graduate students from different academic fields were reported to be facing some form of mental health problem. Most of them reported one or more of the following:

  • low self-esteem
  • constantly feeling unhappy
  • being depressed
  • losing sleep because of anxiety
  • loss of appetite and concentration
  • feeling like they have poor control on their job and its progress
  • not being able to overcome difficulties
  • failing to enjoy day-to-day activities
  • low career optimism and feeling hopeless
  • feeling scared of making decisions
  • feeling low in energy levels
  • failing at maintaining a life-work balance

 

A big fraction admitted to having thought about suicide. This puts a large share of PhD students at risk of developing severe psychiatric problems. However, we tend to casually accept feeling depressed and constantly dealing with high levels of stress as part of “being a PhD student”.

[Figures] Figures taken from the Berkeley report [http://ga.berkeley.edu/wp-content/uploads/2015/04/wellbeingreport_2014.pdf]

Although self-reported data can be unreliable, particularly with respect to recall of activities over long time spans, it is still shocking to see a bimodal distribution of average sleep with peaks at 2.5 and 5.7 hours, and many students reporting a high frequency of depressive symptoms.

 

I think a huge change can be made just by acknowledging that these are mental health issues that are not negligible and that can be treated. We really need to change our attitude of looking down on colleagues who confess to having a harder time than us. We tend to smirk at them, thinking they are not as smart.

 

We need to realize that there can be many more complicated factors at play. For example:

  • having to support a family and larger financial pressures
  • starting a PhD in a foreign country/group can be very intimidating. Other than adapting to a new culture, it may also involve relearning social roles and their hierarchies. For example, the role of a teacher/professor may be perceived very differently in different cultures.
  • female researchers may find themselves dealing with an implicit culture of misogyny, where they might not only feel greater pressure to prove themselves but also have considerably less room to make mistakes. Consequently, they may end up with depression, low self-esteem and impostor syndrome.
  • lacking a social support network or friends
  • balancing research, paying bills and a personal life etc

 

We can perhaps learn from Martin Seligman’s theory of “learned helplessness”, in which animals subjected to uncontrollable stress or punishment eventually gave up even trying to solve their problems and showed much worse cognitive and social skills down the line. We often feel helpless too, when we cannot control the results of our experiments or fail at the multiple tasks we have to manage simultaneously as PhD students. Professors (or PIs) may well be oblivious to the tumultuous events happening inside their students, and most of us will be afraid to take our PIs into confidence regarding these issues. However, many universities now offer psychological counselling and help. We should realize that our mere understanding can play a crucial role in helping someone deal with these problems. So, kindly support your PhD friends who are struggling and encourage them to get external support, as they themselves may feel helpless or trapped in their own predicament.

 

[1] http://ga.berkeley.edu/wp-content/uploads/2015/04/wellbeingreport_2014.pdf

[2] http://www.tandfonline.com/doi/abs/10.1080/19325037.2013.764248#.UvEd27T_cX9

[3] https://www.sciencedirect.com/science/article/pii/S0048733317300422

[4] http://www.postdocjournal.com/file_journal/614_91180693.pdf

 

By Aroosa Ijaz, PhD student at the Ensslin Nanophysics group at ETH Zurich, Zurich, Switzerland.

Making quantum computers with spin qubits

In the previous blog post, Yanick explained what a quantum computer is and motivated why it is a useful tool. In this blog article we will stay on the same theme and talk about what is needed to make a quantum computer. What your quantum computer will look like all depends on the chosen qubit implementation. Currently, there are a few qubit implementations that look quite promising; the most prominent examples are superconducting qubits, ion traps and spin qubits. In this article, we will focus on the latter, since that’s the one I’m working on. All the platforms mentioned above fulfill the so-called DiVincenzo criteria. These criteria, defined in 2000 by David DiVincenzo, need to be fulfilled by any physical implementation of a quantum computer:

 

  1. A scalable physical system with well characterized qubits.
  2. The ability to initialize the states of the qubits to a simple fiducial state, such as |000⟩.
  3. Long relevant coherence times, much longer than the gate operation time.
  4. A “universal” set of quantum gates.
  5. A qubit-specific measurement capability.

 

In this article we will go through these criteria and show how spin qubits fulfill them, but before doing that, let’s first introduce spin qubits.

 

Spin qubits are qubits where the information is stored in the spin angular momentum of an electron. The spin of a single electron can be either in the spin-down (low energy) or the spin-up (high energy) state. Compared to a classical bit, spin down is the analogue of a zero and spin up of a one.


 

One of the first steps in a spin qubit experiment is to obtain a single electron that is isolated from its environment. The isolation from the environment is needed to make sure that the electron does not undergo unwanted interactions that would affect its quantum state in an uncontrollable manner.

Isolated single electrons are obtained by shaping a two-dimensional electron gas (2DEG). When stacking different materials on top of each other, one can, under certain conditions, obtain a 2DEG. A two-dimensional electron gas can be seen as a plane in which electrons are free to move wherever they want. One could compare the 2DEG with a very thin metal layer.


Once we have a plane of electrons, the trick is to push away or attract electrons in this plane using gate electrodes placed on top of the 2DEG. This allows you to create a row of single, isolated electrons. Each of these electrons forms one spin qubit on the chip.


Now that we have covered the basic concepts of spin qubits, we can return to the discussion of why spin qubits form a good platform for quantum computation. In the following I will go through the DiVincenzo criteria one by one and show why and how spin qubits satisfy them.

 

A scalable physical system with well characterized qubits

 

This criterion consists of two parts: a scalable and a well-characterized system. The characterized part means that you know how to simulate the qubit (e.g. you know its Hamiltonian). This knowledge is needed to design the required gate operations. In the case of spin qubits, the system is well described by the Fermi-Hubbard model.

The second requirement of this criterion is scalability, where scalable means that you can extend your qubit control to millions of qubits.

A first thing to look at could be the size of one qubit. For a spin qubit this corresponds to an area of roughly 70 nm × 70 nm. This means one could easily put more than a billion qubits on a chip of 1 cm by 1 cm.
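A quick back-of-the-envelope check of that claim, using only the 70 nm footprint quoted above:

```python
qubit_pitch = 70e-9   # assumed spin-qubit footprint from the text: 70 nm x 70 nm
chip_side = 1e-2      # 1 cm, in metres

qubits_per_row = chip_side / qubit_pitch
total_qubits = qubits_per_row ** 2
print(f"{qubits_per_row:.0f} qubits per row  ->  ~{total_qubits:.1e} qubits per cm^2")
# ≈ 140,000 per row, i.e. roughly 2e10 qubits: comfortably more than a billion.
```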

However, this is not the big challenge. Problems arise when you start thinking about the control of the qubits. In the lab we observe that each qubit has its own personality: each qubit needs different gate voltages, requires a different amount of power to operate, has a different energy splitting between spin up and spin down, and so on. The reason every qubit has its own character is that we work with imperfect materials and an imperfect fabrication process. This is one of the big reasons we are collaborating with industrial chip manufacturers (e.g. Intel). Aside from the fabrication issues, there are also very practical problems, for instance how to connect all the control electronics to the qubits. For example, to have full control of a 5-qubit system, you need 40+ connections, 14 of which are high-bandwidth connections. Very fast signals need to be sent through these lines to the qubits, so they have to be connected by dedicated (and bulky) coaxial cables. The picture below shows what a chip carrier for a 5-qubit chip looks like and where the coaxial cables need to be connected. Note also that such a chip needs to operate at extremely low temperatures, which also means you have a limited thermal budget (in the μW range). A typical operating point for spin qubits is 10 mK, which is close to absolute zero.

So are spin qubits scalable? The answer is: we don’t know. Until now, people have only focused on small systems (a few qubits). The same is true for superconducting qubits and ion traps.

 

[Figure] Chip carrier for a 5-qubit chip, showing where the coaxial cables need to be connected.

The ability to initialize the states of the qubits to a simple fiducial state, such as |000⟩


When working with a quantum computer, one could imagine that, before running any algorithm, the user should be able to start from a well-known quantum state. Typically, the lowest energy state is taken (e.g. 3 qubits in 0, |000⟩). For spin qubits one can achieve this using the Elzerman method. The idea is to load one electron with spin down from the reservoir into the quantum dot. The reservoir is an area next to the qubits filled with a lot of electrons. Note that when you put a lot of electrons together, there is no longer a difference in energy between spin up and spin down (the electrons interact strongly with each other).

Now the idea is to align the energy level of the spin qubit (yes, you can control those!) with the reservoir in such a way that only an electron with spin down can tunnel in.

 


Long relevant coherence times, much longer than the gate operation time


In any quantum system there is a characteristic time that tells you how long a quantum state will be maintained if left alone. This time is called the coherence time. It is important that the coherence time is much longer than a typical gate time (e.g. the time to change a qubit from 0 to 1 and vice versa). When this is not the case, all the quantum information will be lost before the algorithm is done. The loss of quantum information is mainly due to the interaction of the qubits with their environment.

The coherence time of spin qubits is mainly limited by the nuclear spins in the silicon. When considering a spin qubit, one can imagine the electron as a charge cloud sitting at the Si/SiGe interface. This charge cloud overlaps with the nuclei of the silicon atoms, and these nuclei can also possess a spin. When one of the nuclear spins flips (even at low temperatures there is enough energy to make this happen at random), the energy of the spin qubit changes a little. This causes random energy fluctuations, which affect the coherence time of the qubit. In practice, we try to minimize this effect by using isotopically purified silicon (e.g. 28Si, which has no nuclear spin).

The coherence time of spin qubits can be quite long (e.g. 100 μs with 28Si), while a single quantum operation takes about 100 ns or less. This means there is a factor of 1000 between the gate time and the coherence time.

A “universal” set of quantum gates


A quantum algorithm typically consists of a set of operations, let’s call them U1, U2, … . These operations represent changes to the state of the qubits. One can think of them as the quantum mechanical equivalent of the classical AND, NAND, OR, … gates. In the quantum mechanical case, there are two kinds of gates: single-qubit gates and two-qubit gates.

Single-qubit gates, as the name suggests, are used to manipulate a single qubit. For spin qubits, this is done using microwave photons. The energy of these photons corresponds to the energy difference between the spin-up and spin-down states. When both energies are exactly matched, one can controllably change the spin from up to down, or to any combination in between.

Two-qubit gates are gates that entangle two qubits. This means that both qubits acquire a kind of awareness of each other. In practice, making two-qubit gates is quite straightforward: this can be done by pulsing the barrier gate that sits between the two qubits. One can visualize this interaction as pushing the electrons closer together, so that they interact with each other. One of the topics I’m actively working on is making these kinds of gates very reliable. At the moment these gates are often plagued by a lot of noise; in practice, this means that the gate operation is no longer reliable. To solve this, we are exploring different regimes in which a two-qubit gate can be made. The interaction between the qubits is, for example, highly dependent on the energy difference between the two qubits you want to entangle.

A gate set is universal when you can perform arbitrary single-qubit gates and an entangling two-qubit gate on all the qubits. As just described, this is the case for spin qubits.
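To make the idea of a universal gate set a little more concrete, here is a small numpy sketch (a generic illustration of mine, not a model of the actual spin-qubit pulses): a resonant microwave burst acts as a single-qubit rotation, and combining such rotations with one entangling two-qubit gate, here taken to be a CZ gate, lets you build other gates such as a CNOT.

```python
import numpy as np

# Single-qubit rotation about the x-axis by angle theta (what a resonant
# microwave burst of the right duration effectively does to a spin).
def rx(theta):
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate
CZ = np.diag([1, 1, 1, -1]).astype(complex)                   # entangling two-qubit gate
I2 = np.eye(2, dtype=complex)

# A CNOT is a CZ sandwiched between Hadamards on the target qubit.
CNOT = np.kron(I2, H) @ CZ @ np.kron(I2, H)
print(np.round(CNOT.real, 3))                     # the familiar CNOT matrix

# A resonant pi-pulse flips |0> to |1> (up to a global phase).
print(np.round(rx(np.pi) @ np.array([1, 0]), 3))
```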

A qubit-specific measurement capability

When a quantum algorithm is done, you want to know what the outcome is. To know the outcome, you need to measure the state of the system (e.g. all or a part of the qubits that were used for the experiment). When performing a quantum measurement, the wavefunction of the qubit collapses. The wavefunction of one qubit could look like:

 

|ψ⟩ = √0.1 |0⟩ + √0.9 |1⟩

 

This function describes the probabilities of finding the electron in 0 or 1: the probabilities are the squares of the coefficients, so in this case there would be a 10% chance of measuring 0 and a 90% chance of measuring 1. The collapse of the wavefunction means that, after a measurement, the state becomes 0 or 1, depending on which outcome you measured. A quantum algorithm typically starts by making a full superposition. When you have a lot of qubits, this means that you are in all the possible states at the same time (e.g. for 2 qubits, |ψ⟩ = (|00⟩ + |01⟩ + |10⟩ + |11⟩)/2). Then some computation is done, which should yield one result. The power of quantum computation is that you can evaluate all the possibilities at the same time. The hard thing is to formulate your problem in such a way that you have a high likelihood of measuring the right outcome and a low probability of measuring wrong outcomes.
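Here is a tiny simulation of these measurement statistics (a generic sketch, not tied to any particular hardware):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# |psi> = sqrt(0.1)|0> + sqrt(0.9)|1>
psi = np.array([np.sqrt(0.1), np.sqrt(0.9)])
probabilities = np.abs(psi) ** 2          # Born rule: probabilities are |amplitude|^2

outcomes = rng.choice([0, 1], size=10_000, p=probabilities)
print("Fraction of 0s:", np.mean(outcomes == 0))   # ≈ 0.10
print("Fraction of 1s:", np.mean(outcomes == 1))   # ≈ 0.90
# After each individual measurement the qubit is left in the state that was measured.
```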

Measurements on spin qubits can also be done using the Elzerman method; it is quite similar to the initialization. While manipulating your qubits there is a large barrier between the reservoir and the spin qubit (the single electron). For readout, the levels are aligned such that an electron in the spin-up state can tunnel out into the reservoir. When this happens, there is for a short while no net charge at the place where the qubit was. After this, a spin-down electron tunnels back in, as we saw before. The presence or absence of charge is a property we can measure: a charge detector is placed close to the spot where the electron can tunnel out. This sensor registers a change in signal whenever the qubit was in spin up. In the case of spin down, the charge sensor does not respond, since no electron jumps out. Note that this readout method also directly re-initializes the qubit in spin down.


 

In conclusion, going through these 5 criteria shows that spin qubits have great potential for quantum computation. The only point where spin qubits currently fall short is scalability. But let’s note that the field of quantum information is still in its infancy and there is still a lot of room for new developments.

 

By Stephan Philips, PhD student at the Vandersypen group at Delft University of Technology, Delft, The Netherlands.

Quantum Computation and Simulation: A Beginner’s Guide

Quantum Computation in a nutshell

 

In recent years, the search for topological states of matter has become one of the major subjects of research in the condensed matter physics community. The rapid growth of interest in this field has many reasons. For one, there is an intellectual appeal to it: if the predictions of the theory are true, it will be an example of how powerful and abstract mathematical concepts can be used to describe the behavior of a certain class of physical systems. Secondly, there is a practical interest in the field, since it constitutes the backbone of a potential topological quantum computer. As for myself, some of my colleagues in the Spin-Nano network do fundamental research on topics with potential applications in quantum computation, which is why, in this blog post, I want to give a brief introduction to how a quantum computer works and why it could be useful.

 

Firstly, how does a classical computer work? In a classical computer, information is stored in a binary fashion, and the smallest unit of information is called a binary digit, or bit for short. The state of a bit is either 0 or 1. In a computer, any number or letter is represented by a string of bits. The task of a computer is then to take a string of bits as input, perform some action on it, e.g. adding two strings of bits, and then give back some output, also in the form of bits.

Physically, most bits are represented by an electrical voltage or current pulse. Complex electrical circuits allow these bits to be manipulated through the implementation of logical gates (AND, OR, NOT, etc.). As the data being manipulated gets bigger and the manipulations more complex, the computation time increases. This problem is tackled by using ever more circuits. In order for the physical size of the computer not to become the size of a house or even larger, the circuits are required to get smaller and smaller. Nowadays, on a typical computer chip, about 100 million transistors are packed into 1 mm², and the distance between two transistors is around 10 nanometers. It is apparent that one day a fundamental limit will be reached, where transistors cannot get any smaller, and therefore the computational power we can achieve has an upper bound.

 

A quantum computer works fundamentally differently from a classical computer, and based on theoretical considerations it is expected that it will be able to solve certain mathematical problems much more efficiently. Let’s first have a look at the basic idea behind a quantum computer. As the name suggests, a quantum computer exploits the peculiar laws of quantum physics. In a quantum computer, the smallest unit of information is stored in a so-called quantum bit – or qubit for short.

 

In the abstract formalism of quantum mechanics, the state of a system is represented by a vector in an n-dimensional vector space. The dimension of this vector space is given by the properties of the system. A popular example is the spin of a free electron, i.e. an electron in empty space. The value of the spin projection on a given axis is quantized to two values and thus the vector space is two-dimensional. In this vector space, there exist two mutually orthogonal (state) vectors associated with the two possible values of the spin projection. The laws of quantum mechanics state that the system doesn’t have to be exclusively in one of the two states, but can be in any linear combination of the two. Systems with states in a two-dimensional vector space are called two-level systems, and these take on a central role in quantum computation: a qubit is a quantum mechanical two-level system. Now, in a classical computer, the bit is in one of two possible states, 0 or 1. As stated above, a qubit can be in a superposition of two distinct states, i.e. 0 and 1 at the same time. In the quantum circuit model, the working principle of a quantum computer is similar to that of a classical computer: take a string of qubits as input, manipulate the qubits by a combination of logical gates (so-called quantum gates), and read out the result in the form of qubits. There are at least two tasks a quantum computer can do more efficiently than a classical computer: integer factorization of large numbers and quantum simulation. In the last part of this post, we will focus on quantum simulation.

 

Quantum Simulation and engineering of new drugs

 

Whenever physicists make predictions about physical systems, they first develop a model that describes this system to a certain degree of accuracy, i.e. the complexity is reduced and maybe, due to our ignorance, some properties are neglected. Then, in order to know how the model behaves, mathematical equations have to be solved. In the case of quantum mechanics, the underlying equation is the Schrödinger equation:

 

iħ ∂/∂t |ψ⟩ = H |ψ⟩

 

In this formalism, ψ (psi) represents the state of the system and H is called the Hamiltonian, which is the central entity of every quantum mechanical system. Let’s assume one is interested in the properties of a molecule consisting of several atoms. The Hamiltonian contains information on the mass and charge of the atoms, and on how the atoms interact with each other. Solving the Schrödinger equation then allows one to predict how, once its initial state is specified, the molecule evolves with time and what energetic states it can be in. The problem is that exact solutions are available only for the simplest problems, such as the hydrogen atom and the quantum harmonic oscillator. For more complex physical systems, physicists have to make approximations and solve the Schrödinger equation numerically with classical computers. Already describing a relatively small system, e.g. one consisting of thirty spin-1/2 particles, requires the manipulation of 2³⁰ × 2³⁰ matrices, which exceeds the capability of current supercomputers. Since the quantum computer itself is a quantum mechanical system, its temporal evolution also follows the laws of quantum mechanics, and it could be used to simulate the dynamics of another quantum system. The reason is that any physically relevant Hamiltonian can be mapped onto a Hamiltonian of a number of qubits. The number of qubits is usually larger than the number of physical particles that one intends to simulate. Schematically, the steps for simulating a quantum system are as follows:

 

1.) Represent the Hamiltonian of the system by the Hamiltonian of n qubits; this includes fine-tuning the interactions between the qubits in order for them to represent the original system faithfully.

2.) Choose an initial state of the system that shall be simulated, translate it to the qubit system and prepare the qubits accordingly.

3.) Let the qubit system evolve in time.

4.) Read out the final state and translate it back to the original system (a toy version of these steps is sketched in the code below).
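Just to make these steps tangible, here is a toy version in Python for the smallest possible example: two interacting spin-1/2 particles, whose Hamiltonian maps directly onto two qubits. On a real quantum simulator, step 3 would be carried out by the qubits themselves; here a classical computer does it, which is exactly what stops working once the system grows to a few dozen spins (the state vector of thirty spins already has 2³⁰ ≈ 10⁹ complex entries).

```python
import numpy as np
from scipy.linalg import expm

# Step 1: Hamiltonian of two interacting spin-1/2 particles, mapped onto two qubits.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
J = 1.0   # exchange coupling (arbitrary units, hbar = 1)
H = J * (np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z))   # Heisenberg interaction

# Step 2: choose and prepare an initial state, here |up, down> = |01>.
psi0 = np.zeros(4, dtype=complex)
psi0[1] = 1.0

# Steps 3 and 4: evolve, |psi(t)> = exp(-i H t) |psi(0)>, then "read out"
# the probability that the two spins have swapped, i.e. the weight on |10>.
for t in (0.0, 0.2, 0.4, 0.6, 0.785):
    psi_t = expm(-1j * H * t) @ psi0
    p_swapped = abs(psi_t[2]) ** 2
    print(f"t = {t:.3f}:  P(spins swapped) = {p_swapped:.2f}")
```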

 

Of course, this is very schematic and a lot of challenges have to be overcome in order to build a universal quantum simulator, but as illustrated, the basic idea is rather simple. As it offers so many new possibilities, the expectations for such a device are high. For example, when designing new drugs, a quantum simulator could potentially allow for the faster development of more precise drugs causing fewer side effects. A quantum simulator could simulate how a new drug interacts with the cells of a tumour on a molecular level, eliminating the need to guess a model that describes the working principle.

 

A universal quantum simulator that can simulate large physical systems is not yet within reach, but a lot of effort is being put into its development and we have witnessed several breakthroughs over the last few years.

 

By Yanick Volpez, PhD student at the Condensed Matter and Quantum Computing group at the University of Basel, Basel, Switzerland