Making quantum computers with spin qubits

In the previous blog post, Yannick explained what a quantum computer is and why it is a useful tool. In this blog article we will stay on the same theme and talk about what is needed to make a quantum computer. What your quantum computer will look like depends entirely on the chosen qubit implementation. Currently, there are a few qubit implementations that look quite promising; the most prominent examples are superconducting qubits, ion traps and spin qubits. In this article we will focus on the latter, since that's the one I'm working on. All the platforms mentioned above fulfill the so-called DiVincenzo criteria. These criteria, defined in 2000 by David DiVincenzo, need to be fulfilled by any physical implementation of a quantum computer:

 

  1. A scalable physical system with well characterized qubits.
  2. The ability to initialize the states of the qubits to a simple fiducial state, such as |000⟩.
  3. Long relevant coherence times, much longer than the gate operation time.
  4. A “universal” set of quantum gates.
  5. A qubit-specific measurement capability.

 

In this article we will go through each of these criteria and show how spin qubits fulfill them, but before doing that, let's first introduce spin qubits.

 

Spin qubits are qubits in which the information is stored in the spin angular momentum of an electron. The spin of a single electron can either be in the spin-down (low energy) or in the spin-up (high energy) state. Compared to a classical bit, spin down is the analogue of a zero and spin up of a one.


 

One of the first steps in a spin qubit experiment is to obtain a single electron that is isolated from its environment. This isolation is needed to make sure that the electron does not undergo unwanted interactions that would affect its quantum state in an uncontrollable way.

Isolated single electrons are obtained by shaping a two-dimensional electron gas (2DEG). When stacking different materials on top of each other, one can, under certain conditions, obtain a 2DEG. A two-dimensional electron gas can be seen as a plane in which electrons are free to move wherever they want; one could compare it to a very thin metal layer.


Once we have a plane of electrons, the trick is to push away or attract electrons in this plane using gate electrodes placed on top of the 2DEG. This allows you to make a row of single, isolated electrons, where each electron forms one spin qubit on the chip.


Now that we have covered the basic concepts of spin qubits, we can return to the question of why spin qubits can form a good platform for quantum computation. In the following I will go through the DiVincenzo criteria one by one and show why and how spin qubits satisfy them.

 

A scalable physical system with well characterized qubits

 

This criterion consists of two parts: a scalable system and a well-characterized one. The characterized part means that you know how to model the qubit (e.g. you know its Hamiltonian). This knowledge is needed to design the required gate operations. In the case of spin qubits, the system is well described by the Fermi-Hubbard model.
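For the curious reader, a minimal form of this model for an array of quantum dots can be written down explicitly (the symbols ε_i for the dot energies, t_ij for the tunnel couplings and U_i for the on-site charging energies are my notation, not something defined in this post):

H = Σ_i,σ ε_i n_iσ − Σ_⟨i,j⟩,σ t_ij (c†_iσ c_jσ + h.c.) + Σ_i U_i n_i↑ n_i↓

Here c†_iσ creates an electron with spin σ on dot i and n_iσ counts the electrons on that dot. The gate voltages mentioned below tune ε_i and t_ij, which is exactly what makes this model so convenient for designing gate operations.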

The second requirement of this criterion is scalability, where scalable means that you can extend your qubit control to millions of qubits.

A first thing to look at could be the size of one qubit. For a spin qubit this corresponds to an area of roughly 70 nm × 70 nm. This means one could easily put a billion qubits on a chip of 1 by 1 cm.
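As a quick sanity check of that number: 1 cm is 10⁷ nm, so a 1 cm edge fits about 10⁷ / 70 ≈ 1.4×10⁵ qubits, and a full 1 cm × 1 cm chip fits roughly (1.4×10⁵)² ≈ 2×10¹⁰ sites of 70 nm × 70 nm, comfortably more than a billion, at least if one ignores all the wiring and control overhead discussed next.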

However, this is not the big challenge. Problems arise when you start thinking about the control of the qubits. In the lab we observe that each qubit has its own personality: every qubit needs different gate voltages, a different amount of power to operate, a different energy splitting between spin up and spin down, and so on. The reason every qubit has its own character is that we work with imperfect materials and an imperfect fabrication process. This is one of the big reasons we are collaborating with industrial chip manufacturers (e.g. Intel). Aside from the fabrication issues, there are also very practical problems, for instance how to connect all the control electronics to the qubits. For example, to have full control of a 5-qubit system, you need 40+ connections. 14 of these are high-bandwidth connections, which means that very fast signals need to be sent through these lines to the qubits; they therefore require dedicated (and bulky) coaxial cables. The picture below shows what a chip carrier for a 5-qubit chip looks like and where the coaxial cables need to be connected. Note also that such a chip needs to operate at extremely low temperatures, which also means you have a limited thermal budget (e.g. in the μW range). A typical operating point for spin qubits is 10 mK, which is close to absolute zero.

So are spin qubits scalable? The answer is: we don't know. Until now, people have only focused on small systems (a few qubits). The same is true for superconducting qubits and ion traps.

 


The ability to initialize the states of the qubits to a simple fiducial state, such as |000⟩

 

 

 

When working with a quantum computer, one could imagine that, before running any algorithm, the user should be able to start from a well-known quantum state. Typically the lowest-energy state is taken (e.g. 3 qubits in 0: |000⟩). For spin qubits one can achieve this using the Elzerman method. The idea is to load one electron with spin down from the reservoir into the quantum dot. The reservoir is an area next to the qubits filled with a lot of electrons. Note that when you put a lot of electrons together, there is no longer an energy difference between spin up and spin down (the electrons interact strongly with each other).

Now the idea is to align the energy level of the spin qubit (yes, you can control those!) with the reservoir in such a way that only an electron with spin down can tunnel in.
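In terms of energies, with E↓ and E↑ for the two spin levels of the dot and E_F for the Fermi energy of the reservoir (my notation, not something defined in this post), the loading condition is roughly E↓ < E_F < E↑: the spin-down level lies below the filled states of the reservoir, so an electron can tunnel into it, while the spin-up level lies above them and stays empty.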

 


Long relevant coherence times, much longer than the gate operation time

 

 

In any quantum system there is a characteristic time that tells you how long a quantum state will be maintained if left alone. This time is called the coherence time. It is important that the coherence time is much longer than a typical gate time (e.g. the time to change a qubit from 0 to 1 and vice versa). When this is not the case, all the quantum information will be lost before the algorithm is done. The loss of quantum information is mainly due to the interaction of the qubits with the environment.

The coherence time of spin qubits is mainly limited by the nuclear spins in silicon. When considering a spin qubit, one can picture the electron as a charge cloud sitting at the Si/SiGe interface. This charge cloud overlaps with the nuclei of the silicon atoms, and these nuclei can also possess a spin. When one of the nuclear spins flips (even at low temperatures there is enough energy to make this happen at random), the energy of the spin qubit changes a little. This causes random energy fluctuations, which affect the coherence time of the qubit. In practice, we try to minimize this effect by using isotopically purified silicon (e.g. 28Si, which has no nuclear spin).

The coherence time of spin qubits can be quite long (e.g. 100 μs with 28Si), while a single quantum operation takes about 100 ns or less. This means there is a factor of 1000 between the gate time and the coherence time.

A “universal” set of quantum gates

 

 

A quantum algorithm typically consists of a set of operations, let's call them U1, U2, … . These operations represent changes to the state of the qubits. One can think of them as the quantum mechanical equivalent of classical AND, NAND, OR, … gates. In the quantum mechanical case, you have two kinds of gates: single-qubit gates and two-qubit gates.

Single-qubit gates, as the name suggests, are used to manipulate a single qubit. For spin qubits, this is done using microwave photons. The energy of these photons corresponds to the energy difference between the spin-up and spin-down states. When both energies are exactly matched, one can controllably change the spin from up to down, or to any combination in between.
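To make this a bit more concrete, here is a minimal numerical sketch of such a resonant single-qubit rotation. It treats the spin as an ideal two-level system driven exactly on resonance (in the rotating frame the drive then acts like a rotation about the x-axis); the function names and parameters are mine, and real experiments of course involve pulse shapes, detunings and noise that are ignored here.

```python
import numpy as np

# Basis: |0> = spin down, |1> = spin up
sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-X

def resonant_rotation(theta):
    """Ideal on-resonance drive: rotation by angle theta about the x-axis."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sx

psi0 = np.array([1, 0], dtype=complex)            # start in spin down (|0>)
psi_pi = resonant_rotation(np.pi) @ psi0          # a "pi-pulse" flips the spin
psi_half = resonant_rotation(np.pi / 2) @ psi0    # a "pi/2-pulse" makes a 50/50 superposition

print(np.abs(psi_pi) ** 2)    # -> [0, 1]
print(np.abs(psi_half) ** 2)  # -> [0.5, 0.5]
```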

Two-qubit gates are gates that entangle two qubits. This means that both qubits acquire a kind of awareness of each other. In practice, making two-qubit gates is quite straightforward: it can be done by pulsing the barrier gate that sits between the two qubits. One could visualize this interaction as pushing the electrons close together so that they interact with each other. One of the topics I'm actively working on is making these kinds of gates very reliable. At the moment these gates are often plagued by a lot of noise, which in practice means that the gate operation is no longer reliable. To solve this, we are exploring different regimes in which you can make a two-qubit gate. The interaction between the qubits is, for example, highly dependent on the energy difference between the two qubits you want to entangle.
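As an illustration of the kind of interaction at play, below is a small numerical sketch of an exchange-based two-qubit gate. It assumes the textbook Heisenberg exchange Hamiltonian with a strength J that is switched on for a time t (pulsing the barrier gate effectively controls J); the concrete numbers and function names are mine and purely illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def exchange_unitary(J, t):
    """Time evolution under Heisenberg exchange H = (J/4)(XX + YY + ZZ), with hbar = 1."""
    H = (J / 4) * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))
    return expm(-1j * H * t)

# Switching the exchange on such that J*t = pi/2 implements sqrt(SWAP),
# an entangling two-qubit gate.
U = exchange_unitary(J=1.0, t=np.pi / 2)
psi0 = np.array([0, 1, 0, 0], dtype=complex)   # |01>: first spin down, second spin up
psi = U @ psi0
print(np.round(np.abs(psi) ** 2, 3))           # -> [0, 0.5, 0.5, 0]: weight shared between |01> and |10>
```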

A gate set is universal when you can perform arbitrary single-qubit gates and an entangling two-qubit gate on all the qubits. As just described, this is the case for spin qubits.

A qubit-specific measurement capability

When a quantum algorithm is done, you want to know what the outcome is. To know the outcome, you need to measure the state of the system (i.e. all or a part of the qubits that were used for the experiment). When performing a quantum measurement, the wavefunction of the qubit collapses. The wavefunction of one qubit could look like:

 

|ψ⟩ = √0.1 |0⟩ + √0.9 |1⟩

 

This state describes the probabilities of the electron being found in 0 or 1: the probability of each outcome is the square of the amplitude in front of it, so here there is a 10% chance of measuring 0 and a 90% chance of measuring 1. The collapse of the wavefunction means that after a measurement the state becomes 0 or 1, depending on which outcome you measured. A quantum algorithm typically starts by making a full superposition. When you have a lot of qubits, this means that you are in all the possible states at the same time (e.g. for 2 qubits, |ψ⟩ = ½(|00⟩ + |01⟩ + |10⟩ + |11⟩)). Then some computation is done, which should yield one result. The power of quantum computation is that you can evaluate all the possibilities at the same time. The hard part is to formulate your problem in such a way that you have a high likelihood of measuring the right outcome and a low probability of measuring wrong outcomes.
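To connect the amplitudes to what you actually see in the lab, here is a tiny sketch of sampling repeated measurements from the single-qubit state above; it is simply classical random sampling from the squared amplitudes, with illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# |psi> = sqrt(0.1)|0> + sqrt(0.9)|1>: the amplitudes are square roots of the probabilities
psi = np.array([np.sqrt(0.1), np.sqrt(0.9)])
probs = np.abs(psi) ** 2                        # -> [0.1, 0.9]

# Each run of the algorithm ends in a single measurement giving 0 or 1;
# repeating the experiment many times reproduces the 10% / 90% statistics.
outcomes = rng.choice([0, 1], size=10_000, p=probs)
print(np.bincount(outcomes) / outcomes.size)    # roughly [0.1, 0.9]
```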

Measurements on spin qubits can be done using the Elzerman method, which is quite similar to the initialization. While manipulating your qubits there is a big barrier between the reservoir and the spin qubit (single electron). When the electron is in spin up, it is able to tunnel out into the reservoir. When this happens, for a short while there is no net charge at the place where the qubit was. Afterwards, a spin-down electron tunnels back in, as we saw before. The presence or absence of charge is a property we can measure: a charge detector is placed close to the spot where the electron can tunnel out. This sensor gives a change in signal whenever you have spin up; in the case of spin down it does not respond, since no electron jumps out. Note that this readout method also directly initializes the qubit in spin down.


 

In conclusion, going through these 5 criteria shows that spin qubits have great potential for quantum computation. The only point where spin qubits fall short is scalability, but let's note that the field of quantum information is still in its infancy and there is still a lot of room for new developments.

 

By Stephan Philips, PhD student at the Vandersypen group at Delft University of Technology, Delft, Holland.


Quantum Computation and Simulation: A Beginner's Guide

Quantum Computation in a nutshell

 

In recent years, the search for topological states of matter has become one of the major subjects of research in the condensed matter physics community. The rapid growth of interest in this field has many reasons. For one, there is an intellectual appeal to it: if the predictions of the theory are true, it will be an example of how powerful and abstract mathematical concepts can be used to describe the behavior of a certain class of physical systems. Secondly, there is a practical interest in the field, since it constitutes the backbone of a potential topological quantum computer. As for myself, some of my colleagues in the Spin-Nano network do fundamental research on topics with potential applications in quantum computation, which is why, in this blog post, I want to give a brief introduction to how a quantum computer works and why it could be useful.

 

Firstly, how does a classical computer work? In a classical computer, information is stored in a binary fashion, and the smallest unit of information is called a binary digit, or bit for short. The state of a bit is either 0 or 1. In a computer, any number or letter is represented by a string of bits. The task of a computer is then to take a string of bits as input, perform some action on it, e.g. adding two strings of bits, and then give back some output, also in the form of bits.

Physically, most bits are represented by an electrical voltage or current pulse. Complex electrical circuits allow for the manipulation of these bits through the implementation of logical gates (AND, OR, NOT, etc.). As the data being manipulated gets bigger and the manipulations more complex, the computation time increases. This problem is tackled by using ever more circuits. In order for the physical size of the computer not to grow to the size of a house or even larger, the circuits are required to get smaller and smaller. Nowadays, on a typical computer chip, about 100 million transistors are packed into 1 mm², and the distance between two transistors is around 10 nanometers. It is apparent that one day a fundamental limit will be reached where transistors cannot get any smaller, and therefore the computational power we can achieve has an upper bound.

 

A quantum computer works fundamentally differently from a classical computer, and based on theoretical considerations it is expected that it will be able to solve certain mathematical problems much more efficiently. Let's first have a look at the basic idea behind a quantum computer. As the name suggests, a quantum computer exploits the peculiar laws of quantum physics. In a quantum computer, the smallest unit of information is stored in a so-called quantum bit, or qubit for short.

 

In the abstract formalism of quantum mechanics, the state of a system is represented by a vector in an n-dimensional vector space. The dimension of this vector space is given by the properties of the system. A popular example is the spin of a free electron, i.e. an electron in empty space. The value of the spin projection on a given axis is quantized to two values, and thus the vector space is two-dimensional. In this vector space there exist two mutually orthogonal (state) vectors associated with the two possible values of the spin projection. The laws of quantum mechanics state that the system doesn't have to be exclusively in one of the two states, but can be in any linear combination of the two. Systems with states in a two-dimensional vector space are called two-level systems, and these take on a central role in quantum computation: a qubit is a quantum mechanical two-level system. Now, in a classical computer, the bit is always in one of two possible states, 0 or 1. As stated above, a qubit can be in a superposition of two distinct states, i.e. 0 and 1 at the same time. In the quantum circuit model, the working principle of a quantum computer is similar to that of a classical computer: take a string of qubits as input, manipulate the qubits by a combination of logical gates (so-called quantum gates), and read out the result in the form of qubits. There are at least two tasks a quantum computer can do more efficiently than a classical computer: integer factorization of large numbers and quantum simulation. In the last part of this post, we will focus on quantum simulation.
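Written out explicitly (in standard textbook notation, not specific to this post), a general qubit state is |ψ⟩ = α|0⟩ + β|1⟩, where α and β are complex numbers with |α|² + |β|² = 1; measuring the qubit yields 0 with probability |α|² and 1 with probability |β|².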

 

Quantum Simulation and engineering of new drugs

 

Whenever physicists make predictions about physical systems, they first develop a model that describes this system to a certain degree of accuracy, i.e. the complexity is reduced and maybe, due to our ignorance, some properties are neglected. Then, in order to know how the model behaves, mathematical equations have to be solved. In the case of quantum mechanics, the underlying equation is the Schrödinger equation:

 

iħ ∂/∂t |ψ⟩ = H |ψ⟩

 

In this formalism, ψ (psi) represents the state of the system and H is called the Hamiltonian, which is the central entity of every quantum mechanical system. Let's assume one is interested in the properties of a molecule consisting of several atoms. The Hamiltonian contains information on the mass and charge of the atoms, and on how the atoms interact with each other. Solving the Schrödinger equation then allows one to predict how, once its initial state is specified, the molecule evolves with time and what energetic states it can be in. The problem is that exact solutions are available only for the simplest problems, such as the hydrogen atom and the quantum harmonic oscillator. For more complex physical systems, physicists have to make approximations and solve the Schrödinger equation numerically with classical computers. Already describing a relatively small system, e.g. one consisting of thirty spin-1/2 particles, requires the manipulation of 2³⁰ × 2³⁰ matrices (2³⁰ is roughly a billion basis states, so such a matrix has on the order of 10¹⁸ entries), which exceeds the capability of current supercomputers. Since the quantum computer itself is a quantum mechanical system, its temporal evolution also follows the laws of quantum mechanics, and it could be used to simulate the dynamics of another quantum system. The reason is that any physically relevant Hamiltonian can be mapped to a Hamiltonian of a number of qubits. The number of qubits is usually larger than the number of physical particles that one intends to simulate. Schematically, the steps for simulating a quantum system are as follows (a toy numerical sketch is given after the list):

 

1.) Represent the Hamiltonian of the system by the Hamiltonian of n qubits; this includes fine-tuning the interactions between the qubits so that they represent the original system faithfully.

2.) Choose an initial state of the system that shall be simulated, translate it to the qubit system and prepare the qubits accordingly.

3.) Let the qubit system evolve in time.

4.) Read out the final state and translate it back to the original system.
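The toy sketch below runs through steps 1 to 4 for a very small example, a three-spin transverse-field Ising chain, by brute force on a classical computer; a real quantum simulator would replace the explicit 2ⁿ × 2ⁿ matrix exponential with the physical time evolution of the qubits. The model, the parameter values and the helper function are mine and purely illustrative.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2)

def on_site(op, site, n):
    """Embed a single-spin operator into the 2**n dimensional space of n spins."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, op if k == site else id2)
    return out

# Step 1: represent the Hamiltonian with n qubits (here a transverse-field Ising chain)
n, J, h = 3, 1.0, 0.5
H = sum(-J * on_site(sz, k, n) @ on_site(sz, k + 1, n) for k in range(n - 1))
H = H + sum(-h * on_site(sx, k, n) for k in range(n))

# Step 2: prepare the initial state, here all three spins in |0>
psi0 = np.zeros(2 ** n, dtype=complex)
psi0[0] = 1.0

# Step 3: let the system evolve in time (classically this needs the full 2**n x 2**n matrix,
# which is exactly what becomes impossible for ~30 spins)
psi_t = expm(-1j * H * 2.0) @ psi0

# Step 4: read out the probabilities of the 2**n basis states
print(np.round(np.abs(psi_t) ** 2, 3))
```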

 

Of course, this is very schematic and a lot of challenges have to be overcome in order to build a universal quantum simulator, but as illustrated, the basic idea is rather simple. As it offers so many new possibilities, the expectations for such a device are high. For example, when designing new drugs, a quantum simulator could potentially allow for faster development of more precise drugs causing fewer side effects. A quantum simulator could simulate how a new drug interacts with the cells of a tumour on a molecular level, eliminating the need to guess a model that describes the working principle.

 

A universal quantum simulator that can simulate large physical systems is not yet within reach, but a lot of effort is being put into its development and we have witnessed several breakthroughs over the last few years.

 

By Yanick Volpez, PhD student at the Condensed Matter and Quantum Computing group at the University of Basel, Basel, Switzerland

Valleytronics in a nutshell

Both classical and quantum computing face significant challenges. On the classical side, silicon field effect transistor technology is reaching the fundamental limits of scaling and there is no replacement technology which has yet demonstrated even comparable performance to the current generation of commercially available silicon CMOS. On the quantum side, scaling the number of entangled superconducting or trapped ion qubits to that required to solve useful problems is an enormous challenge with current device technology. Both fields stand to benefit from transformational devices based on new physical phenomena. Two-dimensional transition metal dichalcogenides (TMDs) possess a number of intriguing electronic, photonic, and excitonic properties.

A lack of inversion symmetry coupled with the presence of time-reversal symmetry endows 2D TMDs with individually addressable valleys in momentum space at the K and K’ points in the first Brillouin zone. This valley addressability opens the possibility of using the momentum state of electrons, holes, or excitons as a completely new paradigm in information processing.

Periodic semiconductor crystal lattices often have degenerate minima in the conduction band at certain points in momentum space. We refer to these minima as valleys, and devices which exploit the fact that carriers are present in one valley versus another are referred to as valleytronic devices. Though degenerate valleys are present in many periodic solids, in most cases it is impossible to address or manipulate carriers in one valley independently from another as the valley state of a carrier is not coupled to any external force we can apply. Thus it is not possible to construct valleytronic devices out of most materials. This is in contrast to spintronics, for example, where the electron spin is readily manipulated by magnetic fields through the electron spin magnetic moment or (less easily) by electric fields through spin-orbit coupling.

In some cases, carrier mass anisotropy along different crystal orientations can result in valley polarization; preferential scattering occurs from one valley into another. This has been shown in diamond, aluminum arsenide, silicon, and bismuth at cryogenic temperatures. However, these materials still lack a strong coupling between the valley index (sometimes called the valley pseudospin) and any external quantity such as an applied field.  It is not clear that there is a way to use mass anisotropy to produce a useful device such as a switch. So we do not consider this class of materials in our discussion of valleytronics.

The recent emergence of 2D materials has provided a more encouraging space in which to explore manipulation and control of the valley index. 2D materials with hexagonal lattices such as graphene or transition metal dichalcogenides (TMDs) can have valleys at the K and K’ points in the Brillouin zone. But to detect or manipulate carriers selectively in one valley, we need some measurable physical quantity which distinguishes the two.

The 2H phases of 2D transition metal dichalcogenides lack inversion symmetry and as a result exhibit contrasting Berry curvatures and orbital magnetic moments between the K and K’ valleys. If the Berry curvature has different values at the K and K’ points, one can expect different electron, hole, or exciton behavior in each valley as a function of an applied electric field. If the orbital magnetic moment has different values at the K and K’ points, one can expect different behavior in each valley as a function of an applied magnetic field. Contrasting values of Berry curvature and orbital magnetic moment at the K and K’ points give rise to optical circular dichroism between the two valleys, which allows selective excitation through photons of right or left helicity. Monolayer 2D transition metal dichalcogenides meet this requirement and are the most promising candidates for valleytronic applications.

 

Sketch denoting the circular optical dichroism of the K and -K valleys in TMDC monolayers

 

By Riccardo Pisoni, PhD student at the Ensslin Nanophysics group at ETH Zurich, Zurich, Switzerland

 

References:

The Valleytronics Materials, Architectures, and Devices Workshop, sponsored by the MIT Lincoln Laboratory Technology Office and co-sponsored by NSF, MIT Samberg Center, August 22-23, 2017.

X. Xu, W. Yao, D. Xiao, T. F. Heinz, “Spin and pseudospins in layered transition metal dichalcogenides”, Nature Physics 10, 343 (2014)

Life of a PhD Student: Conferences

One of the integral parts of being a graduate student is participating in research conferences. Today, I want to change gears from previous blog posts to talk about an important but often undervalued aspect of a researcher's career: attending conferences. I will start by discussing the importance of conferences for young researchers and later share my personal experience with conferences so far. To begin with, let's discuss what a typical physics conference entails. The word conference originates from the Latin conferre, which means 'bring together'; the formal definition of a conference is 'a formal meeting of people with shared interests'. A scientific conference is an event which generally brings together researchers with all levels of expertise and experience, working in a particular research area or group of areas, to present their results and discuss recent advances in the field. This is usually done through a combination of talks and posters by attendees, along with more informal coffee and dinner sessions.

 

So that leads us to ask the following questions: why do scientists have to attend conferences? Isn't the main goal of our job to work in a lab on something novel and unknown? Isn't the conventional way of publishing papers enough to inform the community about the work we are doing? The simplest answer is: not really. There are several benefits of attending conferences which make them an essential part of a scientist's work.

 

First of all, dissemination of research through an oral medium like a talk or a poster is often more effective than simply publishing online or in a scientific journal. You are also able to receive feedback on your data and results at various stages of an experiment, which can help in guiding the project. It also leads you to think about your research from a different point of view in order to present it to a broader audience, and enables you to improve your communication skills.

 

Attending a conference also facilitates learning about cutting-edge research in your own research area. It can help you realize how important your current work is for the community and connect it to big-picture research goals. This often leads to new ideas or techniques whose validity you can assess at the conference itself and later implement in your own lab.

 

The third benefit is networking. Meeting colleagues working in the same field can help in generating collective ideas, shared resources and collaborations for future experiments. The importance of collaboration is obvious in present-day science; the most famous examples are LIGO, which led to the observation of gravitational waves, and the CERN-led observation of the Higgs boson, two of the biggest scientific breakthroughs of the 21st century. Another benefit of networking for young researchers is the chance to check the fit with different PIs and groups for future positions.

 

Last but not least, conferences give a huge opportunity to travel around the world, see new cultures and make new friends. They give you a chance to refresh mentally, and can therefore be considered fun work time.

 

Our Innovative Training Network Spin-NANO considers scientific conferences an important part of training young Early Stage Researchers and plans to organize a conference or meeting every half year. The most recent meeting was organized at TU Delft in June 2017 together with our industry partners. It was a highly engaging two-day conference where ESRs and project partners were able to introduce their work or company, respectively, to the rest of the network. I really enjoyed learning about industrial culture and challenges from our partners, along with the scientific results of my fellow ESRs, through talks and conversations over coffee and lunch.

 

The meeting was followed by a Think Ahead Workshop where we discussed topics like communicating research to the public and presentation skills, with immediate feedback on our own presentations from earlier in the meeting, teaching us how to better disseminate our research. It was also the first time that all ESRs came together; we had wonderful interactions during the meeting and I am sure it will be the start of many wonderful collaborations in the coming years.

 

Another conference that I attended after starting my PhD was Resonator QED, which is organized by the Nanosystems Initiative Munich (NIM) in Munich. It is held every 2 years and is a combination of talks, tutorial talks and poster sessions. It brought together scientists from different fields studying quantum electrodynamics (QED), including solid-state cavity QED, atomic cavity QED, circuit QED, single-photon sources and quantum memories. This made the entire conference really interesting, as the scientific goals of these communities overlap but the systems and technologies used are widely diverse. While it is not in the scope of this blog to summarize the entire conference, I chose to give an outline of two of the many interesting talks that were presented. Hopefully it will demonstrate the exciting work going on in developing quantum technologies using different platforms and also spark your interest to find out more:

 

1) Nanocavity QED: from inverse design to implementation by Prof Vuckovic, Stanford, USA

In her talk, Prof Vuckovic described the work in her group using nanophotonic structures. In one experiment, they showed strong light-matter coupling between a quantum dot (QD) and a photonic-crystal cavity, creating a quasiparticle called a polariton, and used it to observe a dynamic Mollow triplet [1]. In the later part of her talk, she discussed an inverse-design technique and algorithm to obtain more efficient photonic devices. For example, they used their algorithm to design a compact wavelength demultiplexer (see Figure 1) [2]. This is interesting as it illustrates that methods from computer science are helping physicists to advance integrated photonics.

 

Figure 1: Schematic of the wavelength demultiplexer with one input waveguide, two output waveguides and the design region (from ref. [2])
Figure 2: Optical micrograph of the sample showing the superconducting structure and the DQD (from ref. [3])

2) Strong coupling of a superconducting resonator to a charge qubit by Prof Ensslin, ETH Zurich, Switzerland

Prof Ensslin, whose group is in fact a member of the Spin-NANO network, talked about an experiment that combined a solid-state qubit with a superconducting quantum interference device (SQUID) array resonator (see Figure 2). The SQUID resonator operates in the microwave regime and is frequency-tunable using a magnetic field; it was coupled to a GaAs double quantum dot (DQD). They were able to demonstrate the strong-coupling limit by showing vacuum Rabi mode splitting [3]. This will enable future experiments in quantum information processing using this platform, also known as semiconductor circuit QED.

 

I will conclude by saying that conferences are going to be an important part of an academic career, and one should utilize them fully to one's benefit.

 

[1] K. Fischer et al, Nature Photonics 10, 163 (2016) [Web Link]

[2] A. Piggott et al, Nature Photonics 9, 374 (2015) [Web Link]

[3] A. Stockklauser et al, Phys. Rev. X 7, 011030 (2017) [Web Link]

 

By Samarth Vadia, PhD student at attocube and Nanophotonics Group of LMU Munich, Munich, Germany

From the lab into a computer

When I started my PhD in the lovely city of Toulouse I already knew that an increasingly large portion of the solid-state physics community was focusing on the study of 2D materials. Graphene was already well known even outside the scientific community and everyone was describing it as the miraculous material of the future. That was one of the reasons why I started looking for a position in this field.

 

What I'm actually working on is different 2D crystals which, contrary to graphene, act like semiconductors. This family is called transition metal dichalcogenides (TMDs), and in this Spin-NANO blog you will find a lot of details about them: the description of their properties, the possibilities they open up in the creation of new devices or in improving existing ones, as well as the challenges along the way. That's why today I'm not focusing on all of this; instead, I'd prefer to do something different.

 

I was just checking my newsfeed on Facebook when I saw a post from one of my former colleagues, who had just got a permanent position at École Polytechnique in Paris. It was a Nature Communications paper whose title immediately caught my attention: A microprocessor based on a two-dimensional semiconductor. I already knew that in 2011 a MoS2 monolayer-based field effect transistor (FET) was built and proven to be operational. However, this paper appeared to describe a more complex and integrated device, a full microprocessor. I read it as soon as I could.

 

Unfortunately, because of my limited knowledge of microprocessor logic and operation, I didn't grasp every detail about the device, but I was still fascinated by the overall message. The team at the Institute of Photonics at Vienna University of Technology replaced the silicon in the FET channel with a MoS2 bilayer, achieving a microprocessor made of 115 transistors which is able to execute user-defined programs, perform operations and communicate the results to external devices.

 

Microscope image of the TMD transistor microprocessor.

 

One intriguing property of this device is the fact that the substrate can be bendable, opening up the possibility of flexible electronic devices. However, the most obvious advantage of replacing silicon in transistors with 2D crystals comes from the better geometric scaling and lower power consumption that these materials could provide, which ultimately results in smaller devices with longer-lasting batteries!

 

Obviously, this prototype microprocessor is still far from the performance of its commercially available counterparts, but this does not diminish my interest in the result, as it represents a proof of concept. These devices are doable, and making them even more efficient than the ones currently available is just a technological challenge and thus just a question of time.

 

I'm talking about this topic because I work on the other side of the research process: fundamental research. My aim is principally to understand the intrinsic properties of these crystals, and this kind of study is mainly driven by curiosity rather than by an application goal. It is thus easy to lose the connection between what I'm actually studying and why so many people around the world are working on these materials. It's true that in the introduction of every paper on TMDs you can read how many applications will become possible once our control over them is good enough, but these often appear distant. Reading that paper instead reminded me how close we are to turning those words into reality.

 

By Marco Manca, PhD student at LPCNO laboratory in Toulouse, France

 

Wachter, S. et al. A microprocessor based on a two-dimensional semiconductor. Nat. Commun. 8, 14948 doi: 10.1038/ncomms14948 (2017)

 

Optical innovations: high reflectance mirrors for the James Webb space telescope

When Galileo pointed his telescope at the sky over 400 years ago, revolutionary scientific discoveries were made and our view of the natural world was forever changed. Telescopes have since replaced the naked eye for observing and discovering the universe. In the following centuries, ever more powerful and complex telescopes were introduced. In the early 1920s, astronomer Edwin Hubble used the largest telescope of his day, at the Mt. Wilson Observatory near Pasadena in California, to observe galaxies beyond our own. In April 1990, a telescope bearing his name was launched from Kennedy Space Center in Florida. The Hubble telescope was the first major optical telescope to be launched into space and has, since entering service in 1990, performed 1.3 million observations and provided valuable data for more than 15,000 scientific papers.

 

Figure 1: Photograph of the Hubble Space Telescope taken during the fourth servicing mission to the observatory in 2009. Credit: NASA

 

The Hubble telescope will soon (Spring 2019) be succeeded by the James Webb Space Telescope (JWST). NASA's Goddard Space Flight Center, the headquarters of this telescope, has built up collaborations along the way with the European and Canadian space agencies and five aerospace companies, and uses test facilities at several NASA centers in order to bring this telescope to the world.

 

Originally known as the Next Generation Space Telescope, the biggest asset of the James Webb over the Hubble is its ability to operate exclusively in the infrared range, allowing it to catch photons from the earliest days of the universe (from 13.5 billion years ago). Figure 2 shows the design and structure of the James Webb telescope: it is a three-mirror anastigmatic telescope. Much like the Hubble, a Cassegrain telescope, the James Webb telescope has a primary mirror with an opening in its middle which gathers the light and bounces it off a secondary mirror in front of it. The secondary mirror focuses the light into the aft optics subsystem, which contains the tertiary mirror and a fine-steering mirror that helps to stabilize the image.

 

Figure 2: JWST's subsystems. Credit: STScI (NASA Goddard Space Flight Center)

 

The honeycomb primary mirror has a diameter of 6.5 m (roughly the width of a tennis court), giving a light-collecting area of 25 m², about 6 times greater than Hubble's. The primary mirror is composed of 18 hexagonal segments, each around 1.3 m wide. The mirror substrate is made of beryllium instead of glass in order to reduce the weight of the mirror. Beryllium (Be) is a lightweight material with a low thermal expansion coefficient, allowing the mirror to hold its shape at cryogenic temperatures. The beryllium is mined in Utah and purified by Brush Wellman in Ohio. It starts as a fine powder pressed into a flat shape. The block is then cut into blanks, which are put together to form a segment and sent to Axsys Technologies. This company gives the beryllium substrate its final shape by cutting away most of the back side, leaving a thin rib structure (Figure 3b). Each rib is 1 mm thick. This process reduces the substrate weight down to about 20 kg.

 

After the mirror has been shaped, the front surface is polished and smoothed out. Cryogenic testing is performed by Ball Aerospace and NASA: the substrates are cooled down to 30 K in order to ensure that the material will hold its shape in space, and corrections are made to the mirror's shape. The mirrors are then sent for gold coating, which is performed by Quantum Coatings Inc. in Moorestown. A 100 nm thick gold coating is evaporated onto the surface of the substrates with 10 nm uniformity over the 1.5 m wide substrate. The gold reflectance is estimated at 99% over the range from 0.8 to 26 μm. The gold coating is very pure and soft, so in order to protect the surface from scratches and contaminants, a SiO2 coating is applied on top. Several tests are performed to check the stress, reflectance and roughness of the gold coatings. The mirrors then undergo another series of cryogenic tests at NASA to verify that they hold their shape.

 

Figure 3: a (top photograph): a segment of the primary mirror being inspected by NASA engineers; b (bottom photograph): back of a beryllium substrate.

 

This article gives an overview of the process steps, challenges and qualifications for large-area mirrors destined for space applications. The James Webb telescope, however, involves many more technical challenges related to its numerous components, such as NIRSpec, the near-infrared spectrograph, and the Mid-Infrared Instrument (MIRI).

The James Webb telescope will be launched in Spring 2019 on an Ariane 5 vehicle from French Guiana.

 

By Najwa Sidqi, early stage researcher at Helia Photonics, Edinburgh.

 

References:

[1] James Webb Space Telescope, Goddard Space Flight Center:

https://jwst.nasa.gov/whatsNext.html

 

[2] NASA: Hubble Space telescope

https://www.nasa.gov/mission_pages/hubble/story/index.html

 

[3] Gold Mirror Coatings for James Webb Space Telescope (JWST)

http://www.quantumcoating.com/projects/

 

[4] James Webb Space Telescope, Successor to Hubble

http://www.ball.com/aerospace/programs/jwst

 

2D Heterostructures

One of the most urgent problems of the current electronics industry is related to the size of the elements used in building the various devices. All our computers, phones, etc. are fabricated using complex, engineered structures generally composed of several layers of materials with different properties.

Society demands that these structures meet, among others, two critical criteria:

– On the one hand, they must be small. Laptops, smart phones, etc are the result of the search for more and more compact devices.

– At the same time, they must become more and more powerful, i.e. perform more operations, and do so faster. That is, we want smaller structures at no expense to the power of the device.

Gordon E. Moore, one of the co-founders of Intel, empirically stated in 1965 that the number of transistors (the basic electronic element these devices are based on) per unit area would double every year. It is thanks to the efforts of the scientific community and industry that this has been possible, thus achieving our original goal: the miniaturization of devices at almost no expense. Currently, the largest number of transistors in a commercially available single-chip processor is 7.2 billion in 456 mm², with transistor sizes in the range of tens of nanometres (1 nm = 1 millionth of a mm), which is basically just some tens of atoms.

However, it is possible for things to get too small. Transistors, like all other electronic devices, are based on the principle that the motion of electrons can be controlled by tuning energy levels inside the materials involved. When things get very small, however, we enter the realm of quantum mechanics. This is where the phenomenon of tunnelling can occur: electrons passing through a material that should be forbidden to them for lack of sufficient energy.

As a comparison, imagine the following. In classical physics, if you have a metallic ball and throw it against a wall, it will usually bounce back. However, if you throw it with sufficient energy, it might break through the wall. In quantum physics, though, throwing the ball at low energy still leaves it with a non-zero chance of traversing the wall, even though this is classically forbidden.
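To put the analogy into a formula (a standard textbook estimate, not something from the original post): for a rectangular barrier of height V, an electron of mass m and energy E < V gets through with a probability of roughly T ≈ exp(−2d·√(2m(V−E))/ħ), where d is the barrier thickness. The probability decays exponentially with d, so it is negligible for thick layers, but for barriers only a few atoms thick it can no longer be ignored, and it never drops to zero.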

You can therefore now see the problem: if things get too small we cannot control the motion of electrons anymore by using the energetics of the materials, because these electrons will be able to go through them even in those cases where we don’t want them to.

This is where 2D materials come in. To build a proper semiconductor structure, at least the following elements are needed: a metal, a semiconductor, and an insulator. As it happens, the realm of 2D materials is capable of providing all three: graphene acting as a metal, transition metal dichalcogenides (TMDs) acting as semiconductors, and hexagonal boron nitride (hBN) acting as an insulator.

 

Representation of different types of 2D materials and how they can be stacked one on top of the other. Nature 499, 419 (2013)

 

These materials have the advantage of being only 1-3 atoms thick, thus reducing the current size limit of electronic elements by a factor of 10.

The combination of these materials requires, however, proper engineering to build so-called 2D heterostructures. You can imagine this as placing Lego bricks one on top of the other to achieve complex structures such as quantum light-emitting diodes [1] (LEDs which emit single photons; see the post by Luca Sortino to learn more about this).

Therefore, the advent of 2D heterostructures has opened the door to much new and exciting physics.

 

[1] Nat. Commun. 7, 12978 (2016)

 

By Alejandro Rodriguez, PhD Student of the Quantum Information and Nanoscale Metrology group of Prof. Atature at the University of Cambridge, UK.