Van der Waals Heterostructures: To Infinity and Beyond

Two weeks ago we witnessed an event that will probably make the history books: the successful test of the Falcon Heavy, which put SpaceX on the front pages of pretty much every magazine even remotely interested in space technology.

To sum up, the test was an astonishing success and had a huge impact on social media, thanks in part to the eccentric personality of Elon Musk, the founder of SpaceX. The very first test of a new type of rocket is extremely risky (there is a real chance that everything blows up on the launch pad), so no one offered to fly a satellite on this run. Instead of simulating the load with concrete weights in the payload capsule, Musk decided to send his own cherry-red Tesla Roadster into space, playing David Bowie's "Life on Mars?" and with the first SpaceX spacesuit sitting in the driver's seat as an astronaut. The images are astonishing and look photo-edited, but apparently this is one of those times when reality overcomes imagination. From a practical point of view, this is the most powerful rocket currently in operation, it is remarkably cheap (around $90 million per launch), and it has opened the door to a new space race.


a) The Tesla Roadster with the Earth in the background; b) Falcon Heavy on the launch pad during the final tests; c) the Falcon Heavy first stage, powered by 27 Merlin engines (image credit: SpaceX).


Riding the wave of this excitement, I think it is interesting to focus our attention on how research on two-dimensional materials and van der Waals heterostructures could also be useful for space exploration technologies. Two of the most pressing problems we have to solve to gain easier and broader access to our space neighbourhood are the efficiency of rockets at lifting heavy payloads off the Earth's surface, and energy harvesting once in space. The first is under heavy development (invigorated by the entry of private companies such as SpaceX, Blue Origin and Orbital ATK); the second is currently dominated by multi-junction inorganic solar cells, which convert sunlight into electricity.

These cells hold the efficiency record at over 45%, but such high efficiency comes at the cost of increased complexity and manufacturing price: each of these elements is composed of several different solar cells, each absorbing in a specific range of wavelengths. Moreover, they need to be carefully engineered to fold up during launch and to fully extend and orient themselves once they reach their final destination. Last but not least, there is a growing demand for energy to sustain larger and more complex satellites, or even human missions.


On the left, schematic structure of a multi-junction solar cell; on the right, the solar panels on the International Space Station (ISS) (image credit: NASA).


This is the scenario in which two-dimensional materials, especially transition metal dichalcogenides (TMDs), could make a contribution. They have several desirable properties that make them excellent candidates for solar energy harvesting.

Light absorption: a single-layer TMD flake can absorb an extremely large fraction of incident photons across a significant part of the visible spectrum [1]. This is a key property for a material to be employed in light-harvesting devices. Considering that these materials have sub-nanometre thickness, their absorption-to-weight ratio is really promising for photovoltaic applications, especially in the space sector, where weight is a critical parameter.
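To get a feel for why this ratio matters, here is a rough back-of-the-envelope comparison between monolayer MoS₂ and a conventional silicon absorber. All material numbers below are illustrative order-of-magnitude literature values, not data from the references cited here.

```python
import math

# Back-of-the-envelope estimate; all material numbers are rough literature values.

def areal_mass(thickness_m, density_kg_m3):
    """Mass per unit area (kg/m^2) of a uniform film."""
    return thickness_m * density_kg_m3

# Monolayer MoS2: ~0.65 nm thick, absorbs roughly 5-10% of visible light (take 7%).
mos2_mass = areal_mass(0.65e-9, 5060)         # density ~5.06 g/cm^3
target_absorption = 0.07

# Silicon film absorbing the same 7%: with an absorption coefficient
# alpha ~ 1e5 m^-1 in the visible, 1 - exp(-alpha * d) = 0.07 gives thickness d.
alpha_si = 1e5                                 # m^-1, order of magnitude
d_si = -math.log(1 - target_absorption) / alpha_si
si_mass = areal_mass(d_si, 2330)               # density ~2.33 g/cm^3

ratio = si_mass / mos2_mass                    # mass advantage at equal absorption
print(f"equivalent Si thickness: {d_si * 1e6:.2f} um")
print(f"MoS2 mass advantage: ~{ratio:.0f}x lighter for the same absorption")
```

With these illustrative numbers the monolayer comes out hundreds of times lighter for the same absorbed fraction, which is exactly the point the absorption-to-weight argument is making.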

Efficient carrier separation: bringing two different TMDs into contact creates a heterojunction with a type II band alignment. This means that the electrons are confined in one material, while the holes (or electron vacancies) are confined in the other. In a heterojunction, recombination (which is exactly the opposite of the photovoltaic effect) is a serious problem to overcome in converting the absorbed light into electrical current.


Schematic sketch of photovoltaic effect and radiative recombination process in a heterojunction.


TMD heterobilayers show extremely efficient carrier separation: when a photon is absorbed, the two carriers (electron and hole) are split into the two different layers in less than 50 femtoseconds [2]. This property could be the key to massively increasing device efficiency.

Flexibility: since they are so thin and flexible, they can be integrated into many more elements of a spacecraft. Instead of having big ships with huge solar panels, solar cells could be built into the surfaces of the spacecraft modules themselves, optimizing light collection by exploiting the entire useful surface. Their flexibility also enables new solutions for compressing the solar panels during launch.

Engineering materials and devices: adopting the same strategy seen in multi-junction solar cells, many isolated heterobilayers can be stacked together to form a similar structure, in which each pair can be tailored at will to absorb most efficiently in a specific spectral region. This maximizes the energy collected and therefore the power gained; moreover, the overall thickness of the device remains negligible thanks to the two-dimensional nature of these materials.

To conclude, we are at the beginning of a new era of space exploration that can offer unprecedented benefits in countless fields, from global markets to fundamental research, from resource access to aerospace engineering. In this space race, two-dimensional materials can express their full potential and lead to major breakthroughs that will bring us a step closer to the stars.


Per aspera ad astra


By Alessandro Catanzaro, PhD student in the LDSD group at the University of Sheffield, Sheffield, United Kingdom.


  1. Bernardi M, Palummo M, Grossman JC. Nano Lett. 2013;13(8):3664-3670. doi:10.1021/nl401544y.
  2. Yu Y, Hu S, Su L, et al. Nano Lett. 2014. doi:10.1021/nl5038177.

Acknowledging and dealing with mental health issues as a PhD

The exotic idiosyncrasies of topological insulators are fascinating and would no doubt prove fertile ground for this blog post. Nevertheless, I want to write about something more important and urgent for our generation of young scientists: the mental health issues that many PhD students struggle with but seldom mention. Graduate school is an exciting time, as we young scientists work on the cutting edge of science, testing new and exciting ideas. In striving for excellence, however, many of us struggle to balance our academic and personal lives, as I have since I embarked on my academic career.


Constant high stress levels can severely affect our work, our physical and emotional health, and our social life. There is a tacit understanding that a PhD is difficult, but, like a boastful sleep-deprived marine or a medical intern on their 30th hour in the emergency ward, we also face an unhealthy culture that discourages many of us from speaking openly about our "weaknesses". As a PhD student, job anxiety and high stress levels are seen as part of being an adult with a responsible job. We do not want the colleagues with whom we are racing for publications to find out that we may not be doing so well personally, so we meet them with confident, bright faces, hiding the ever-looming fears of uncertain futures and failure inside.


In the few studies conducted at US and Belgian universities (see the reference links at the end), up to 50% of graduate students across different academic fields were reported to be facing some form of mental health problem. Most of them reported one or more of the following:

  • low self-esteem
  • constantly feeling unhappy
  • being depressed
  • losing sleep because of anxiety
  • loss of appetite and concentration
  • feeling like they have poor control over their job and its progress
  • not being able to overcome difficulties
  • failing to enjoy day-to-day activities
  • low career optimism and feeling hopeless
  • feeling scared of making decisions
  • feeling low in energy levels
  • failing at maintaining a life-work balance


A large fraction admitted to having thought about suicide. This puts many PhD students at risk of developing severe psychiatric problems. However, we tend to casually accept feeling depressed and constantly dealing with high levels of stress as part of "doing a PhD".


Figures taken from the Berkeley report []
Although self-reported data can be unreliable, particularly with respect to recalling activities over long time spans, it is still shocking to see two bimodal peaks at 2.5 and 5.7 hours of sleep on average, and many students indicating a high frequency of depressive symptoms.


I think a huge change can be made simply by acknowledging that these are mental health issues that are not negligible and can be treated. We really need to change the way we look down on colleagues who admit to having a harder time than us; we tend to smirk at them, thinking they are not as smart.


We need to realize that there can be many more complicated factors at play. For example:

  • having to support a family and larger financial pressures
  • starting a PhD in a foreign country/group can be very intimidating. Other than adapting to a new culture, it may also involve relearning social roles and their hierarchies. For example, the role of a teacher/professor may be perceived very differently in different cultures.
  • female researchers may find themselves dealing with an implicit culture of misogyny, in which they not only feel greater pressure to prove themselves but also have considerably less room to make mistakes. Consequently, they may end up with depression, low self-esteem and impostor syndrome.
  • lacking a social support network or friends
  • balancing research, paying bills and a personal life etc


We can perhaps learn from Martin Seligman's theory of "learned helplessness", in which animals subjected to uncontrollable stress or punishment eventually gave up on even trying to solve their problems and showed much worse cognitive and social skills down the line. We often feel helpless too when we cannot control the results of our experiments, or when we fail at the multiple tasks we have to manage simultaneously as PhD students. Professors (or PIs) may be entirely unaware of the turmoil their students are going through, and most of us are afraid to take our PIs into confidence on these issues. However, many universities now offer psychological counselling and help. We should realize that our mere understanding can play a crucial role in helping someone deal with these problems. So, kindly support your PhD friends who are struggling and encourage them to seek external support, as they themselves may feel helpless or trapped in their own predicament.







By Aroosa Ijaz, PhD student at the Ensslin Nanophysics group at ETH Zurich, Zurich, Switzerland.

Making quantum computers with spin qubits

In the previous blog post, Yannick explained what a quantum computer is and why it is a useful tool. In this blog article we will stay on the same theme and talk about what is needed to make a quantum computer. What your quantum computer will look like depends entirely on the chosen qubit implementation. Currently, a few qubit implementations look quite promising; the most prominent examples are superconducting qubits, ion traps and spin qubits. In this article, we will focus on the latter, since that's the one I'm working on. All the platforms mentioned above fulfil the so-called DiVincenzo criteria. These criteria, defined in 2000 by David DiVincenzo, need to be fulfilled by any physical implementation of a quantum computer:


  1. A scalable physical system with well characterized qubits.
  2. The ability to initialize the states of the qubits to a simple fiducial state, such as |000⟩.
  3. Long relevant coherence times, much longer than the gate operation time.
  4. A “universal” set of quantum gates.
  5. A qubit-specific measurement capability.


In this article we will go through all these criteria and show why spin qubits fulfill these criteria, but before doing that, let’s first introduce spin qubits.


Spin qubits are qubits where the information is stored in the spin angular momentum of an electron. The spin of a single electron can be either in the spin-down (low energy) or in the spin-up (high energy) state. By analogy with a classical bit, spin down is the analogue of a zero and spin up of a one.



One of the first steps in a spin qubit experiment is to obtain a single electron that is isolated from its environment. The isolation is needed to make sure that the electron does not undergo unwanted interactions that would affect its quantum state in an uncontrollable manner.

Isolated single electrons are obtained by shaping a two-dimensional electron gas (2DEG). When stacking different materials on top of each other, one can, under certain conditions, obtain a 2DEG: a plane in which electrons are free to move wherever they want. One could compare the 2DEG to a very thin metal layer.


Once we have a plane of electrons, the trick is to push away or attract electrons in this plane using gate electrodes placed on top of the 2DEG. This allows you to create a row of isolated electrons, each of which forms one spin qubit on the chip.


Now that we have covered the basic concepts of spin qubits, we can return to the discussion of why spin qubits form a good platform for quantum computation. In the following I will go through the DiVincenzo criteria one by one, and show why and how spin qubits satisfy them.


A scalable physical system with well characterized qubits


This criterion consists of two parts: a scalable and a well characterized system. The characterized part means that you know how to simulate the qubit (i.e. you know its Hamiltonian). This knowledge is needed to design the required gate operations. In the case of spin qubits, the system is well described by the Fermi-Hubbard model.

The second part of this criterion is scalability, meaning that you can extend your qubit control to millions of qubits.

A first thing to look at could be the size of one qubit. For a spin qubit this corresponds to an area of roughly 70 nm × 70 nm, which means one could easily put billions of qubits on a chip of 1 cm by 1 cm.
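As a quick sanity check of that claim (a naive packing estimate that ignores all wiring, spacing and control overhead):

```python
# Naive packing estimate: how many 70 nm x 70 nm qubits fit on a 1 cm x 1 cm chip?
chip_side  = 1e-2    # 1 cm in metres
qubit_side = 70e-9   # 70 nm in metres
n_qubits = (chip_side / qubit_side) ** 2
print(f"{n_qubits:.1e}")   # on the order of 10^10, i.e. tens of billions
```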

However, this is not the big challenge. Problems arise when you start thinking about the control of the qubits. In the lab we observe that each qubit has its own personality: different gate voltages for every qubit, different amounts of power required to operate each qubit, different energy splittings between spin up and spin down, and so on. The reason every qubit has its own character is that we work with imperfect materials and an imperfect fabrication process. This is one of the big reasons we are collaborating with industrial chip manufacturers (e.g. Intel). Aside from the fabrication issues, there are also very practical problems, for instance how to connect all the control electronics to the qubits. For example, to have full control of a 5-qubit system, you need 40+ connections, 14 of which are high-bandwidth connections. This means that very fast signals need to be sent through these lines to the qubits, so they require dedicated (and bulky) coaxial cables. The picture below shows what a chip carrier for a 5-qubit chip looks like and where the coaxial cables need to be connected. Note also that such a chip needs to operate at extremely low temperatures, which means you have a limited thermal budget (in the μW range). A typical operating point for spin qubits is 10 mK, close to absolute zero.

So are spin qubits scalable? The answer is: we don't know. Until now, people have only focused on small systems of a few qubits. The same is true for superconducting qubits and ion traps.



The ability to initialize the states of the qubits to a simple fiducial state, such as |000⟩




When working with a quantum computer, one can imagine that, before running any algorithm, the user should be able to start from a well-known quantum state. Typically, the lowest energy state is taken (e.g. 3 qubits in 0, |000⟩). For spin qubits one can achieve this using the Elzerman method. The idea is to load one spin-down electron from the reservoir into the quantum dot. The reservoir is an area next to the qubits filled with a lot of electrons. Note that when you put a lot of electrons together, there is no longer an energy difference between spin up and spin down (the electrons interact strongly with each other).

Now the idea is to align the energy level of the spin qubit (yes, you can control those!) with the reservoir in such a way that only an electron with spin down can tunnel in.



Long relevant coherence times, much longer than the gate operation time



In any quantum system there is a characteristic time that tells you how long a quantum state is maintained if left alone. This time is called the coherence time. It is important that the coherence time is much longer than a typical gate time (the time needed to, e.g., change a qubit from 0 to 1 or vice versa). When this is not the case, all the quantum information is lost before the algorithm is done. The loss of quantum information is mainly due to the interaction of the qubits with their environment.

The coherence time of spin qubits is mainly limited by the nuclear spins in silicon. When considering a spin qubit, one can imagine the electron as a charge cloud sitting at the Si/SiGe interface. This charge cloud overlaps with the nuclei of the silicon atoms, which can also possess a spin. When one of the nuclear spins flips (even at low temperatures there is enough energy for this to happen at random), the energy of the spin qubit changes slightly. This causes random energy fluctuations that affect the coherence time of the qubit. In practice, we try to minimize this effect by using isotopically purified silicon (²⁸Si has no nuclear spin).

The coherence time of spin qubits can be quite long (e.g. 100 μs with ²⁸Si), while a single quantum operation takes about 100 ns or less. This means there is a factor of 1000 between the gate time and the coherence time.
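In other words, the number of coherent operations you can hope to perform is set by the ratio of these two timescales:

```python
# Figure of merit: number of gate operations that fit inside the coherence time.
coherence_time = 100e-6   # ~100 us for a spin qubit in isotopically purified 28Si
gate_time      = 100e-9   # ~100 ns per single-qubit operation
n_ops = round(coherence_time / gate_time)
print(n_ops)              # -> 1000
```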
A “universal” set of quantum gates



A quantum algorithm typically consists of a set of operations, let's call them U1, U2, … . These operations represent changes to the state of the qubits. One can think of them as the quantum mechanical equivalent of the classical AND, NAND, OR, … gates. In the quantum mechanical case there are two kinds of gates: single-qubit gates and two-qubit gates.

Single-qubit gates, as the name suggests, are used to manipulate a single qubit. For spin qubits, this is done using microwave photons whose energy corresponds to the energy difference between the spin-up and spin-down states. When both energies are exactly matched, one can controllably change the spin from up to down, or to any combination in between.
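A resonant microwave pulse of duration t rotates the spin by an angle proportional to t. As a minimal numerical sketch (using the standard Bloch-sphere rotation matrix, not any lab-specific pulse shape):

```python
import numpy as np

def rx(theta):
    """Rotation of the spin by angle theta about the x-axis of the Bloch sphere."""
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X

down = np.array([1, 0], dtype=complex)       # spin down, the |0> state

up_state = rx(np.pi) @ down                  # a "pi pulse" flips the spin completely
half     = rx(np.pi / 2) @ down              # a "pi/2 pulse" makes an equal superposition

print(np.round(np.abs(up_state) ** 2, 3))    # -> [0. 1.]
print(np.round(np.abs(half) ** 2, 3))        # -> [0.5 0.5]
```

The pulse duration in the lab plays the role of the angle theta here: leaving the drive on twice as long rotates the spin twice as far.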

Two-qubit gates are gates that entangle two qubits, meaning that both qubits gain a kind of awareness of each other. In practice, making two-qubit gates is quite straightforward: it can be done by pulsing the barrier gate that sits between the two qubits. One can visualize this interaction as pushing the electrons closer together so that they interact with each other. One of the topics I'm actively working on is making these kinds of gates very reliable. At the moment they are often plagued by a lot of noise, which in practice means the gate operation is no longer reliable. To solve this, we are exploring the different regimes in which you can make a two-qubit gate; the interaction between the qubits is, for example, highly dependent on the energy difference between the two qubits you want to entangle.

A gate set is universal when you can perform single-qubit gates and two-qubit gates on all the qubits. As just described, this is the case for spin qubits.

A qubit-specific measurement capability

When a quantum algorithm is done, you want to know the outcome. To know the outcome, you need to measure the state of the system (i.e. all, or a part, of the qubits used in the experiment). When performing a quantum measurement, the wavefunction of the qubit collapses. The wavefunction of one qubit could look like:


|ψ⟩ = √0.1 |0⟩ + √0.9 |1⟩


This function describes the probabilities of finding the electron in 0 or 1: the probability of each outcome is the square of its amplitude, so in this case there is a 10% chance of measuring 0 and a 90% chance of measuring 1. The collapse of the wavefunction means that, after a measurement, the state becomes 1 or 0 depending on which outcome you measured. A quantum algorithm typically starts by making a full superposition. With many qubits, this means you are in all the possible states at the same time (e.g. for 2 qubits, |ψ⟩ = (|00⟩ + |01⟩ + |10⟩ + |11⟩)/2). Then some computation is done, which should yield one result. The power of quantum computation is that you can evaluate all the possibilities at the same time; the hard part is describing your problem in such a way that you have a high likelihood of measuring the right outcome and a low probability of measuring the bad ones.
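The amplitude-squared rule is easy to check numerically; here is a small sketch simulating repeated measurements of the state above:

```python
import numpy as np

# sqrt(0.1)|0> + sqrt(0.9)|1>: measurement probabilities are the squared amplitudes.
state = np.array([np.sqrt(0.1), np.sqrt(0.9)])
probs = np.abs(state) ** 2                    # -> [0.1, 0.9]

# Each measurement collapses the state to 0 or 1 with those probabilities.
rng = np.random.default_rng(seed=1)
outcomes = rng.choice([0, 1], size=10_000, p=probs)
print(probs, outcomes.mean())                 # fraction of 1s approaches 0.9
```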

Measurements on spin qubits can be done using the Elzerman method, which is quite similar to the initialization. While manipulating your qubits there is a big barrier between the reservoir and the spin qubit (the single electron). An electron in spin up is able to tunnel out into the reservoir; when this happens, for a short while there is no net charge in the place where the qubit was. After this, a spin-down electron tunnels back in, as we saw before. The presence or absence of charge is a property we can measure: a charge detector placed close to the spot where the electron tunnels out gives a change in signal whenever you have a spin up. In the case of spin down, the sensor does not respond, since no electron jumps out. Note that this readout method also directly initializes the qubit in spin down.



In conclusion, going through these 5 criteria shows that spin qubits have great potential for quantum computation. The only point where spin qubits may fall short is scalability, but let's note that the field of quantum information is still in its infancy and there is a lot of room for new developments.


By Stephan Philips, PhD student at the Vandersypen group at Delft University of Technology, Delft, the Netherlands.

Quantum Computation and Simulation: A Beginner's Guide

Quantum Computation in a nutshell


In recent years, the search for topological states of matter has become one of the major subjects of research in the condensed matter physics community. The rapid growth of interest in this field has many reasons. For one, there is an intellectual appeal to it: if the predictions of the theory are true, it will be an example of how powerful, abstract mathematical concepts can describe the behaviour of a certain class of physical systems. Secondly, there is a practical interest in the field, since it constitutes the backbone of a potential topological quantum computer. Some of my colleagues in the Spin-Nano network do fundamental research on topics with potential applications in quantum computation, which is why, in this blog post, I want to give a brief introduction to how a quantum computer works and why it could be useful.


Firstly, how does a classical computer work? In a classical computer, information is stored in a binary fashion, and the smallest unit of information is called a binary digit, or bit for short. The state of a bit is either 0 or 1. In a computer, any number or letter is represented by a string of bits. The task of a computer is then to take a string of bits as input, perform some action on it (e.g. adding two strings of bits), and give back some output, also in the form of bits.

Physically, most bits are represented by an electrical voltage or current pulse. Complex electrical circuits allow these bits to be manipulated through the implementation of logic gates (AND, OR, NOT, etc.). As the data being manipulated get bigger and the manipulations more complex, the computation time increases. This problem is tackled by using ever more circuits. For the physical size of the computer not to reach the size of a house or even larger, the circuits are required to get smaller and smaller. Nowadays, on a typical computer chip, about 100 million transistors are packed into 1 mm², and the distance between two transistors is around 10 nanometres. It is apparent that one day a fundamental limit will be reached where transistors cannot get any smaller, and therefore the computational power we can achieve has an upper bound.


A quantum computer works fundamentally differently from a classical computer, and based on theoretical considerations it is expected to be able to solve certain mathematical problems much more efficiently. Let's first have a look at the basic idea behind a quantum computer. As the name suggests, a quantum computer exploits the peculiar laws of quantum physics. In a quantum computer, the smallest unit of information is stored in a so-called quantum bit, or qubit for short.


In the abstract formalism of quantum mechanics, the state of a system is represented by a vector in an n-dimensional vector space. The dimension of this vector space is given by the properties of the system. A popular example is the spin of a free electron, i.e. an electron in empty space. The value of the spin projection on a given axis is quantized to two values, and thus the vector space is two-dimensional. In this vector space there exist two mutually orthogonal (state) vectors associated with the two possible values of the spin projection. The laws of quantum mechanics state that the system doesn't have to be exclusively in one of the two states, but can be in any linear combination of the two. Systems with states in a two-dimensional vector space are called two-level systems, and these take on a central role in quantum computation: a qubit is a quantum mechanical two-level system. Now, in a classical computer, the bit is always in one of two possible states, 0 or 1. As stated above, a qubit can be in a superposition of two distinct states, i.e. 0 and 1 at the same time. In the quantum circuit model, the working principle of a quantum computer is similar to that of a classical computer: take a string of qubits as input, manipulate the qubits through a combination of logic gates (so-called quantum gates), and read out the result in the form of qubits. There are at least two tasks a quantum computer can do more efficiently than a classical computer: integer factorization of large numbers and quantum simulation. In the last part of this post, we will focus on quantum simulation.


Quantum Simulation and engineering of new drugs


Whenever physicists make predictions about physical systems, they first develop a model that describes the system to a certain degree of accuracy, i.e. the complexity is reduced and, perhaps due to our ignorance, some properties are neglected. Then, in order to know how the model behaves, mathematical equations have to be solved. In the case of quantum mechanics, the underlying equation is the Schrödinger equation:

iħ ∂ψ/∂t = Hψ

In this formalism, ψ (psi) represents the state of the system and H is called the Hamiltonian, the central entity of every quantum mechanical system. Let's assume one is interested in the properties of a molecule consisting of several atoms. The Hamiltonian contains information on the masses and charges of the atoms, and on how the atoms interact with each other. Solving the Schrödinger equation then allows one to predict, once the initial state is specified, how the molecule evolves with time and what energetic states it can be in. The problem is that exact solutions are available only for the simplest problems, such as the hydrogen atom and the quantum harmonic oscillator. For more complex physical systems, physicists have to make approximations and solve the Schrödinger equation numerically on classical computers. Already describing a relatively small system, e.g. one consisting of thirty spin-1/2 particles, requires the manipulation of 2³⁰ × 2³⁰ matrices, which exceeds the capability of current supercomputers. Since the quantum computer itself is a quantum mechanical system, its temporal evolution also follows the laws of quantum mechanics, and it can be used to simulate the dynamics of another quantum system. The reason is that any physically relevant Hamiltonian can be mapped onto a Hamiltonian of a number of qubits (usually larger than the number of physical particles one intends to simulate). Schematically, the steps for simulating a quantum system are as follows:


1.) Represent the Hamiltonian of the system by the Hamiltonian of n qubits, this includes fine tuning interactions between the qubits in order for them to represent the original system faithfully.

2.) Choose an initial state of the system that shall be simulated, translate it to the qubit system and prepare the qubits accordingly.

3.) Let the qubit system evolve in time.

4.) Read out the final state and translate it back to the original system.
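The four steps above can be sketched with plain linear algebra for a toy system: two spin-1/2 particles coupled by a Heisenberg exchange interaction (a hypothetical choice purely for illustration; a real simulator must first map the target Hamiltonian onto its qubits). Setting ħ = 1 and the coupling J = 1:

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Step 1: the two-qubit Hamiltonian, H = (J/2)(XX + YY + ZZ) with J = 1.
H = 0.5 * (np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z))

# Step 2: prepare the initial state |01> (first spin down, second spin up).
psi0 = np.zeros(4, dtype=complex)
psi0[1] = 1.0

# Step 3: evolve under U = exp(-iHt), built from the eigendecomposition of H.
t = np.pi / 2
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
psi_t = U @ psi0

# Step 4: read out the measurement probabilities of the final state.
print(np.round(np.abs(psi_t) ** 2, 3))   # at t = pi/2 the excitation has swapped to |10>
```

Even this four-level toy example hints at the scaling problem: the state vector doubles in length with every added spin, which is exactly why a classical computer runs out of memory around thirty particles.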


Of course, this is very schematic, and many challenges have to be overcome in order to build a universal quantum simulator, but as illustrated, the basic idea is rather simple. Since it offers so many new possibilities, expectations for such a device are high. For example, when designing new drugs, a quantum simulator could potentially allow faster production and the development of more precise drugs with fewer side effects: it could simulate how a new drug interacts with the cells of a tumour at the molecular level, eliminating the need to guess a model that describes the working principle.


A universal quantum simulator that can simulate large physical systems is not yet within reach, but a lot of effort is being put into its development, and we have witnessed several breakthroughs over the last few years.


By Yanick Volpez, PhD student at the Condensed Matter and Quantum Computing group at the University of Basel, Basel, Switzerland

Valleytronics in a nutshell

Both classical and quantum computing face significant challenges. On the classical side, silicon field effect transistor technology is reaching the fundamental limits of scaling and there is no replacement technology which has yet demonstrated even comparable performance to the current generation of commercially available silicon CMOS. On the quantum side, scaling the number of entangled superconducting or trapped ion qubits to that required to solve useful problems is an enormous challenge with current device technology. Both fields stand to benefit from transformational devices based on new physical phenomena. Two-dimensional transition metal dichalcogenides (TMDs) possess a number of intriguing electronic, photonic, and excitonic properties.

A lack of inversion symmetry coupled with the presence of time-reversal symmetry endows 2D TMDs with individually addressable valleys in momentum space at the K and K’ points in the first Brillouin zone. This valley addressability opens the possibility of using the momentum state of electrons, holes, or excitons as a completely new paradigm in information processing.

Periodic semiconductor crystal lattices often have degenerate minima in the conduction band at certain points in momentum space. We refer to these minima as valleys, and devices which exploit the fact that carriers are present in one valley versus another are referred to as valleytronic devices. Though degenerate valleys are present in many periodic solids, in most cases it is impossible to address or manipulate carriers in one valley independently from another as the valley state of a carrier is not coupled to any external force we can apply. Thus it is not possible to construct valleytronic devices out of most materials. This is in contrast to spintronics, for example, where the electron spin is readily manipulated by magnetic fields through the electron spin magnetic moment or (less easily) by electric fields through spin-orbit coupling.

In some cases, carrier mass anisotropy along different crystal orientations can result in valley polarization, in which carriers scatter preferentially from one valley into another. This has been shown in diamond, aluminum arsenide, silicon, and bismuth at cryogenic temperatures. However, these materials still lack a strong coupling between the valley index (sometimes called the valley pseudospin) and any external quantity such as an applied field. It is not clear that mass anisotropy can be used to produce a useful device such as a switch, so we do not consider this class of materials in our discussion of valleytronics.

The recent emergence of 2D materials has provided a more encouraging space in which to explore manipulation and control of the valley index. 2D materials with hexagonal lattices such as graphene or transition metal dichalcogenides (TMDs) can have valleys at the K and K’ points in the Brillouin zone. But to detect or manipulate carriers selectively in one valley, we need some measurable physical quantity which distinguishes the two.

The 2H phases of 2D transition metal dichalcogenides lack inversion symmetry and as a result exhibit contrasting Berry curvatures and orbital magnetic moments between the K and K’ valleys. If the Berry curvature has different values at the K and K’ points, one can expect different electron, hole, or exciton behavior in each valley as a function of an applied electric field. If the orbital magnetic moment has different values at the K and K’ points, one can expect different behavior in each valley as a function of an applied magnetic field. Contrasting values of Berry curvature and orbital magnetic moment at the K and K’ points give rise to optical circular dichroism between the two valleys, which allows selective excitation with photons of right or left helicity. Monolayer 2D transition metal dichalcogenides meet this requirement and are the most promising candidates for valleytronic applications.
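The symmetry reasoning behind this paragraph can be summarized compactly. Writing $\Omega(\mathbf{k})$ for the Berry curvature:

```latex
\text{time reversal:}\quad \Omega(-\mathbf{k}) = -\Omega(\mathbf{k}),
\qquad
\text{spatial inversion:}\quad \Omega(-\mathbf{k}) = +\Omega(\mathbf{k})
```

If both symmetries were present, the Berry curvature would have to vanish everywhere. With inversion broken but time reversal intact, a nonzero, valley-contrasting value $\Omega(\mathbf{K}) = -\Omega(\mathbf{K}')$ is allowed; the orbital magnetic moment obeys the same relations, which is what produces the opposite responses of the two valleys to applied fields and circularly polarized light.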


Sketch denoting the circular optical dichroism of the K and K’ valleys in TMD monolayers


By Riccardo Pisoni, PhD student at the Ensslin Nanophysics group at ETH Zurich, Zurich, Switzerland



The Valleytronics Materials, Architectures, and Devices Workshop, sponsored by the MIT Lincoln Laboratory Technology Office and co-sponsored by the NSF, was held at the MIT Samberg Center on August 22-23, 2017.

X. Xu, W. Yao, D. Xiao, and T. F. Heinz, “Spin and pseudospins in layered transition metal dichalcogenides”, Nature Physics 10, 343 (2014)

Life of a PhD Student: Conferences

An integral part of being a graduate student is participating in research conferences. Today, I want to change gears from previous blog posts to talk about an important but often undervalued aspect of a researcher’s career: attending conferences. I will start by discussing the importance of conferences for young researchers and later share my personal experience with conferences so far. To begin with, let’s discuss what a typical physics conference entails. The word conference originates from the Latin conferre, which means ‘bring together’. The formal definition of a conference is ‘a formal meeting of people with shared interests’. A scientific conference is an event which generally brings together researchers of all levels of expertise and experience, working in a particular research area or group of areas, to present their results and discuss recent advances in the field. This is usually done through a combination of talks and posters by attendees, along with more informal coffee and dinner sessions.


That leads us to ask the following questions: why do scientists have to attend conferences? Isn’t the main goal of our job to work in a lab on something novel and unknown? Isn’t the conventional way of publishing papers enough to inform the community about the work we are doing? The simple answer is: not really. There are several benefits to attending conferences which make them an essential part of a scientist’s work.


First of all, disseminating research through an oral medium like a talk or poster is often more effective than simply publishing online or in a scientific journal. You are also able to receive feedback on your data and results at various stages of an experiment, which can help guide the project. Presenting to a broader audience also leads you to view your research from a different perspective and improves your communication skills.


Attending a conference will also help you learn about cutting-edge research in your own area. It can help you realize how important your current work is to the community and connect it to big-picture research goals. This often leads to new ideas or techniques whose validity you can assess at the conference itself and later implement in your own lab.


The third benefit is networking. Meeting colleagues working in the same field can help generate collective ideas, pool resources, and establish collaborations for future experiments. The importance of collaboration is obvious in present-day science; the most famous examples are LIGO, which led to the observation of gravitational waves, and CERN’s observation of the Higgs boson, two of the biggest scientific breakthroughs of the 21st century. Another benefit of networking for young researchers is the opportunity to assess their fit with different PIs and groups for future positions.


Last but not least, conferences offer a great opportunity to travel around the world, experience new cultures, and make new friends. They give you a chance to refresh mentally and can therefore be considered fun work time.


Our Innovative Training Network Spin-NANO considers scientific conferences an important part of training young Early Stage Researchers (ESRs) and plans to organize a conference or meeting every six months. The most recent meeting was organized at TU Delft in June 2017, together with our industry partners. It was a highly engaging two-day conference where ESRs and project partners introduced their work or their company, respectively, to the rest of the network. I really enjoyed learning about industrial culture and challenges from our partners, along with the scientific results of my fellow ESRs, through talks and conversations over coffee and lunch.


The meeting was followed by a Think Ahead Workshop, where we discussed topics like communicating research to the public and presentation skills, with immediate feedback on the presentations we had given earlier in the meeting, teaching us how to better disseminate our research. It was also the first time that all the ESRs came together; we had wonderful interactions during the meeting, and I am sure it will be the start of many fruitful collaborations in the coming years.


Another conference I attended after starting my PhD was Resonator QED, organized every two years by the Nanosystems Initiative Munich (NIM). It combines talks, tutorial talks, and poster sessions, and brought together scientists from different fields studying quantum electrodynamics (QED), including solid-state cavity QED, atomic cavity QED, circuit QED, single-photon sources, and quantum memories. This made the entire conference really interesting, as the scientific goals of these communities overlap but the systems and technologies they use are widely diverse. While it is beyond the scope of this blog to summarize the entire conference, I will outline two of the many interesting talks that were presented. Hopefully this will demonstrate the exciting work going on in developing quantum technologies on different platforms and spark your interest to find out more:


1) Nanocavity QED: from inverse design to implementation by Prof Vuckovic, Stanford, USA

In her talk, Prof Vuckovic described work in her group using nanophotonic structures. In one experiment, they showed strong light-matter coupling between a quantum dot (QD) and a photonic-crystal cavity, creating a quasiparticle called a polariton, and used it to observe a dynamic Mollow triplet [1]. In the later part of her talk, she discussed an inverse-design algorithm for obtaining more efficient photonic devices. For example, they have used their algorithm to design a compact wavelength demultiplexer (see Figure 1) [2]. This is interesting as it illustrates that methods from computer science are helping physicists advance integrated photonics.


Figure 1: Schematic of wavelength demultiplexer with one input and two output waveguide and design region (from ref: [2])
Figure 2: Optical micrograph of sample showing superconducting structure and DQD (from ref: [3])
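As an aside, the spirit of inverse design can be conveyed with a deliberately crude sketch: instead of hand-tuning a structure, one defines an objective and lets an optimizer adjust the design parameters. The toy "forward model", the phase-matching objective, and every parameter value below are my own inventions for illustration; the actual work in [2] uses a far more sophisticated adjoint-based electromagnetic solver.

```python
import numpy as np

def forward(n, k0=2 * np.pi / 1.55, dx=0.1):
    """Toy forward model: optical phase accumulated through a stack of
    dielectric pixels with refractive indices n (made-up geometry)."""
    return k0 * dx * np.sum(n)

def objective(n, target_phase):
    """Squared error between the achieved and the desired phase."""
    return (forward(n) - target_phase) ** 2

def optimize(n0, target_phase, lr=0.05, steps=200, eps=1e-6):
    """Inverse design by gradient descent, with gradients estimated by
    central finite differences (a stand-in for an adjoint solver)."""
    n = n0.copy()
    for _ in range(steps):
        grad = np.zeros_like(n)
        for i in range(len(n)):
            dn = np.zeros_like(n)
            dn[i] = eps
            grad[i] = (objective(n + dn, target_phase)
                       - objective(n - dn, target_phase)) / (2 * eps)
        # Step downhill, keeping the indices in a physically plausible range.
        n = np.clip(n - lr * grad, 1.0, 3.5)
    return n
```

Starting from a uniform stack, e.g. `optimize(np.full(10, 2.0), target_phase=10.0)`, the optimizer adjusts the pixel indices until the accumulated phase matches the target. Real photonic inverse design replaces the scalar phase with full electromagnetic simulations and the finite-difference gradient with the adjoint method, but the optimize-an-objective structure is the same.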

2) Strong coupling of a superconducting resonator to a charge qubit by Prof Ensslin, ETH Zurich, Switzerland

Prof Ensslin, whose group is in fact a member of the Spin-NANO network, talked about an experiment that combined a solid-state qubit with a superconducting quantum interference device (SQUID) array resonator (see Figure 2). The SQUID resonator operates in the microwave regime, its frequency is tunable with a magnetic field, and it was coupled to a GaAs double quantum dot (DQD). They were able to demonstrate the strong-coupling limit by showing vacuum Rabi mode splitting [3]. This will enable future quantum information processing experiments on this platform, also known as semiconductor circuit QED.
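For context, the vacuum Rabi splitting reported in [3] is the hallmark of the Jaynes-Cummings model of cavity QED: on resonance, the single-excitation states of the coupled qubit-resonator system split by twice the coupling rate, and "strong coupling" means this splitting is larger than the decay rates, so it can be resolved:

```latex
H_{\mathrm{JC}} = \hbar\omega_r\, a^\dagger a
  + \tfrac{1}{2}\hbar\omega_q\, \sigma_z
  + \hbar g \left( a^\dagger \sigma^- + a\, \sigma^+ \right)
% On resonance (\omega_q = \omega_r), the one-excitation doublet splits as
E_\pm = \hbar\omega_r \pm \hbar g
% Strong coupling: the splitting 2g exceeds the resonator and qubit decay rates
g > \kappa,\ \gamma
```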


I will conclude by saying that conferences will remain an important part of an academic career, and one should utilize them fully.


[1] K. Fischer et al., Nature Photonics 10, 163 (2016)

[2] A. Piggott et al., Nature Photonics 9, 374 (2015)

[3] A. Stockklauser et al., Phys. Rev. X 7, 011030 (2017)


By Samarth Vadia, PhD student at attocube and Nanophotonics Group of LMU Munich, Munich, Germany

From the lab into a computer

When I started my PhD in the lovely city of Toulouse I already knew that an increasingly large portion of the solid-state physics community was focusing on the study of 2D materials. Graphene was already well known even outside the scientific community and everyone was describing it as the miraculous material of the future. That was one of the reasons why I started looking for a position in this field.


What I actually work on are different 2D crystals which, contrary to graphene, act like semiconductors. This family is called transition metal dichalcogenides (TMDs), and in this Spin-NANO blog you will find many details about them: descriptions of their properties, the possibilities they open up for creating new devices or improving existing ones, as well as the challenges along the way. That’s why today I’m not focusing on any of this; I’d prefer to do something different.


I was just checking my news feed on Facebook when I saw a post by one of my former colleagues, who had just obtained a permanent position at École Polytechnique in Paris. It was a Nature Communications paper whose title immediately caught my attention: “A microprocessor based on a two-dimensional semiconductor”. I already knew that in 2011 a MoS2 monolayer-based field-effect transistor (FET) had been built and shown to be operational. However, this paper appeared to describe a more complex and integrated device, a full microprocessor. I read it as soon as I could.


Unfortunately, because of my limited knowledge of microprocessor logic and operation, I didn’t grasp every detail of the device, but I was still fascinated by the overall message. The team at the Institute of Photonics at Vienna University of Technology replaced the silicon in the FET channels with a MoS2 bilayer, achieving a microprocessor made of 115 transistors that is able to execute user-defined programs, perform operations, and communicate the results to external devices.


Microscope image of the TMD transistor microprocessor.


One intriguing property of this device is that the substrate can be bendable, opening up the possibility of flexible electronic devices. However, the most obvious advantage of replacing silicon with 2D crystals in transistors comes from the better geometric scaling and lower power consumption these materials could provide, which ultimately means smaller devices with longer-lasting batteries!


Obviously, this prototype microprocessor is still far from the performance of its commercially available counterparts, but that does not diminish my interest in the result, as it represents a proof of concept. These devices are feasible, and making them competitive with the ones already available is just a technological challenge, and thus just a question of time.


I’m talking about this topic because I work on the other side of the research process: fundamental research. My aim is principally to understand the intrinsic properties of these crystals, and this kind of study is mainly driven by curiosity rather than by an application goal. It is thus easy to lose the connection between what I’m actually studying and why so many people around the world are working on these materials. It’s true that in the introduction of every paper on TMDs you can read how many applications will become possible once our control over them is good enough, but these often seem distant. Reading that paper reminded me how close we are to turning those words into reality.


By Marco Manca, PhD student at LPCNO laboratory in Toulouse, France


Wachter, S. et al. A microprocessor based on a two-dimensional semiconductor. Nat. Commun. 8, 14948; doi: 10.1038/ncomms14948 (2017)