This is the blog of a PhD community of early career researchers funded through a collaborative European Innovative Training Network (ITN) from The Marie Sklodowska-Curie Actions (MSCA) in Horizon 2020. These researchers are exploring nanoscale solid-state spin systems in emerging quantum technologies.
Graphene’s isolation from its bulk sister crystal graphite blew open the field of near-2-dimensional solid-state physics, introducing us to novel physics such as massless electrons, due to the Dirac cone band structure, and technological possibilities such as spintronics, the long-range preservation of an electron’s spin for information processing. Since then, nearly every year we add to the palette of near-2D crystals, all with various properties and applications, such as semiconductors with optically addressable spin-valley isospin freedom, magnetic insulators with single-layer ferromagnetism and stacked antiferromagnetism, and even low-dimensional dielectrics that enhance the optical properties of neighbouring 2D crystals. It is safe to say that the field is maturing and will have far-reaching effects on future research and technology.
It would have been easy to assume that there were not many surprises left for graphene to give the community, having been studied so rigorously over the past decade. However, new research into twisted bilayer graphene has once again blown the field wide open, with access to rich veins of new and fascinating physics. Bilayer graphene, exactly as the name suggests, is simply two layers of graphene stacked atop one another. Such systems have been studied for their ability to open a tuneable band gap, not allowed in single-layer graphene, with applications to a number of quantum technologies such as quantum dots or quantum point contacts. A twisted bilayer graphene system simply takes one of these sheets of graphene and rotates it slightly off the usual alignment found in nature.
It’s not immediately obvious why simply twisting the two layers of a stacked system with respect to one another would change the material’s properties at all. The reason becomes clearer when you consider the crystal lattice structure from the top down as you change the angle between the two sheets. Long-range order can be seen when you superimpose the two twisted 2D crystals on top of each other, known as a moiré lattice. This long-range order can effectively be treated as a new crystal structure, with tuneable parameters given by the twist angle between the layers. So far it has been demonstrated that selecting different twist angles offers novel physics not observed in graphene before, such as non-BCS superconductivity and topological helical edge-state transport.
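To get a feel for the length scales involved, the period of the moiré pattern follows from simple geometry: for two identical lattices twisted by a small angle it grows as the inverse of the angle. A minimal sketch (the formula is standard geometry; the "magic angle" value of roughly 1.1° is taken from the twisted-bilayer literature):

```python
# The moire superlattice period for two identical lattices twisted by a
# small angle theta is lambda = a / (2 * sin(theta / 2)), where a is the
# lattice constant. For graphene (a ~ 0.246 nm) at the "magic angle" of
# about 1.1 degrees, the moire cell is tens of nanometres across, i.e.
# some fifty times larger than the atomic lattice.

import math

def moire_period(a, theta_deg):
    """Moire lattice period for a twist angle theta given in degrees."""
    return a / (2.0 * math.sin(math.radians(theta_deg) / 2.0))

a_graphene = 0.246  # nm, graphene lattice constant
print(round(moire_period(a_graphene, 1.1), 1))  # roughly 12.8 nm
```

This is why a tiny twist has such a dramatic effect: the electrons feel a brand-new, much larger periodic potential.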
Topological edge-state transport is one of the numerous systems desired for proposed topological quantum technologies, which offer strong protection against local disorder, often the limiting factor in current quantum processors, as disorder limits the time window in which a processor operates in a quantum regime. Chiral edge-state transport allows electrons of a specific spin to conduct robustly around the edge of a system, in a specific direction. This discovery adds a new dimension to the spintronic applications of graphene.
So-called “Magic Angle” twisting of bilayer graphene also opens up a new phase of unconventional non-BCS superconductivity. Its unconventionality comes from the incredibly low carrier density at the relatively high critical temperatures needed to access the superconducting regime. This suggests that the superconductivity is caused by electron correlations and ordering within the system, as opposed to the electron-phonon (electron-lattice vibration) interactions usually found.
These fascinating, easy-to-produce and incredibly tuneable systems not only open up new theoretical and experimental research in bilayer graphene, but also ask what other new phases of matter we might find by applying the same method to different 2D materials, or even stacked layers of different 2D crystals. This discovery has once again excited the 2D research community; it really does seem the possibilities are endless with these almost magical materials.
By Matthew Brooks, PhD Student of the Burkard Group at Universität Konstanz, Germany.
Many people spend their free time playing video games, but most video games must follow the laws of physics, which is why they are usually created by people who know at least a little physics.
The basic code of most video games contains simulations of Newtonian physics within the environment, along with collision detection, which solves the problem of determining when two or more physical objects in the environment cross each other’s paths. A package with such code is usually called a “physics engine”. It simulates the walking, running or driving of characters based on Newton’s laws. In addition, it simulates supporting effects such as explosions, the destruction of surrounding objects, or any other influence the character has on the environment.
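The core loop of such an engine is surprisingly compact. A minimal sketch of one update step, assuming a single point mass, a fixed 60 fps timestep and a flat ground (all the names and numbers here are illustrative, not taken from any real engine):

```python
# One "physics engine" update step: semi-implicit Euler integration of
# Newton's second law, plus a simple collision check against the ground.

GRAVITY = -9.81   # m/s^2
DT = 1.0 / 60.0   # one frame at 60 fps

def step(pos, vel, dt=DT):
    """Advance a point mass by one frame; bounce off the ground at y = 0."""
    vel = vel + GRAVITY * dt   # apply gravity to the velocity
    pos = pos + vel * dt       # move the object
    if pos < 0.0:              # collision detection with the ground
        pos = 0.0              # collision response: push back to the surface
        vel = -0.5 * vel       # bounce, losing half the speed on impact
    return pos, vel

# A character jumps with an initial upward velocity of 5 m/s:
pos, vel = 0.0, 5.0
for _ in range(600):           # simulate 10 seconds of game time
    pos, vel = step(pos, vel)
print(pos)                     # the character has settled back near the ground
```

Real engines do the same thing with 3D vectors, rigid bodies and far more careful collision response, but the structure of the loop is exactly this.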
For example, many video games contain effects like explosions, and realistic simulation of such effects is actually a big challenge for a physics engine. Early computer games used the simple expedient of repeating the same explosion in every circumstance. However, in real life an explosion can look different depending on the terrain, the altitude of the explosion, and the type of exploding material. Nowadays, the effects of an explosion can be modelled as a system of many particles propelled by the expanding gas. A particle-system model allows a variety of other physical phenomena to be simulated, including smoke, moving water, precipitation, and so forth.
Another simulation involving many particles is the simulation of cloth. A system of particles and springs based on Hooke’s law can be used to simulate the complex motion of non-rigid materials such as cloth. Fig. 2 illustrates a network of particles and springs that forms the basis for a waving-flag simulation. Such a flag may appear on top of a castle in an adventure game. Or perhaps the robes of a wizard character would be modelled in this way to achieve a realistic flowing motion as the wizard ran or waved his arms while casting a spell.
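The Hooke's-law idea can be shown in miniature. Below, a full 2D cloth mesh is reduced to a single hanging chain of three particles connected by springs; every constant (spring stiffness, mass, damping) is an arbitrary illustrative choice. At equilibrium the top spring must carry the weight of all three masses, so its stretch should approach 3Mg/K, which the simulation reproduces:

```python
# A toy version of the cloth model: point masses connected by Hooke-law
# springs (force = K * stretch), reduced to one hanging chain of 3 masses.

K = 100.0       # spring constant, N/m (illustrative)
M = 0.1         # mass of each particle, kg
G = 9.81        # gravity, m/s^2
REST = 0.1      # rest length of each spring, m
DT = 0.001      # integration timestep, s
DAMPING = 0.98  # crude velocity damping so the chain settles

# positions of 3 particles hanging below a fixed anchor at y = 0
ys = [-REST, -2 * REST, -3 * REST]
vs = [0.0, 0.0, 0.0]

def spring_force(y_top, y_bottom):
    """Hooke's law: upward force on the lower particle from the spring above."""
    stretch = (y_top - y_bottom) - REST
    return K * stretch

for _ in range(20000):
    anchors = [0.0] + ys[:-1]        # what each particle hangs from
    forces = []
    for i in range(3):
        f = spring_force(anchors[i], ys[i]) - M * G  # spring above + gravity
        if i + 1 < 3:                                # pull from the spring below
            f -= spring_force(ys[i], ys[i + 1])
        forces.append(f)
    for i in range(3):
        vs[i] = (vs[i] + forces[i] / M * DT) * DAMPING
        ys[i] += vs[i] * DT

# At equilibrium the top spring supports all three masses: stretch = 3Mg/K
print(round(-ys[0] - REST, 4))       # close to 3 * M * G / K
```

A flag or a robe is just this chain extended into a 2D grid, with extra diagonal springs to resist shearing.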
Another realistic simulation in games uses ragdoll physics. One application of this technique is simulating the death of a character. Early video games used manually created animations for characters’ death sequences. As computers increased in power, it became possible to do limited real-time physical simulations. A ragdoll is a system of multiple rigid bodies connected by a system of constraints that restrict how the bones may move relative to each other. The licensed game Jurassic Park: Trespasser exhibited ragdoll physics in 1998 but received very polarised opinions; most were negative, as the game had a large number of bugs. It is remembered, however, as a pioneer of video game physics.
All the examples discussed above rely on Newton’s laws of motion. However, sometimes games take into account the rules of non-classical (quantum) physics. One example is Quantum MiniGolf, created by one of our colleagues. In this game a ball can be in several places at once; it can diffract around obstacles and interfere with itself. In other words, the ball acts as a wavepacket. It’s a nice example of quantum physics in games.
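Why does a quantum golf ball behave so differently from a real one? A free Gaussian wavepacket spreads over time, and the standard textbook formula for its width makes the contrast vivid (the masses and times below are illustrative, not taken from the game):

```python
# A free Gaussian wavepacket of initial width sigma0 and mass m spreads as
#   sigma(t) = sigma0 * sqrt(1 + (hbar * t / (2 * m * sigma0**2))**2)
# (standard textbook result for a free particle).

import math

HBAR = 1.054571817e-34  # J*s, reduced Planck constant

def width(t, sigma0, m):
    """Width of a free Gaussian wavepacket after time t (SI units)."""
    return sigma0 * math.sqrt(1.0 + (HBAR * t / (2.0 * m * sigma0**2))**2)

# An electron localised to 1 nm spreads to tens of nm within a picosecond:
m_electron = 9.109e-31  # kg
print(width(1e-12, 1e-9, m_electron))

# ...while a 46 g golf ball localised to 1 mm effectively never spreads:
print(width(3600.0, 1e-3, 0.046))   # after a full hour, still ~1 mm
```

The enormous mass of everyday objects is what hides this behaviour from us, and why a quantum golf ball only exists inside a simulation.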
And there are many other examples of physics in games, like the nice gravity simulations in the Super Mario Galaxy series, or the cool character mechanics in the Grand Theft Auto series. Thanks to the people who work on physics engines, we have realistic video games that help us immerse ourselves deeper in these virtual worlds.
The Wikipedia definition of vacuum is “a region of space devoid of matter”. In other words, just empty space. Still, within the strange world of quantum mechanics, vacuum is not that “empty”. There are so-called quantum fluctuations (or vacuum fluctuations) which give rise to some interesting physical phenomena and can (to some extent) be controlled!
In a perfect crystal, the atoms are organised in a repeating pattern, called a crystal lattice. If one or more atoms do not follow this repeating pattern, we have a crystal defect. In everyday life, you might think of something with a defect as being broken. However, in solid-state physics, crystal defects prove to be very interesting, and of high importance! One example of a crystal defect is the nitrogen-vacancy (NV) centre in diamond. As the name suggests, the NV centre consists of a substitutional nitrogen atom (i.e. a carbon atom replaced by a nitrogen atom) next to a vacant position (called a vacancy).
The NV centre has a total of 5 valence electrons, two from the nitrogen atom and three from surrounding carbon atoms. The two electrons from the nitrogen atom alongside two from the carbon atoms form pairs, leaving one unpaired electron. The unpaired electron can efficiently trap an electron from a nearby donor, leaving the NV centre with a net charge of minus 1. We call these negatively charged NV centres NV–. For the remainder of this article, I will simply refer to NV– as NV centres.
Diamond is made of carbon atoms, where 99% of the carbon atoms are carbon-12 (12C for short). The nucleus of 12C consists of six protons and six neutrons, meaning that every nucleon has “paired up” with another nucleon. In addition, all six electrons also “pair up”. The significance of this pairing is that the diamond lattice is more or less spin-free. Nitrogen, on the other hand, has one additional proton and neutron, resulting in a net nuclear spin of 1. The spin-free lattice of diamond is a good host for the NV centre spin, resulting in a long spin coherence time, which is the time for which the spin points along one axis before flipping due to interactions with the environment. When irradiated with green laser light, the NV centre fluoresces brightly in red. However, the intensity of this fluorescence depends on the spin state of the NV centre. In other words, it is possible to address the NV centre spin optically!
In a previous blog post, Matteo explained how one could use the long spin coherence time of the NV centre to create a quantum network (if you have not read it yet, I encourage you to do so). For the sake of this article, I will quickly summarise the key concepts:
A photon emitted from the NV centre can be entangled with the spin state. Photons from two remote NV centres can interact to form an entangled pair, provided the photons are indistinguishable. Since each photon was already entangled with the spin of its host NV centre, and the photons are entangled with each other, the two remote NV centre spins become entangled with each other! This spin-spin entanglement can then form the link between two nodes in a larger quantum network. Entanglement between distant NV centres has been demonstrated in a famous experiment performed by Ronald Hanson’s group at TU Delft, where they used two NV centres separated by 1.3 km to demonstrate a loophole-free Bell inequality violation.
One of the biggest challenges for a large-scale quantum network based on NV centres is the low flux of coherent, indistinguishable photons. Firstly, only approximately 3% of the photons emitted from the NV centre are emitted along the so-called zero-phonon line (ZPL), a purely electronic transition resulting in indistinguishable photons with a wavelength of 637 nm. The remaining photon emission is accompanied by the emission of a phonon (a quantised vibration of the crystal lattice). Secondly, with a large refractive index of 2.4, total internal reflection at the diamond–air interface results in the photons primarily being emitted into modes propagating laterally in the diamond. Thirdly, the NV centre possesses a relatively long lifetime of 12 ns, resulting in a low flux of photons. Luckily, all three problems can be solved by engineering the optical environment, which slowly leads us back to the properties of vacuum. However, before we can start talking about optical engineering, we need to understand the mechanism behind spontaneous emission of photons.
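The second problem, total internal reflection, can be quantified in a few lines. With the refractive index of 2.4 quoted above, only photons that hit the surface within the critical angle escape; a simple solid-angle estimate (ignoring Fresnel losses at the surface) shows why only a few per cent of the light gets out:

```python
# Why photon extraction from diamond is hard: with refractive index n = 2.4,
# only photons within the critical angle escape through a flat surface.

import math

n_diamond = 2.4
theta_c = math.asin(1.0 / n_diamond)   # critical angle for total internal reflection
print(math.degrees(theta_c))           # about 24.6 degrees

# Fraction of an isotropic emitter's photons inside one escape cone
# (solid-angle fraction; Fresnel reflection losses are neglected):
fraction = (1.0 - math.cos(theta_c)) / 2.0
print(fraction)                        # only a few per cent
```

Everything else is trapped in laterally propagating modes, which is exactly why the optical environment has to be engineered.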
The rate of spontaneous emission of photons from an NV centre (or any other quantum system) is governed by the coupling between the NV centre and the previously mentioned vacuum fluctuations. Therefore, by controlling the vacuum fluctuations, one can alter the rate of photon emission. One way to control the vacuum fluctuations is to use an optical microcavity. An optical microcavity consists of two highly reflective, opposite-facing mirrors separated by a small distance. The light inside the microcavity bounces back and forth between the two mirrors, in a similar fashion to the sound wave inside a guitar. Only light satisfying the special condition that the cavity length equals an integer number of half-wavelengths will be transmitted through the cavity. In addition, provided the separation of the mirrors in the microcavity is comparable to the wavelength of the light, the vacuum fluctuations become confined. In other words, the vacuum fluctuations become stronger inside the cavity, resulting in a stronger coupling between the NV and the aforementioned vacuum fluctuations. Hence, the probability of photon emission into the cavity is enhanced. One can picture this as the photons being funnelled into one direction, with one specific wavelength. This enhancement of emission rates by confinement of the vacuum fluctuations is known as the Purcell effect. We can use mirrors to control the vacuum!
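The resonance condition mentioned above is easy to make concrete. For the NV zero-phonon line at 637 nm, the cavity transmits only when its length is an integer number of half-wavelengths (a simplified picture that ignores the field penetration into the mirrors):

```python
# The standing-wave resonance condition: a cavity of length L transmits
# light of wavelength lambda when L = m * lambda / 2 for integer m.
# (Simplified: real mirrors add a small phase penetration depth.)

def resonant_lengths(wavelength, m_max=10):
    """Cavity lengths (same units as wavelength) resonant with `wavelength`."""
    return [m * wavelength / 2.0 for m in range(1, m_max + 1)]

zpl = 637e-9  # NV zero-phonon line, metres
lengths = resonant_lengths(zpl)
print([round(L * 1e9, 1) for L in lengths[:4]])  # 318.5, 637.0, 955.5, 1274.0 nm
```

Since the shortest resonant length is only a few hundred nanometres, the mirror spacing really is comparable to the wavelength, which is exactly the regime where the vacuum fluctuations become strongly confined.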
With the use of piezoelectric nano-positioners, the separation between the two mirrors can be controlled with sub-nanometre precision. By tuning the cavity length to overlap with the NV zero-phonon line (637 nm), my colleagues at the University of Basel have experimentally demonstrated the Purcell effect for single NV centres. The experimental evidence backing up this claim is the reduction of the lifetime from 12 ns outside the cavity to 7 ns inside the cavity. This experiment shows that, with the use of two mirrors, one can address all three problems with the NV centre mentioned above: the broad emission spectrum, the low extraction efficiency and the long lifetime.
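Since the emission rate is simply the inverse of the lifetime, the two lifetimes quoted above translate directly into a rate enhancement:

```python
# The measured lifetimes (12 ns outside the cavity, 7 ns inside) give the
# enhancement of the total emission rate, since rate = 1 / lifetime.

tau_free = 12e-9   # s, NV lifetime in bulk diamond
tau_cavity = 7e-9  # s, NV lifetime inside the microcavity

rate_free = 1.0 / tau_free
rate_cavity = 1.0 / tau_cavity
enhancement = rate_cavity / rate_free
print(round(enhancement, 2))   # ~1.71: the NV emits roughly 70% faster
```

On top of emitting faster, that extra emission is funnelled preferentially into the cavity mode at 637 nm, which is what boosts the flux of useful, indistinguishable photons.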
The reduction of the lifetime, and hence increased flux of coherent photons, is an important step in the right direction for a large scale diamond based quantum network. The next steps will be to make better performing cavities, alongside the implementation of spin control inside the cavity. Then, in principle, two NV centres located in two remote microcavities can be entangled, and act as two nodes in a large scale quantum network!
So to look back on this article, the key takeaway message is that we can use two highly reflective mirrors to control the vacuum fluctuations, and hence alter the emission of photons from individual NV centres. Please have a look at my video for a visual introduction to this work: https://www.youtube.com/watch?v=xeovlntM66U&t=99s
Written by Sigurd Flågan, PhD student in Richard Warburton’s group at the University of Basel, Switzerland.
L. Novotny, Principles of Nano-Optics, 2nd ed., Cambridge University Press (2012).
G. Balasubramanian et al., Ultralong spin coherence time in isotopically engineered diamond, Nature Materials 8, 383–387 (2009).
H. Bernien et al., Heralded entanglement between solid-state qubits separated by three metres, Nature 497, 86–90 (2013).
B. Hensen et al., Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres, Nature 526, 682–686 (2015).
R. J. Barbour et al., A tunable microcavity, J. Appl. Phys. 110, 053107 (2011).
D. Riedel et al., Deterministic Enhancement of Coherent Photon Generation from a Nitrogen-Vacancy Center in Ultrapure Diamond, Phys. Rev. X 7, 031040 (2017).
Yes, it is true. The Earth contains the coldest place in the known cosmos. In fact, over the last century, scientists have been able to reach temperatures within a fraction of a kelvin (K) of absolute zero, the coldest temperature physically possible, equal to 0 K or −273.15 °C. This result is astonishing if you consider that the average temperature of the universe is around 2.7 K. The coldest known natural temperature is found in the Boomerang Nebula, in the constellation Centaurus, and has been estimated to be around 1 K.
How is it possible to reach such low temperatures artificially?
The journey toward cryogenic temperatures started almost 150 years ago, when Carl von Linde patented equipment to liquefy air, which has a boiling point of around 80 K. The process was rather simple, making use of pure thermodynamics and exploiting the Joule–Thomson expansion process. By adiabatically (i.e. avoiding heat exchange with the external environment) lowering the pressure of the gas through its expansion, the increase in intermolecular distance lowers the kinetic energy of the atoms, and therefore the temperature of the gas. The cooled gas was then reintegrated into the cycle, repeating the process until all the gas reached the liquid state. The liquefaction of air was subsequently followed by that of nitrogen and oxygen.
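A back-of-the-envelope sketch of why the cycle works: each pass through the expansion valve cools the gas by roughly the Joule–Thomson coefficient times the pressure drop, and the counterflow heat exchanger lets the cooled gas pre-cool the incoming gas, so the drops accumulate. The coefficient used below is only an order-of-magnitude textbook value for nitrogen near room temperature (in reality it varies with temperature), and the whole model is deliberately idealised:

```python
# Idealised Linde-cycle sketch: temperature drop per pass ~ mu_JT * delta_P,
# with a perfect counterflow heat exchanger so successive drops accumulate.
# mu_JT is a rough, order-of-magnitude value, not a precise constant.

MU_JT = 0.25      # K per atm, rough Joule-Thomson coefficient for N2 near 300 K
DELTA_P = 100.0   # atm of pressure drop across the valve per pass

T = 300.0         # starting temperature, K
passes = 0
while T > 77.0:   # boiling point of nitrogen at 1 atm
    T -= MU_JT * DELTA_P
    passes += 1
print(passes)     # a handful of idealised passes reach liquefaction
```

The real cycle runs continuously rather than in discrete passes, but the estimate shows why the method is practical at all: each kelvin of Joule–Thomson cooling is banked by the heat exchanger instead of being lost.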
About 30 years later, Dewar managed to liquefy hydrogen, exploiting a process similar to the one described above. However, this turned out to be a more complex problem, since the inversion temperature of hydrogen (and helium) is below 50 K at 1 atmosphere, whereas for air, nitrogen and oxygen it is well above 500 K. The inversion temperature is the temperature below which the Joule–Thomson process starts to be effective and leads to gas cooling. The Linde cycle therefore needs to be started at an already very low temperature. Dewar succeeded by using liquid nitrogen (77 K) to precool the hydrogen before starting the liquefaction process. To do so, however, he had to invent a new instrument that allowed him to thermally decouple the hydrogen from room temperature: the Dewar flask, a vacuum-insulated, silver-plated glass container. Sounds familiar? Yes, it is exactly what we use every day to keep our coffee warm: a Thermos!
In 1908 Kamerlingh Onnes, the “father of low temperature physics”, managed to successfully cool down 4He to its boiling point, reaching the record temperature of 4.2 K and opening the few-kelvin temperature range to science. He was awarded the Nobel Prize in Physics in 1913 “for his investigations on the properties of matter at low temperatures which led, inter alia, to the production of liquid helium”. Eventually, Onnes realized that by increasing the evaporation rate of the liquid, it was possible to reduce its temperature even further. In 1920 he successfully cooled a bath of 4He to 0.8 K by pumping away the helium vapour from the chamber. However, since the vapour pressure decreases exponentially with temperature, this method very quickly becomes inefficient.
At this point in history, the technique of gas liquefaction reached its bottleneck, and fundamentally different technologies were required. One solution was offered by adiabatic demagnetization techniques, first proposed for electronic spins in the late ’20s and developed further during the mid-20th century, culminating in the adiabatic nuclear demagnetization technique, which made it possible to reach 100 microkelvin by the end of the ’60s. The main idea behind this technique is to adiabatically disorder the magnetic moments in a material by removing the magnetizing field. The process of inducing disorder requires energy, which is absorbed from the external environment. However, as the name suggests, this technique requires magnetic materials, which are uncommon, and it also cannot maintain such low temperatures over long periods.
Another approach started to be developed during the second half of the last century, dealing with the dilution of 3He (a stable isotope of helium) in 4He, which enabled continuous refrigeration down to the millikelvin regime. The first prototype was realized experimentally in 1964 in the Kamerlingh Onnes Laboratorium at Leiden University, in the Netherlands. When a mixture of 3He and 4He is cooled to a sufficiently low temperature, for instance in a low-pressure liquid 4He bath, it tends to separate into two liquid phases: one of pure 3He, and the other of mostly 4He with a small amount of 3He, whose proportion is fixed at equilibrium. By evaporating 3He from the dilute phase, the ratio of the mixture is altered, and the system tries to return to equilibrium by moving 3He from the pure phase to the dilute phase. This is an endothermic process, which absorbs energy from the external environment as heat, leading to the cooling of the system. The equipment that exploits this process is known as a dilution refrigerator, the main apparatus for low-temperature physics experiments, such as computation with spin qubits, which Stephan and Yanick discussed in a previous blog post.
Towards the end of the 20th century, the novel method of atomic laser cooling proved successful in reaching even lower temperatures. This technique reduces the average velocity of a few atoms using fast energy pulses at precise frequencies, which can only be provided by lasers. A smaller average velocity means a smaller kinetic energy, corresponding to a reduced temperature of the atoms (in reality the technique is much more involved, so for more information I suggest this reading). Applying this procedure allowed researchers in 2015 to lower the temperature of a few rubidium atoms down to 50 picokelvin, that is, 50 trillionths of a degree above absolute zero!
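Simple Doppler laser cooling has its own fundamental limit, set by the natural linewidth of the cooling transition: the Doppler temperature T_D = ħΓ/(2k_B). For the rubidium D2 line this comes out around 146 microkelvin; the sub-Doppler and evaporative techniques used in the 2015 experiment go many orders of magnitude below it:

```python
# The Doppler cooling limit T_D = hbar * Gamma / (2 * k_B), evaluated for
# the rubidium D2 transition (natural linewidth Gamma ~ 2*pi * 6.07 MHz).

import math

HBAR = 1.054571817e-34  # J*s, reduced Planck constant
KB = 1.380649e-23       # J/K, Boltzmann constant

gamma_rb = 2 * math.pi * 6.07e6        # rad/s, Rb D2 natural linewidth
T_doppler = HBAR * gamma_rb / (2 * KB)
print(f"{T_doppler * 1e6:.0f} microkelvin")
```

Getting from this limit down to picokelvin is exactly what makes the more elaborate cooling schemes described in the reading above necessary.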
Eventually, after this long story, I am pretty sure you are wondering: why are scientists interested in reaching such low temperatures?
One important phenomenon that only happens at low temperature is superconductivity. In 1911 Onnes observed that the resistance of mercury dropped to zero as the temperature reached 4.2 K: the first superconducting transition, and the first superconductor. This discovery revolutionized the field of condensed matter physics and led to powerful technologies such as maglev trains and magnetic resonance imaging (MRI). However, even 100 years after the discovery of the first superconductor, scientists are still discovering new materials that turn superconducting, and no general theory able to fully describe this phenomenon exists yet.
By Fabio Ansaloni, PhD student at the Centre for quantum devices group at the University of Copenhagen, Copenhagen, Denmark.
F. Pobell, Matter and Methods at Low Temperatures, Springer (2007).
M. A. Kasevich et al., Phys. Rev. Lett. 114, 143004 (2015).
W. D. Phillips, Laser Cooling and Trapping of Neutral Atoms, Nobel Lecture (1998).
We are constantly connected to the Internet. With our computers, our smartphones, our cars, our fridges (mine is not, yet, but you get the idea). In its very first days the Internet was a very rudimentary, yet revolutionary, connection between computers. It enabled one computer on the network to send messages to any other computer on the network, whether it was directly connected to it (that is, with a cable) or not. Some of the computers on the network acted as routing nodes for the information, so that it could be directed toward its destination. In 1969 there were four nodes on the then-called ARPANET. By 1973 there were ten times as many. In 1981 the number of connected computers was more than 200. Last year the number of devices capable of connecting to the Internet was 8.4 billion (with a b!).
Computers on their own are already great, but there is a whole range of applications that without a network infrastructure would be inaccessible. Do you see where I am going?
A couple of posts ago Stephan and Yanick explained what a quantum computer is, how to make it and, most importantly, why we should bother to build one. If you have not read their posts yet I strongly suggest you do so, since they are two really nice reads. But if I had to shrink them to a handful of words: quantum computers will exploit the weird laws of micro(nano!)scopical objects to solve some problems way faster than any future normal computer. They will do so by encoding the information in quantum systems, which will therefore be quantum information. A Quantum Internet is a network capable of routing this quantum information between quantum computers. We can already foresee some nice applications for this quantum network, like establishing an inviolable secure communication link between any two nodes, connecting far-apart telescopes to take ultra-sharp images of stars and galaxies, or linking small quantum computers into one huge powerful one (a bit like cloud computing, but much stronger). In all likelihood the best ways to use these new technologies will come over time, with applications we cannot anticipate. For example, in the early 1960s the invention of the laser was welcomed as “a solution looking for a problem”. Now we have lasers everywhere! I think the same thing will happen for quantum computers and their internet.
One of the quirks of quantum information is that it cannot be copied. When you try to copy it, you irreversibly destroy the original information. This is, under the hood, what makes quantum communication so secure; but on the other hand, it could also make sharing quantum information excessively hard! If we had to rely on a single quantum system (a photon, for example, a particle of light) travelling undisturbed across the globe, we might as well stop now.
Fortunately quantum mechanics offers us a solution to this problem: teleportation. It’s not like Star Trek: we can’t teleport people, but we can teleport quantum information. We can transfer the information stored in a single atom in Amsterdam to an electron in London, without reading the atom, without knowing what the information is and, crucially, without making it “physically” travel the distance. Of course this does not come for free; we have to pay a price, and that price is entanglement.
Quantum mechanics predicts the possibility of a rather weird phenomenon. If you take two separate quantum objects, say two nanometre-sized M&M’s, and you make them interact in some particular way, you can make them entangled: the two M&M’s lose their individuality and can only be described as parts of one two-M&M’s system. Let’s say that our nano-M&M’s can each be only one of two colours: red or blue. A non-entangled scenario would be, for example, the first red and the second blue. Each M&M has its own colour, its own identity. Now let’s take the two M&M’s (one red and one blue), shuffle them a little bit, just to lose track of which one is which, and then send one to you and one to me. When you observe the colour of your M&M, you immediately also know the colour of mine! While this is interesting, it is not quantum entanglement. This is called correlation, and it is not quantum at all. Each M&M always had its own definite colour; we simply didn’t know which was which.
When you entangle the two M&M’s, they actually lose their own colour! You can make an entangled two-M&M’s system in which you know the M&M’s will have different colours when you observe them, but until then their colours are simply not assigned. This effect is so weird that even great scientists believed it was too weird to be true. Now we have the tools to prove with experiments that the effect is real, and we can exploit it to build new technologies, such as a quantum internet. The idea is to share entangled objects between the nodes of the network and protect their entanglement from the noisy non-quantum environment in which we live, such that when we need to send a quantum bit of information, we can spend the entanglement of the shared objects to teleport the (qu)bit. But how can we share entanglement on this network?
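The experiments that distinguish real entanglement from mere correlation measure the CHSH quantity: any "the colours were decided in advance" model obeys |S| ≤ 2, while quantum mechanics on a maximally entangled (singlet) pair, whose measurement correlations are E(a, b) = −cos(a − b), reaches 2√2. A small sketch using the standard optimal angles:

```python
# CHSH test: S = E(a,b) + E(a,b') + E(a',b) - E(a',b').
# Local hidden-variable ("pre-decided colours") models give |S| <= 2;
# a singlet state, with E(a,b) = -cos(a - b), reaches 2*sqrt(2).

import math

def E(a, b):
    """Correlation of measurements at angles a and b on a singlet pair."""
    return -math.cos(a - b)

# the standard angle choices that maximise the quantum value
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, -math.pi / 4

S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(abs(S))   # 2*sqrt(2) ~ 2.83 > 2: no classical story can explain it
```

This is exactly the quantity that the loophole-free experiments measure: seeing |S| > 2 in the lab rules out any shuffled-M&M’s explanation.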
This is where our diamonds finally find a place. Diamonds are crystals made of carbon atoms arranged in a very compact way. Sometimes the crystal may have some defects, like an intruder atom (say nitrogen) or a missing carbon atom (what we call a vacancy). If we are lucky enough, these two defects happen one next to the other. Such a system is called an N-V center (nitrogen-vacancy).
N-V centers are one of the most promising candidates to act as nodes of a quantum network.
A node is made of three ingredients: a processor to handle information, a memory to temporarily store it, and a link to the other nodes.
We use the spin of a pair of electrons localized around the N-V center as the processor of our node. We can read its state using lasers and manipulate it using microwave pulses. The information stored in this spin has a short lifetime: the system loses the information we store in it too quickly to serve as a reliable memory. Luckily enough, Nature provides us with a strong quantum memory not far from the N-V center. About 99% of the carbon atoms in nature are C12, which is spin-less. Most of the remaining carbons are C13, which has a spin (due to the additional neutron in the nucleus). We can talk to these C13 atoms in the diamond thanks to their spin-spin interaction with our electrons in the N-V center. Since the C13 spin “feels” the electronic spin, we can manipulate the latter to perform operations on the former.
The last ingredient of the node is the ability to link it to other nodes. We do this by making the N-V center emit a photon that is entangled with the electronic spin (like the M&M’s). A second N-V center, in a second diamond, in a different place, does the same. Then, by making the two photons interact, we can transfer their entanglement to the two electronic spins, entangling the nodes. This entanglement can then be used for quantum network applications.
We are living the second quantum revolution. We will not be just spectators of quantum mechanics, we will use it as a technology. In a couple of decades everybody will have access to quantum computers connected through a quantum internet, to design drugs, to optimize airports, to play videogames and who knows for what else. Aren’t you excited? I certainly am.
Two weeks ago we witnessed an event that will probably remain in the history books; I am speaking about the successful test of the Falcon Heavy, which put SpaceX onto the front pages of pretty much every magazine even remotely interested in space technology.
To sum up, the test was an astonishing success, and had a large impact on social media thanks to the eccentric personality of Elon Musk, the founder of SpaceX. In fact, the very first test of a new type of rocket is extremely risky (due to the high possibility that everything will blow up on the launch platform), so no one offered to load a satellite on this run. Instead of simulating the load with concrete weights in the payload capsule, Musk decided to send into space his own cherry-red Tesla Roadster, playing David Bowie’s “Life on Mars?” and with the first SpaceX space suit sitting in the driver’s seat as astronaut. The images are literally astonishing and look photo-edited, but apparently this is one of the times when reality overcomes imagination. From a practical point of view, this is the most powerful rocket currently in operation; it is extremely cheap (£90 million per launch) and has opened the door to a new space race.
On the wave of this excitement, I think it is interesting to focus our attention on how research on two-dimensional materials and van der Waals heterostructures could also prove useful for space exploration technologies. Two of the most pressing problems we have to solve to gain easier and broader access to our space neighbourhood are rocket efficiency, to lift heavy weights from the Earth’s surface, and energy harvesting once in space. The first is currently under heavy development (invigorated by the entry on stage of private companies such as SpaceX, Blue Origin and Orbital ATK); the second is currently dominated by multi-junction inorganic solar cells, which gather solar light to produce electricity.
These cells hold the efficiency record, at over 45%, but such high efficiency comes at the cost of increasing complexity and manufacturing price: each device is composed of several different sub-cells, each of them absorbing in a specific range of wavelengths. Moreover, they need to be carefully engineered to fold up during the launch and to fully extend and orient themselves once they reach their final destination. Last but not least, there is a growing demand for energy to sustain larger and more complex satellites, or even human missions.
This is the scenario in which two-dimensional materials, especially transition metal dichalcogenides (TMDs), could make a contribution. They have several desirable properties that make them excellent candidates for solar-energy harvesting.
Light absorption: a single-layer TMD flake can absorb an extremely large fraction of photons over a significant part of the visible spectrum. This is a key property for any material employed in light-harvesting devices. Considering that these materials have sub-nanometre thickness, their absorption-to-weight ratio is very promising for photovoltaic applications, especially in the space sector, where weight is a critical parameter.
Efficient carrier separation: bringing two different TMDs into contact creates a heterojunction with type II band alignment. This means that the electrons are confined in one material, while the holes (electron vacancies) are confined in the other. In a heterojunction, the recombination process (exactly the opposite of the photovoltaic effect) is a serious obstacle to overcome in converting the absorbed light into electrical current.
Heterobilayers have extremely efficient carrier separation: when a photon is absorbed, the two carriers (electron and hole) are split into the two different layers in less than 50 femtoseconds. This property could be the key to massively increasing device efficiency.
Flexibility: since they are so thin and flexible, they can be integrated into many more elements of a spacecraft; instead of big ships with huge solar panels, the cells can be built into the surface of the spacecraft modules themselves. This would optimize light-collection efficiency by exploiting the entire useful surface. Their flexibility also enables new solutions for compressing the solar panels during launch.
Engineering materials and devices: adopting the same strategy used for multi-junction solar cells, many isolated heterobilayers can be stacked together to form a similar structure, in which each pair can be tailored at will to absorb most efficiently in a specific spectral region. This maximizes the energy collected and therefore the power gained; moreover, the overall thickness of such a device remains negligible thanks to the two-dimensional nature of these materials.
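The absorption-to-weight advantage and the stacked-layer idea above can be put in rough numbers. The sketch below is a back-of-envelope illustration in Python, not a device calculation: the monolayer thickness and density, the “as much sunlight as ~50 nm of silicon” equivalence (in the spirit of the Bernardi et al. reference below), and the 8% per-pair absorption are all assumed illustrative figures.

```python
# Back-of-envelope estimates for 2D photovoltaics (illustrative numbers only).

def areal_mass(thickness_nm, density_g_cm3):
    """Mass per unit area (g/cm^2) of a uniform film; 1 nm = 1e-7 cm."""
    return thickness_nm * 1e-7 * density_g_cm3

# 1) Absorption-to-weight: assume a MoS2 monolayer absorbs about as much
#    sunlight as ~50 nm of silicon, and compare the mass of the two films.
mos2 = areal_mass(0.65, 5.06)   # assumed monolayer thickness and density
si = areal_mass(50.0, 2.33)     # equally absorbing silicon film (assumed)
advantage = si / mos2           # mass saving for the same absorption

# 2) Stacking: if each heterobilayer pair independently absorbs a fraction
#    of the light reaching it, the stack absorbs 1 - prod(1 - a_i) overall.
def stacked_absorption(absorptions):
    transmitted = 1.0
    for a in absorptions:
        transmitted *= 1.0 - a
    return 1.0 - transmitted

total = stacked_absorption([0.08, 0.08, 0.08])  # three pairs at 8% each

print(f"absorption-per-weight advantage: ~{advantage:.0f}x")   # ~35x
print(f"three-pair stack absorbs: {total:.1%} of incident light")  # 22.1%
```

Even with these crude assumptions, the point survives: per unit mass, a monolayer absorber outperforms a bulk film by more than an order of magnitude, and a handful of stacked, spectrally tailored pairs can capture a substantial fraction of the incident light.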
To conclude, we are at the beginning of a new era of space exploration that can offer unprecedented benefits in countless fields, from the global market to fundamental research, from resource access to aerospace engineering. In this space race, two-dimensional materials can express their full potential and lead to a major breakthrough that will bring us a step closer to the stars.
Per aspera ad astra
By Alessandro Catanzaro, PhD student at the LDSD group at University of Sheffield, Sheffield, United Kingdom.
Bernardi M, Palummo M, Grossman JC. Nano Lett. 2013;13(8):3664-3670. doi:10.1021/nl401544y.
Yu Y, Hu S, Su L, et al. 2014. doi:10.1021/nl5038177.
The exotic idiosyncrasies of topological insulators are fascinating and would no doubt provide fertile ground for this blog post. Nevertheless, I want to write about something more important and urgent for our generation of young scientists: the mental health issues that many PhD students struggle with but seldom mention. Graduate school is an exciting time, as we young scientists work on the cutting edge of science, testing new and exciting ideas. In striving for excellence, however, many of us struggle to balance our academic and personal lives, as I have since I embarked on my academic career.
Constant high stress levels can severely affect our work, our physical and emotional health, and our social life. There is a tacit understanding that a PhD is difficult but, much like a boastful sleep-deprived marine or a medical intern on their 30th hour in the emergency ward, we also face an unhealthy culture that discourages many of us from speaking openly about our “weaknesses”. As a PhD student, job anxiety and high stress levels are seen as part of being an adult with a responsible job. We do not want the colleagues with whom we are racing for publications to find out that we may not be doing so well personally, so we meet them with confident, bright faces, hiding the ever-looming fears of uncertain futures and failures inside.
In the few studies conducted at US and Belgian universities (see reference links at the end), up to 50% of graduate students from different academic fields were reported to be facing some form of mental health problem. Most of them reported one or more of the following:
constantly feeling unhappy
losing sleep because of anxiety
loss of appetite and concentration
feeling like they have poor control over their job and its progress
not being able to overcome difficulties
failing to enjoy day-to-day activities
low career optimism and feeling hopeless
feeling scared of making decisions
feeling low on energy
failing to maintain a work-life balance
A large fraction admitted to having thought about suicide. This puts many PhD students at risk of developing severe psychiatric problems. However, we tend to casually accept feeling depressed and constantly dealing with high levels of stress as part of “doing a PhD”.
Although self-reported data can be unreliable, particularly with respect to recalling activities over long time spans, it is still shocking to see a bimodal distribution of average sleep, with peaks at 2.5 and 5.7 hours, and many students indicating frequent depressive symptoms.
I think a huge change can be made simply by acknowledging that these are mental health issues, that they are not negligible, and that they can be treated. We really need to change the way we look down upon colleagues who confess to having a harder time than us; we tend to smirk at them, thinking they are not as smart.
We need to realize that there can be many more complicated factors at play. For example:
having to support a family and larger financial pressures
starting a PhD in a foreign country or group can be very intimidating. Beyond adapting to a new culture, it may also involve relearning social roles and their hierarchies. For example, the role of a teacher or professor may be perceived very differently in different cultures.
female researchers may find themselves dealing with an implicit culture of misogyny, in which they not only feel greater pressure to prove themselves but also have considerably less room to make mistakes. Consequently, they may end up with depression, low self-esteem and impostor syndrome.
lacking a social support network or friends
balancing research, paying the bills, a personal life, etc.
We can perhaps learn from Martin Seligman’s theory of “learned helplessness”, in which animals subjected to uncontrollable stress or punishment eventually gave up even trying to solve their problems and showed much worse cognitive and social skills down the line. We often feel helpless too, when we cannot control the results of our experiments or fail at the multiple tasks we have to manage simultaneously as PhD students. Professors (or PIs) may be entirely unaware of these tumultuous events happening inside their students, and most of us are afraid to take our PIs into confidence regarding these issues. However, many universities now offer psychological counselling and help. We should realize that our mere understanding can play a crucial role in helping someone deal with these problems. So, kindly support your PhD friends who are struggling and encourage them to seek external support, as they themselves may feel helpless or trapped in their own predicament.