Physics World
https://physicsworld.com

Physics World Stories Podcast
Physics is full of captivating stories, from ongoing endeavours to explain the cosmos to ingenious innovations that shape the world around us. In the Physics World Stories podcast, Andrew Glester talks to the people behind some of the most intriguing and inspiring scientific stories. Listen to the podcast to hear from a diverse mix of scientists, engineers, artists and other commentators. Find out more about the stories in this podcast by visiting the Physics World website. If you enjoy what you hear, then also check out the Physics World Weekly podcast, a science-news podcast presented by our award-winning science journalists.

Engineering world-changing materials: Nicola Spaldin on the importance of curiosity-driven research and what it means to be a physicist https://physicsworld.com/a/engineering-world-changing-materials-nicola-spaldin-on-the-importance-of-curiosity-driven-research-and-what-it-means-to-be-a-physicist/ Tue, 11 Jul 2023 10:00:59 +0000 Materials scientist Nicola Spaldin talks about her award-winning research and her somewhat circuitous career path

When Nicola Spaldin began studying natural sciences at the University of Cambridge in 1988, she planned on becoming a physicist, but then quickly reconsidered. “After about the second lecture I completely changed my mind,” she recalls. “I thought ‘I’m absolutely not clever enough to be a physicist.’ Everybody was very brilliant and I was not.”

Yet it seems Spaldin was vastly underestimating herself. Now a professor of materials science at ETH Zurich, she won two major awards for physics last year: the EPS Europhysics Prize and the Hamburg Prize for Theoretical Physics. Both accolades cited Spaldin’s pioneering work on the theory of magnetoelectric multiferroics – materials that are both ferromagnetic and ferroelectric. These properties are rarely found together, making it very difficult to engineer substances with both, but they have many exciting potential applications, from microelectronics to medicine.

At first glance, Spaldin’s path to materials science might appear circuitous. After focusing on chemistry and geology at Cambridge, she did a PhD in theoretical chemistry at the University of California, Berkeley – on the optical properties of nanoscale semiconductors – before working as a postdoc on ferromagnetism in the applied physics department at Yale University. But despite apparent changes in discipline, all her projects were to do with materials, which Spaldin points out lie at the intersection of chemistry, physics and engineering. “In that sense I haven’t really changed much,” she says. “I’ve just changed the department I’ve done it in.”

So, although fundamental physics is the basis for her research, she considers herself a materials scientist first and foremost. Her passion for this field shines through when she talks about how eras of human civilization – from the Stone Age onwards – have been defined by and named after the materials we have learned to wield. And she believes that discovering new materials is still the key to shaping our future. “Had I known about materials science at school, I probably would have studied it as an undergraduate,” she says.

Mixing materials

Ferromagnetic materials are substances that can form permanent magnets. Ferroelectric materials, in contrast, can form electric dipoles on a macroscopic scale when exposed to an applied electric field, retaining this polarization even after the field is removed. But when doing her postdoc on ferromagnets at Yale, Spaldin found herself working alongside people studying ferroelectrics and was struck by how different the two types of materials are. She therefore began to wonder if the two properties could co-exist as a “multiferroic” material. Spaldin discovered that both phenomena are the result of different electron configurations in the respective materials.

Ferromagnetic elements – such as iron, cobalt and nickel – have partially filled d-electron shells, and the spins of their unpaired electrons add up to an overall magnetic moment. Ferroelectric materials, on the other hand, tend to contain ions with empty d shells, which form strong chemical bonds with neighbouring atoms to create electric dipoles. “Materials that are good at making magnetic moments and materials that are good at being ferroelectric are in different places in the periodic table,” says Spaldin. “There’s no law or rule that they can’t do both, it’s just hard to find the combination of atoms that can.”

Applied curiosity

Having proved that there is no fundamental law of physics keeping ferromagnetism and ferroelectricity separate, Spaldin and her colleagues set about devising materials with both. It is not easy, though, and there are two main ways of tackling the challenge. One is to design materials that contain both types of atom: these often have a crystal structure combining magnetic atoms, oxygen and the positive metal ions found in ferroelectric materials.

Another approach is to embed magnetic atoms within lattices whose shapes are amenable to creating dipoles. For example, the unusual layered structure of yttrium manganite allows electric dipoles to form from the relative displacements of the yttrium ions and the manganese–oxygen substructure, despite the presence of the magnetic manganese ions.

Spaldin enjoys the challenge, but that isn’t the only reason this is such exciting research. “They’re hard to make so it’s fun to engineer them,” she says, “but then when you have them it means you can control or tune the magnetic properties using an electric field.”

One potentially transformative application would be in microelectronic technologies, where magnetism is used for data storage. The magnetic properties of components currently have to be controlled with magnetic fields, but these require significantly more energy to generate than electric fields. A material whose magnetic properties could instead be controlled with an electric field could therefore mean much cheaper and more sustainable electronics.

The blackboard in Nicola Spaldin's office

Meanwhile, the reverse capability – of controlling a material’s electrical polarization with a magnetic field – is of great interest in medicine. For instance, researchers are already working on targeted drug-delivery techniques, whereby an external magnetic field guides multiferroic particles through the body, and then changes their electric polarization to release the drug where it is needed.

These innovations notwithstanding, Spaldin is keen to emphasize the importance of purely blue-skies research, which is what originally led her down this path. “When we started playing with multiferroics there were no device physicists waiting for them,” she explains. “It wasn’t that they didn’t exist, it was that nobody had really thought of trying to combine the properties. So I think there has to be a bit of completely, absolutely curiosity-driven work – ‘just give this a try and see what happens’.”

Diverse perspectives

Reflecting on her career path to date, Spaldin speculates that she was initially put off physics thanks to the prevailing inflexible view of what a theoretical physicist should look like. “The way physics was taught, it wasn’t approached with what today we call a ‘growth mindset’ for learning,” she says, explaining that the attitude was that “you were there and you were brilliant or you should go and do something else instead”.

Spaldin takes a different approach with her own research group, aiming to have as diverse a team as possible, not only in terms of gender and background, but also in terms of people’s strengths and potential.

“I want to have a group that has many different perspectives and many different aspects of excellence,” she explains. “There’s still a rather narrow picture of what an excellent scientist is, even a young scientist who still really has time to develop, and that’s not always very diverse or different from the previous picture of what a scientist should be.”

This is also relevant to Spaldin’s science-policy work. She is currently a member of the European Research Council’s (ERC’s) Scientific Council, which is required to distribute its allotted budget based only on excellence. “But how to assess excellence is of course an open question,” she says. The council discusses and commissions studies on how best to evaluate the merit of a scientist or their proposals. Changes to evaluation criteria might also help to promote gender parity in the European scientific community, where women continue to be under-represented, particularly in physics.

Spaldin notes that under traditional ways of assessing researchers, women were more likely to be evaluated poorly. For instance, if they had more responsibility for looking after a family, they might have less freedom to move for work. The ERC has therefore removed the question of mobility from applicants’ CVs to try to level the playing field. “These developments would be good for everybody,” Spaldin adds, “but maybe particularly good for women.”

Finding what’s fun

Although Spaldin’s research was ignited by her fascination with multiferroics, her work could have a more far-reaching impact in physics. For example, she is now also looking at the possibility of engineering a room-temperature superconductor. Although it is unlikely that a multiferroic material would exhibit this property, progress could stem from the same underlying theoretical picture.

In another project, more cross-disciplinary still, Spaldin is working with astrophysicists to detect dark matter. There are many proposed theoretical descriptions of this enigmatic substance, and Spaldin is helping to predict how each hypothetical kind of dark-matter particle would interact with electrons within different materials.

So far, this project has mostly involved whittling down the possible characteristics of dark matter by excluding models that would imply interactions that have not been observed in existing detectors. But Spaldin and her collaborators are also thinking about which materials might make for promising new detectors, depending on the remaining possible descriptions. “This for me is just so much fun because it’s so far from my comfort zone,” Spaldin says. “It’s almost a different language that we speak in the project, so I’m enjoying it very much.”

And this enthusiasm is what she returns to again and again. Of course, achieving a room-temperature superconductor or detecting dark matter would be revolutionary, whether in the infrastructure of our daily lives, or in our fundamental understanding of the cosmos. But even these profound prospects seem to be secondary factors in Spaldin’s motivation. “Mostly I really want to work on problems that are fun,” she says. “Life is short!”

Positioning system uses cosmic muons to navigate underground https://physicsworld.com/a/positioning-system-uses-cosmic-muons-to-navigate-underground/ Tue, 11 Jul 2023 08:19:24 +0000 https://physicsworld.com/?p=108860 System can be used where satellite navigation systems fail

Cosmic muon navigation

Cosmic muons could provide a practical alternative to global navigation satellite systems (GNSSs) in places where radio signals cannot reach. That is the conclusion of the muPS collaboration, which has built a system and shown that it works deep in the basement of a university building. The muPS team was led by Hiroyuki Tanaka at the University of Tokyo, and its new system could allow users to navigate in indoor, underground and underwater environments.

GNSSs such as GPS work by transmitting radio signals from a group of satellites to a receiver on the ground. While GNSSs have revolutionized how we get around, their signals are rapidly attenuated by materials like metal, concrete, rock and water – limiting their use indoors, underground and underwater.

In 2020 Tanaka’s team introduced an entirely new approach that tracks the position of a receiver using cosmic muons. These particles are created when high-energy cosmic rays collide with Earth’s atmosphere, and they are constantly raining down on us.

Moving through mountains

“Cosmic muons are not intercepted like radio waves, since they can penetrate even through pyramids or mountains, making the technique suitable for universal indoor or underground navigation,” Tanaka explains.

Dubbed the muPS Wireless Navigation System (muWNS) by its inventors, the system replaces satellites with a network of three or more reference muon detectors, which are synchronized with a receiver detector. These reference detectors could be set up on roofs or higher floors for indoor navigation, or at ground or sea level for navigation through underground or underwater environments.

The system works by identifying muons that have passed through one of the reference detectors and then passed through the receiver. These muons travel at close to the speed of light, which allows muWNS to calculate the distance between the reference detector and receiver. By doing this several times using different reference detectors, the system uses triangulation to determine the receiver’s position.
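
To make this concrete, below is a minimal sketch of time-of-flight trilateration of the kind described above. It is not the muPS team's code: the detector coordinates, the clock jitter and the use of a generic least-squares solver are illustrative assumptions, and a real system must also match the same muon at both detectors and keep the clocks synchronized.

```python
# Illustrative sketch of time-of-flight trilateration (hypothetical numbers,
# not the muPS/muWNS implementation).
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # muons travel at very nearly the speed of light (m/s)

# Assumed reference detectors on an upper floor (x, y, z in metres)
references = np.array([
    [0.0, 0.0, 20.0],
    [30.0, 0.0, 20.0],
    [0.0, 30.0, 20.0],
    [30.0, 30.0, 20.0],
])

def simulate_delays(receiver, refs, clock_jitter=1e-9):
    """Muon flight times from each reference to the receiver, with timing noise."""
    distances = np.linalg.norm(refs - receiver, axis=1)
    return distances / C + np.random.normal(0.0, clock_jitter, len(refs))

def estimate_position(delays, refs, guess=np.zeros(3)):
    """Find the point whose distances to the references best match c * delay."""
    measured = C * np.asarray(delays)
    residuals = lambda p: np.linalg.norm(refs - p, axis=1) - measured
    return least_squares(residuals, guess).x

true_position = np.array([12.0, 7.0, -5.0])   # a receiver in the basement
delays = simulate_delays(true_position, references)
print(estimate_position(delays, references))  # recovers the position to within a metre or so
```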

While the concept is simple, Tanaka’s team had to overcome several challenges while developing muWNS. The initial design required the receiver to be wired to each reference detector to guarantee precise time synchronization, which severely restricted the range and usefulness of the system.

Precision timekeeping

To get around this problem, the team fitted the detectors with ultra-precise quartz crystal clocks, which were synchronized to allow them to compare muon arrival times wirelessly.

The researchers also managed to improve the accuracy of their initial system. “When muWNS was demonstrated for the first time about a year ago, the navigation accuracy was only down to 10 m,” Tanaka recalls. “This is far from the satisfactory level for practical implementation.”

By further improving the accuracy of the clocks, the team has now significantly reduced the errors that accumulate in the timings. In the latest demonstration, Tanaka’s team has shown that muWNS is now accurate enough to be useful for indoor navigation.

In a new study, the researchers used muWNS to track a user’s route across the basement floor of the University of Tokyo’s Institute of Industrial Science – an area that cannot be reached by conventional GNSS. This was done using reference detectors placed on the building’s sixth floor.

Noticeable improvement

When the user was within close range of the reference detectors, the system showed a noticeable improvement on the earlier demonstration. “The current accuracy of muWNS is 2–25 m, with a range of up to 100 m, depending on the depth and speed of the person walking,” Tanaka explains. “This is as good as, if not better than, single-point GPS positioning aboveground in urban areas.”

However, Tanaka says there is still much room for improvement. “MuWNS is still far from practical. People need one-metre accuracy, and the key to this is the time synchronization.”

The researchers hope future improvements could be made by using chip-scale atomic clocks for timing. These clocks are an order of magnitude more precise than quartz crystals, but are too expensive today for practical use. Tanaka’s team also intends to miniaturize the system’s components, and believes it could eventually fit onto a handheld device.

The research is described in iScience.

Minecraft adventure explores the solar system, machine learning generates potions worthy of Hogwarts https://physicsworld.com/a/minecraft-adventure-explores-the-solar-system-machine-learning-generates-potions-worthy-of-hogworts/ Fri, 07 Jul 2023 14:17:33 +0000 https://physicsworld.com/?p=108831 Excerpts from the Red Folder

While Minecraft is the best-selling video game in history and might have kids glued to their screens more than their parents would like, it does come with some educational benefits. One example is a new cosmic adventure called Our Place in Space. Based on the work of artist Oliver Jeffers and Queen’s University Belfast astronomer Stephen Smartt, the free download allows Minecraft gamers to travel through the solar system and throughout history.

As well as learning about the planets, users will also uncover and learn about issues such as war, famine, slavery and even fake news. The game, which has been downloaded over a million times since it was launched in April, was created as part of the Our Place in Space installation – a recreation of the solar system as a 10 km sculpture trail with an accompanying augmented reality app.

“It can be difficult to visualise and appreciate the scale of the universe,” says Smartt, who features as a Minecraft character in the game. “Our own solar system is only a tiny part of our galaxy, yet its dimensions are colossal.”

Useful potions

In a paper that you would usually expect to see published on 1 April, health scientists Christoph Kurz at the Helmholtz Zentrum München and Adriana König at the Ludwig Maximilian University of Munich investigated whether machine learning could generate useful potion recipes for Hogwarts School of Witchcraft and Wizardry (arXiv: 2307.00036).

They were inspired to do so given the pharmaceutical industry’s recent interest in using artificial intelligence, or AI, in drug discovery. The pair collected 72 potion recipes from the Harry Potter wiki page and then generated 10,000 new potion recipes by randomly picking between three and eight ingredients, such as mistletoe berries and, er, unicorn horns.

They then used a neural network to predict the category of each potion, finding that half the recipes were psychoanaleptics – drugs that restore mental health – while 15% were dermatologicals – treatments for the skin. “Two muggles with (presumably) no magical abilities performed the study,” the authors write. “Thus it is difficult to assess the validity and classification quality of the generated recipes.”

Biodegradable ultrasound implant could improve brain tumour treatments https://physicsworld.com/a/biodegradable-ultrasound-implant-could-improve-brain-tumour-treatments/ Fri, 07 Jul 2023 09:00:00 +0000 https://physicsworld.com/?p=108696 A biodegradable ultrasound transducer based on piezoelectric nanofibres could improve outcomes for patients with brain cancer

A new type of biodegradable ultrasound implant based on piezoelectric nanofibres could improve outcomes for patients with brain cancer.

Researchers led by Thanh Nguyen from the University of Connecticut’s department of mechanical engineering fabricated the devices from crystals of glycine, an amino acid found in the human body. Glycine is not only non-toxic and biodegradable but also highly piezoelectric, enabling the creation of a powerful ultrasound transducer that could help treat brain tumours.

Brain tumours are particularly difficult to treat because the chemotherapy drugs that would be effective in tackling them are blocked from entering the brain by the blood–brain barrier (BBB). This barrier is a very tight junction of cells lining the blood vessel walls that prevents particles and large molecules from making their way through and damaging the brain. However, ultrasound can be safely used to temporarily alter the shape of the barrier cells such that chemotherapy drugs circulating in the bloodstream can pass through to the brain tissues.

Currently, achieving such BBB opening requires multiple ultrasound transducers located outside the body, together with very high-intensity ultrasound that can penetrate the thick bone of the skull.

“That strong ultrasound can easily damage brain tissues and is not practical for multiple-time applications which are required to repeatedly deliver chemotherapeutics,” Nguyen tells Physics World.

By contrast, the team’s new device would be implanted during the tumour removal surgery, and “can generate a powerful acoustic wave deep inside the brain tissues under a small supplied voltage to open the BBB”. The ultrasound would be triggered repeatedly as required to deliver the chemotherapy that kills off the residual cancer cells at tumour sites. After a set period of time following treatment the implant biodegrades, thereby eliminating the need for surgery to remove it.

The research, reported in Science Advances, demonstrated that the team’s device used in conjunction with the chemotherapy drug paclitaxel significantly extended the lifetime of mice with glioblastomas (the most aggressive form of brain tumour) compared with mice receiving the drugs but no ultrasound treatment.

Nanofibres of PCL with encapsulated glycine

However, there is a catch when making implantable ultrasound devices from glycine crystals. “The crystals are very brittle and highly water soluble, which makes the handling, fabrication and body implantation extremely challenging,” explains Nguyen. To tackle this problem, the team deliberately shattered the crystals into nanoparticles before encapsulating them inside a matrix of the flexible, biodegradable polymer polycaprolactone (PCL).

The researchers created nanofibres of PCL with encapsulated glycine via electrospinning – in which a polymer solution containing the glycine crystals is jetted out and stretched under a high voltage. Unlike conventional solvent-casting techniques, which randomize the crystal domains and dipoles in the crystals and so reduce their piezoelectric output, this processing creates oriented glycine crystals with high piezoelectric performance.

Next, they made the resulting nanofibres into piezoelectric films and then encased them within a biodegradable polymer that can be tuned to degrade at different rates by varying its thickness or chemistry. Lifetimes ranging from a few days to a few months are achievable, opening up the possibility of the device being useful for a variety of applications. These include enhancing the therapeutic effect of drugs for Alzheimer’s or Parkinson’s disease (which are difficult to get past the BBB), modulating neural signals in the brain, or monitoring brain pressure following traumatic brain injury.

The researchers now plan to test safety and effectiveness in larger animals, while concurrently studying the nanofibre’s piezoelectric and ferroelectric properties to optimize the implant’s performance. Understanding more about these fundamental properties “will enable us to achieve a stable, powerful ultrasound generator with a minimal need of power consumption, minimizing the risk of current leakage and extending the lifetime of the implanted device for broad applications,” says Nguyen.

Nuclear clocks: why an experiment at CERN brings them closer to reality https://physicsworld.com/a/nuclear-clocks-why-an-experiment-at-cern-brings-them-closer-to-reality/ Thu, 06 Jul 2023 13:50:12 +0000 https://physicsworld.com/?p=108821 This podcast features the nuclear physicist Sandro Kraemer

Building a clock based on a nuclear transition has long been a goal of metrologists. As well as offering the potential of greater accuracy than atomic clocks, such a timekeeper could be more immune to external noise and could also be used to probe new physics beyond the Standard Model.

However, the challenges have been many and until recently researchers had not even managed to make a direct observation of the radiation associated with a potential nuclear-clock transition.

That changed earlier this year, when a team of researchers working at the ISOLDE experiment at CERN made the first direct observation of vacuum ultraviolet light from a transition in thorium-229. This episode of the Physics World Weekly podcast features team member Sandro Kraemer of the Institute for Nuclear and Radiation Physics at Belgium’s Catholic University of Leuven. He explains why physicists are keen on building a nuclear clock, why it has been so difficult, and what the ISOLDE measurement means for the future of timekeeping.

MRI study challenges our knowledge of how the human brain works https://physicsworld.com/a/mri-study-challenges-our-knowledge-of-how-the-human-brain-works/ Thu, 06 Jul 2023 09:00:27 +0000 https://physicsworld.com/?p=108705 Researchers dig deeper into the workings of the human brain, shedding insight into how the brain’s physical shape might influence its patterns of activity

How does the human brain work? It depends on who you ask.

At school, you were likely taught that our brains contain billions of neurons that process inputs and help us form thoughts, emotions and movements. Ask imaging specialists, and you’ll learn about how we can see the brain in different ways using a variety of imaging techniques and about what we can learn from each image. Neuroscientists will also tell you about the interactions between neurons and signalling chemicals such as dopamine and serotonin.

If you ask a subgroup of neuroscientists who focus on mathematical frameworks for how the brain’s shape influences its activity – an area of mathematical neuroscience called neural field theory – you’ll begin to understand the relationship between brain shape, structure and function in yet another way.

Neural field theory builds upon our conventional understanding of how the brain works. It uses the brain’s physical shape – the size, length and curvature of the cortex, and the three-dimensional shape of the subcortex – as a scaffold upon which brain activity happens over time and space. Scientists then model the brain’s macroscopic electrical activity using the brain’s geometry to impose constraints. Electrical activity along the cortex, for example, might be modelled as a superposition of travelling waves propagating through a sheet of neural tissue.

“The idea that the geometry of the brain can influence or constrain whatever activity happens inside is not a conventional neuroscience question, right? It’s a very esoteric question…There’s been decades of work in trying to map the intricate wiring of the brain, and we’ve thought that all the activity that comes out of the brain is driven by this intricate wiring,” says James Pang, a research fellow at Monash University’s Turner Institute for Brain and Mental Health.

In a study published in Nature, Pang and his colleagues have challenged this prevailing understanding by identifying a strong relationship between brain shape and functional MRI (fMRI) activity.

The researchers were studying natural resonances called eigenmodes, which occur when different parts of a system vibrate at the same frequency, such as the excitations that occur in the brain during a task-evoked fMRI scan. When they applied mathematical models from neural field theory to over 10,000 activity maps and fMRI data from the Human Connectome Project, the researchers found that cortical and subcortical activity results from excitation of brain-wide eigenmodes with long spatial wavelengths up to and exceeding 6 cm. This result contrasts with a leading belief that brain activity is localized.

“We have long thought that specific thoughts or sensations elicit activity in specific parts of the brain, but this study reveals that structured patterns of activity are excited across nearly the entire brain, just like the way in which a musical note arises from vibrations occurring along the entire length of a violin string, and not just an isolated segment,” says Pang in a press statement.
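
The violin-string picture can be made concrete with a simple one-dimensional toy model. The sketch below is only an analogy, not the study's actual pipeline (which works with eigenmodes of the cortical surface itself): it computes the eigenmodes of a clamped string and shows that even a localized "activation" is built from modes that span the whole domain.

```python
# Toy 1D analogy (not the study's pipeline): decompose a localized "activation"
# into the eigenmodes of a clamped string, mirroring geometric eigenmode analysis.
import numpy as np

n, length = 200, 1.0
x = np.linspace(0.0, length, n)
dx = x[1] - x[0]

# Discrete Laplacian with clamped (Dirichlet-like) ends, like a violin string
lap = (np.diag(-2.0 * np.ones(n)) +
       np.diag(np.ones(n - 1), 1) +
       np.diag(np.ones(n - 1), -1)) / dx**2

eigvals, eigvecs = np.linalg.eigh(-lap)   # modes ordered from longest wavelength up
modes = eigvecs[:, :10]                   # keep the ten longest-wavelength modes

pattern = np.exp(-((x - 0.3) / 0.05) ** 2)  # a localized bump of "activity"
coeffs = modes.T @ pattern                  # project onto the eigenmode basis
reconstruction = modes @ coeffs             # a superposition of whole-domain modes

print(np.round(coeffs[:5], 3))  # several long-wavelength modes carry significant weight
```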

Pang and his colleagues also compared how geometric eigenmodes, obtained from models of brain shape, performed relative to connectome eigenmodes, obtained from models of brain connectivity. They found that geometric eigenmodes captured the observed activity better than connectome eigenmodes, suggesting that the brain’s contours and curvature strongly constrain brain activity – perhaps even more so than the complex interconnectivity between populations of neurons themselves.

Simply put, the scientists’ results challenge our knowledge of how the human brain works.

“We’re not saying that the connectivity in your brain is not important,” says Pang. “What we’re saying is that the shape of your brain also has a significant contribution. It’s highly likely that both worlds have some synergy…there’s been decades and decades of work from both sides of the research in the neural field theory world and the connectivity world, and both are important, in my opinion. This study opens up so many possibilities – we could study how geometric eigenmodes vary through neurodevelopment or are disrupted by clinical disorders, for example. It’s quite exciting.”

New elastocaloric cooling system shows promise for commercial use https://physicsworld.com/a/new-elastocaloric-cooling-system-shows-promise-for-commercial-use/ Wed, 05 Jul 2023 16:09:43 +0000 https://physicsworld.com/?p=108812 Nickel titanium system will be used to create a wine chiller

An elastocaloric cooling system that absorbs heat as tension is released in bundles of metal tubes has been developed by a team of researchers in the US and China. Led by Ichiro Takeuchi at the University of Maryland, the team’s scheme achieved a cooling performance on par with other caloric materials, and could pave the way for commercial use in the not-too-distant future.

Conventional refrigeration systems usually employ gases that have powerful greenhouse effects if released into the atmosphere. As a result, researchers are developing alternative solid-state refrigeration technologies based on caloric materials. These materials undergo temperature changes when exposed to external magnetic or electric fields, or in response to mechanical stress or pressure. As well as avoiding harmful chemicals, cooling systems based on caloric materials could also be more energy efficient than existing refrigerators.

So far, this research has focused mainly on magnetocaloric materials – but more recently, elastocaloric materials have emerged as even more promising candidates for commercial caloric cooling. Among these materials is the highly elastic and easily manufactured alloy nickel titanium (NiTi).

Under tension

As Takeuchi’s team first showed over a decade ago, thin wires of this alloy can expel large amounts of heat when under tension, and absorb it when the tension is released. “About 12 years ago, we discovered that NiTi can experimentally display a large span in temperature, which you can feel by hand,” Takeuchi recalls. “At the time, we demonstrated this by adding tension to readily available NiTi wires. This is how we started making elastocaloric devices.”

The researchers then set to work on developing commercially useful cooling applications. However, implementing elastocaloric cooling on a large scale has turned out to be a significant technical challenge. The main problem is that repeated cycles of tension and release damage NiTi wires, limiting their practical lifetimes.

To address this challenge, Takeuchi’s team developed a novel heat exchange system whereby water is pumped through bundles of NiTi tubes. “It took us a long time to overcome various engineering challenges, but with our recent demonstration, we were able to demonstrate what we envisioned 10 years ago. We are using water as heat exchange fluid – making the water colder, so it can be used in turn for refrigeration or air conditioning,” Takeuchi explains.

The team used two quantities to gauge the success of the approach. The first is “delivered cooling power”, which describes the rate of heat removal. The second is “temperature span”, which describes the difference in temperature between the water at each end of the system. “For these two important figures, we have been able to achieve 260 W and 22.5 K, respectively,” says Takeuchi. The researchers maximized each of these values in turn, simply by adjusting the operation sequences of valves in their heat-exchange system.
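
For a sense of what these two figures of merit mean physically, the delivered cooling power of a water loop is set by the flow rate, the specific heat of water and the temperature drop across the device. The values in the sketch below are assumptions for illustration only – they are not the paper's operating point, and the 260 W and 22.5 K quoted above were obtained in separately optimized runs.

```python
# Back-of-envelope relation Q = m_dot * c_p * dT for a water heat-exchange loop.
# All values below are assumed for illustration, not taken from the study.
c_p = 4186.0   # specific heat of water, J/(kg K)
m_dot = 0.005  # assumed mass flow rate, kg/s
dT = 10.0      # assumed temperature drop across the device, K

Q = m_dot * c_p * dT
print(f"Delivered cooling power: {Q:.0f} W")  # about 209 W for these assumptions
```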

Catching up

These latest results are an example of how elastocaloric materials are catching up with the cooling performance of their magnetocaloric counterparts, and could soon be feasible candidates for commercial cooling systems.

However, Takeuchi concedes that the practical use of elastocaloric materials may still be some way off, as it will likely require more advanced materials to be developed first. “The high stress required for NiTi is still a problem, but there are materials on the horizon, other superelastic materials, which are known to exhibit elastocaloric effects with much smaller stress,” he says.

“These materials are less developed, and not commercially available yet, but we believe further development of these materials and implementing them in low-stress cooling systems is a really exciting prospect.” Takeuchi’s team has already drawn out plans for a compact, elastocaloric wine cooler, and hopes to demonstrate a successful prototype once these materials are available.

The research is described in Science.

Sarafina El-Badry Nance: an uplifting tale of passion, resilience and strength https://physicsworld.com/a/sarafina-el-badry-nance-an-uplifting-tale-of-passion-resilience-and-strength/ Wed, 05 Jul 2023 10:00:42 +0000 https://physicsworld.com/?p=108411 Clár-Bríd Tohill reviews Starstruck by Sarafina El-Badry Nance

Sarafina El-Badry Nance

When I was a girl, I was fascinated by science but had few – if any – female astrophysicists to look up to. As a woman in physics today, I know just how important it is to have a strong, empowering female role model to aspire to. Sarafina El-Badry Nance – currently doing graduate work in astrophysics and cosmology at the University of California, Berkeley – is exactly the kind of person I wish I’d known when I was growing up.

Starstruck: a Memoir of Astrophysics and Finding Light in the Dark describes the challenging – yet often unseen – road that unfortunately all too many women have to journey along to succeed. The path that El-Badry Nance followed to become the powerful and inspirational scientist she is today was fraught with challenges and doubt, yet she never quit or lost sight of her dream. Brilliantly weaving together El-Badry Nance’s personal story with our understanding of astronomy, Starstruck explains how she found freedom and solace in exploring the universe.

The book begins by describing the author’s love and passion for the night sky, which began as a child growing up in Austin, Texas. But the challenges she had to contend with at home and school quickly become apparent. Born to an Egyptian mother and an American father, El-Badry Nance faced both racism and sexism – and, like many women, was repeatedly told that girls were not cut out for science and maths. This bigotry led to huge amounts of anxiety and self-doubt.

While her parents’ relationship with each other caused much of her childhood trauma, El-Badry Nance was privileged to attend a school that helped her deal with her anxiety. The school, which was private, gave her access to opportunities that other people facing similar challenges might not have, letting her pursue her passion for astronomy. Her parents also helped El-Badry Nance to counteract the bigotry she faced from her teachers and peers.

Crucially, El-Badry Nance now recognizes her privilege and uses the platform it gave her to communicate the excitement of science – and the simple truth that it is a subject open for everyone. Indeed, despite the traumas of her childhood, the author ended up doing a degree in physics and astrophysics at the University of Texas, Austin, although even there she faced more sexism and misogyny from both her peers and professors.

Starstruck is a book that many women will relate to. But it is nonetheless an important tale to tell, for we cannot begin to make progress on sexism in science without raising awareness of just how widespread the issue is. Worse still, the author was also subject to an abusive relationship during her degree. Her abuser took advantage of her insecurities, tormenting her both during their relationship and afterwards.

This abuse took its toll both physically and mentally on El-Badry Nance, repressing her passion for astronomy. Thankfully, with help from a therapist and her family, she was able to escape this hell and learn how to heal. This part of the book is difficult to read, but it is what makes El-Badry Nance the woman she is today. The strength and resilience she has had to show are extraordinary – and will inspire many others who have faced similar experiences of their own.

El-Badry Nance’s hard work and passion for astronomy eventually saw her accept an offer from Berkeley’s graduate programme. Rewarded after years of anxiety and hard work, she was now able to realize the dream she had harboured since childhood of being a professional scientist. But there was more trouble in store: first, her father was diagnosed with cancer and then she was too.

El-Badry Nance had the misfortune of inheriting a genetic mutation that affected her grandmother, her father and now her. Aged just 23, El-Badry Nance was told she had an 87% chance of developing breast cancer. It was pretty much a case of when – not if – the cancer would appear, and the author was forced to make one of the hardest and bravest decisions of her life: to get a preventative double mastectomy.

Since the operation, El-Badry Nance has used her position of influence online to bring awareness to this horrendous disease and the importance of self-testing to a wider audience. However, her inspirational story has only just begun. As well as doing a PhD in astronomy at Berkeley, she is also training as an “analogue astronaut” on a Mars simulation facility in Hawai’i.

Despite the anxiety and self-doubt she has faced, the book is an uplifting tale of passion, resilience and strength. El-Badry Nance is a remarkable young woman, using her past experiences to help others who might be struggling with their own lives. Still healing from her own trauma, she is a force to be reckoned with and I am looking forward to seeing what she will do next. For Sarafina El-Badry Nance, the sky really is the limit.

• 2023 Dutton 336pp £20.99/$29.00 hb

Towards a cure for ALS: magnetic stimulation restores impaired motoneurons https://physicsworld.com/a/towards-a-cure-for-als-magnetic-stimulation-restores-impaired-motoneurons/ Wed, 05 Jul 2023 09:00:00 +0000 https://physicsworld.com/?p=108702 Magnetic fields that reactivate damaged motoneurons could one day provide a new treatment for neurodegenerative diseases

Thomas Herrmannsdörfer and Richard Funk

Amyotrophic lateral sclerosis (ALS) is a severe incurable disorder in which motoneurons – nerve cells in the brain and spinal cord that send signals to muscles to control movement – are damaged. Without functioning motoneurons, the muscles do not receive instructions and no longer work, leading to progressive paralysis, muscle atrophy and, eventually, failure of the respiratory system.

Currently, there is no successful treatment for ALS, with drug therapies only having a marginal impact on patient survival. Aiming to address this shortfall, an interdisciplinary research team headed up at Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and TU Dresden is investigating the potential of using magnetic fields to restore impaired motoneurons.

The influence of magnetic stimulation on neuronal diseases has been widely investigated. However, applications in peripheral nerves are scarce. In this latest study, reported in Cells, the researchers assessed whether magnetic stimulation of peripheral motoneurons could restore defects in stem cell-derived motoneurons from ALS patients with mutations in the FUS gene (FUS-ALS).

The team – headed up by physicist Thomas Herrmannsdörfer, cell biologist Arun Pal and physician Richard Funk, and supported by colleagues at TU Dresden and the University of Rostock – generated spinal motoneurons from induced pluripotent stem cells derived from skin biopsies of healthy individuals and patients with FUS-ALS. They designed and fabricated electromagnetic coils that can be operated inside cell-culture incubators, and used these to expose the motoneurons to tailored magnetic fields.

Each magnetic stimulation comprised four consecutive treatments (several hours in duration) using very low square-wave frequencies of 2 to 10 Hz. The treatments were performed after the cells had matured for 30 to 45 days in vitro, with the coils switched off in between. After the final treatment, the team maintained the cells in culture for two days before assessing the impact of the magnetic stimulation.

Pulsed magnetic fields could help fight neurodegenerative diseases

Restoring axonal defects

Motoneurons possess lengthy projections called axons – which can be up to 1 m long – that transport substances and transmit information. Impairments in the transport of axonal organelles such as mitochondria and lysosomes contribute to neuronal degeneration in ALS. The researchers therefore used live-cell imaging and immunofluorescent staining to measure the motility of these organelles in motoneurons exposed to magnetic fields.

They first examined the mean organelle speed. Quantitative tracking analysis revealed a decreased distal mean speed for both mitochondria and lysosomes in untreated mutant FUS motoneurons compared with control cells (derived from healthy donors). Exposure to magnetic fields reverted the mean speed in FUS motoneurons back to control levels, with the best effects seen using very low frequencies of about 10 Hz.
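
As a rough illustration of the kind of quantity being compared here, the mean speed of a tracked organelle can be computed from its frame-by-frame positions. The coordinates and frame interval below are made up for the example; this is not the group's analysis pipeline.

```python
# Mean organelle speed from tracked positions (made-up example data).
import numpy as np

positions = np.array([[0.0, 0.0],   # organelle position in each frame, micrometres
                      [0.3, 0.1],
                      [0.5, 0.4],
                      [0.9, 0.5]])
frame_interval = 2.0                # assumed time between frames, seconds

steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)  # distance moved per frame
mean_speed = steps.sum() / (frame_interval * len(steps))
print(f"Mean speed: {mean_speed:.2f} µm/s")
```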

Another hallmark of ALS is a diminished ability of axons to grow and regenerate after injuries or during aging. Such growth is crucial for maintaining inter-neuronal connectivity across nerve endings and transmitting information. To study whether magnetic stimulation could improve such defects, the team used live imaging of cells in microfluidic chambers to analyse the new outgrowth of axonal growth cones after axotomy (severing of an axon).

The researchers observed a reduced mean axonal outgrowth speed in untreated FUS motoneurons compared with control cells. Magnetic stimulation of the FUS motoneurons at 10 Hz significantly increased the mean outgrowth speed back to control levels. The magnetic fields did not affect the mean outgrowth speed in control motoneurons.

In numerous experiments, the researchers showed that motoneurons from ALS patients respond to the magnetic fields, with impaired axonal transport of organelles reactivated by stimulation and axonal regeneration restored. Importantly, they also demonstrated that healthy cells were not damaged by the magnetic stimulation.

While these findings appear promising, the team highlights the need for long-term and in vivo studies. “We regard these in vitro results as an encouraging approach on the path to a potential novel therapy for ALS, as well as other neurogenerative diseases,” says Herrmannsdörfer in a press statement. “We also know, however, that detailed follow-up studies are required to corroborate our findings.”

Now working within the ThaXonian project, Herrmannsdörfer and his colleagues are planning further studies to optimize the parameters of the applied magnetic field, understand the cellular response to various magnetic stimuli, and test the treatment on other neurodegenerative disorders, such as Parkinson’s, Huntington’s and Alzheimer’s diseases.

New bolometer could lead to better cryogenic quantum technologies https://physicsworld.com/a/new-bolometer-could-lead-to-better-cryogenic-quantum-technologies/ Tue, 04 Jul 2023 14:26:26 +0000 https://physicsworld.com/?p=108711 System can be easily calibrated

A new type of bolometer that covers a broad range of microwave frequencies has been created by researchers in Finland. The work builds on previous research by the team and the new technique could potentially characterize background noise sources and thereby help to improve the cryogenic environments necessary for quantum technologies.

A bolometer is an instrument that measures radiant heat. Such instruments have existed for some 140 years and are conceptually simple devices. They use an element that absorbs radiation in a specific region of the electromagnetic spectrum; the absorbed radiation heats the device, producing a change in some parameter that can be measured.

Bolometers have found applications ranging from particle physics to astronomy and security screening. In 2019 Mikko Möttönen of Aalto University in Finland and colleagues developed a new ultra-small, ultralow-noise bolometer comprising a microwave resonator made of a series of superconducting sections joined by a normal gold-palladium nanowire. They found that the resonator frequency dropped when the bolometer was heated.

Measuring qubits

In 2020, the same group swapped the normal metal for graphene, which has a much lower thermal capacity and thus should measure temperature changes 100 times faster. The result could have advantages over current technologies used to measure the states of individual superconducting quantum bits (qubits).

Superconducting qubits, however, are notoriously prone to the classical noise of thermal photons, and in the new work Möttönen and colleagues, together with researchers at the quantum technology company Bluefors, set out to tackle this. The graphene bolometer focuses on sensing a single qubit, and on measuring the relative power level as quickly as possible to determine its state. In this latest work, by contrast, the researchers were looking for noise from all sources, so they needed a broadband absorber. They also needed to measure absolute power, which requires the bolometer to be calibrated.

One of the applications that the team demonstrated in their experiments was the measurement of the amount of microwave loss and noise in the cables running from room temperature components to low-temperature components. Previously, researchers have done this by amplifying the low-temperature signal before comparing it to a reference signal at room temperature.

Very time consuming

“These lines have typically been calibrated by running a signal down, running it back up and then measuring what happens,” explains Möttönen, “but then I’m a little bit unsure whether my signal was lost on the way down or up so I have to calibrate many times…and warm up the fridge…and change the connections…and do it again – it’s very time consuming.”

Instead, therefore, the researchers integrated a tiny electrical direct-current heater into the thermal absorber of the bolometer, allowing them to calibrate the power absorbed from the surroundings against a power supply that they could control.
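
Conceptually, the calibration works like any transfer-standard measurement: sweep the known heater power, record the bolometer's response, fit the relationship, then invert it to turn readings into absorbed power. The sketch below only illustrates that logic – the power scale, the assumed linear response and all numerical values are hypothetical and not taken from the Aalto/Bluefors device.

```python
# Illustrative bolometer calibration: known heater power in, response out, then invert.
# All numbers and the linear-response assumption are hypothetical.
import numpy as np

heater_power = np.linspace(0.0, 5e-15, 11)               # applied DC heater power, W
response = 7.350e9 - 2.0e20 * heater_power                # e.g. resonance frequency, Hz
response += np.random.normal(0.0, 1e3, response.size)     # measurement noise

slope, intercept = np.polyfit(heater_power, response, 1)  # fit the calibration curve

def absorbed_power(reading):
    """Convert a bolometer reading into the power absorbed from the surroundings."""
    return (reading - intercept) / slope

print(f"{absorbed_power(7.3497e9):.2e} W")  # a reading taken with the heater switched off
```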

“You see what the qubit will see,” says Möttönen. The femtowatt-scale heating used for calibration – which is turned off during the operation of the quantum device – should have no meaningful effect on the system. The researchers eschewed graphene, reverting to a superconductor–normal metal–superconductor design for the junctions because of the greater ease of production and better durability of the finished product: “These gold-palladium devices will remain almost unchanged on the shelf for a decade, and you want your characterization tools to remain unchanged over time,” Möttönen says.

The researchers are now developing the technology for more detailed spectral filtering of noise. “The signal that comes into your quantum processing unit has to be heavily attenuated, and if the attenuator gets hot, that’s bad…We would like to see what is the temperature of that line at different frequencies to get the power spectrum,” Möttönen says. This could help to decide on what frequencies are best to choose or help to optimize equipment for quantum computing.

“It’s impressive work,” says quantum technologist Martin Weides of the University of Glasgow. “It adds to a number of existing measurements on the transfer of power in cryogenic environments required for quantum technologies. It allows you to measure from dc up to microwave frequencies, it allows you to compare both, and the measurement itself is straightforward…If you’re building a quantum computer, you’re building a cryostat, and you want to characterize all your components reliably, you probably would like to use something like this.”

The research is published in Review of Scientific Instruments.    

IceCube detects high-energy neutrinos from within the Milky Way https://physicsworld.com/a/icecube-detects-high-energy-neutrinos-from-within-the-milky-way/ Tue, 04 Jul 2023 13:29:48 +0000 https://physicsworld.com/?p=108785 Observing the Milky Way galaxy in particles rather than light opens up a new avenue of multimessenger astronomy

High-energy neutrinos emerging from the Milky Way galaxy have been spotted for the first time. That is according to new findings from the IceCube Neutrino Observatory at the Amundsen–Scott South Pole Station that open a new avenue of multimessenger astronomy by observing the Milky Way galaxy in particles rather than light.

Neutrinos are fundamental particles that have very small masses and barely interact with other matter, yet they fill the universe, with trillions passing harmlessly through your body every second.

Previously, neutrinos billions of times more energetic than those produced by fusion reactions within our Sun have been detected coming from extragalactic sources such as quasars. However, theory predicts that high-energy neutrinos should also be produced within the Milky Way.

When astronomers look at the plane of our galaxy, they see the Milky Way lit up with gamma-ray emissions that are produced when cosmic rays trapped by our galaxy’s magnetic field collide with atoms in interstellar space. These collisions should also produce high-energy neutrinos.

Researchers have now finally found convincing evidence for these neutrinos by using machine-learning techniques to sift through ten years of data from the IceCube Neutrino Observatory, which includes some 60,000 neutrino events. “[Just like gamma rays], the neutrinos that we observe are distributed throughout the galactic plane,” says Francis Halzen of the University of Wisconsin–Madison, who is IceCube’s principal investigator.

Cascade events

The IceCube detector is formed of a cubic kilometre of ice buried beneath the South Pole and strung through with 5160 optical sensors that watch for flashes of visible light on the rare occasions that a neutrino interacts with a molecule of water-ice. When a neutrino event occurs, the neutrino either leaves an elongated track or produces a “cascade event”, whereby its energy is concentrated in a small, spherical volume within the ice.

When cosmic rays interact with matter in the interstellar medium they produce short-lived pions that quickly decay. “Charged pions decay into the neutrinos detected by IceCube and neutral pions decay into two gamma rays observed by [NASA’s] Fermi [Gamma-ray Space Telescope],” Halzen told Physics World.
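
Making the decay chains in Halzen's description explicit (these are standard particle physics, not results from the paper), the charged pions feed the neutrino signal via muons, while the neutral pions feed the gamma rays:

```latex
% Charged pions -> neutrinos (via muon decay); neutral pions -> gamma rays
\pi^{+} \to \mu^{+} + \nu_{\mu}, \qquad \mu^{+} \to e^{+} + \nu_{e} + \bar{\nu}_{\mu}
\pi^{-} \to \mu^{-} + \bar{\nu}_{\mu}, \qquad \mu^{-} \to e^{-} + \bar{\nu}_{e} + \nu_{\mu}
\pi^{0} \to \gamma + \gamma
```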

The neutrinos had previously gone undetected because they were being drowned out by a background signal of neutrinos and muons caused by cosmic-ray interactions much closer to home, in Earth’s atmosphere.

This background leaves tracks that enter the detector, whereas the higher energy neutrinos from the Milky Way are more likely to produce cascade events. The machine-learning algorithm developed by IceCube scientists at TU Dortmund University in Germany was able to select only for cascade events, removing much of the local interference and allowing the signal from the Milky Way to stand out.

Although it is more difficult to obtain information about the direction a neutrino has come from in a cascade event, Halzen says that cascade events can be reconstructed with a precision of “five degrees or so”. Although this precludes identifying specific sources of neutrinos in the Milky Way, he says that it is sufficient to observe the radiation pattern from the galaxy and match it to the gamma-ray pattern observed by the Fermi space telescope.

The next step for the team is to try to identify specific sources of neutrinos in the Milky Way. This could be possible with the revamped IceCube, named Gen2, which will increase the instrumented volume of the detector to ten cubic kilometres of ice when it becomes fully operational by 2032.

The findings are published in Science.

The post IceCube detects high-energy neutrinos from within the Milky Way appeared first on Physics World.

]]>
Research update Observing the Milky Way galaxy in particles rather than light opens up a new avenue of multimessenger astronomy https://physicsworld.com/wp-content/uploads/2023/07/MilkyWay_neutrinos-small.jpg newsletter1
Moore’s law in peril and the future of computing https://physicsworld.com/a/moores-law-in-peril-and-the-future-of-computing/ Tue, 04 Jul 2023 12:48:12 +0000 https://physicsworld.com/?p=108772 Demand for computer power continues to soar, but can the hardware keep up?

The post Moore’s law in peril and the future of computing appeared first on Physics World.

]]>

Gordon Moore, the co-founder of Intel who died earlier this year, is famous for forecasting a continuous rise in the density of transistors that we can pack onto semiconductor chips. His eponymous “Moore’s law” still holds true after almost six decades, but further progress is becoming harder and eye-wateringly expensive to sustain. In this episode of the Physics World Stories podcast we look at the practicalities of keeping Moore’s law alive, why it matters, and why physicists have a critical role to play.
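As a rough, back-of-the-envelope illustration of what Moore’s law implies – assuming the commonly quoted two-year doubling period and a purely nominal starting density, neither of which comes from the podcast – a projection looks like this:

```python
# Back-of-the-envelope Moore's-law projection.
# The two-year doubling period and the starting density are illustrative
# assumptions, not figures taken from the podcast.

def projected_density(year, base_year=2023, base_density=1e8, doubling_years=2.0):
    """Nominal transistors per mm^2 projected from a chosen base year."""
    return base_density * 2 ** ((year - base_year) / doubling_years)

for year in (2023, 2029, 2035):
    print(f"{year}: ~{projected_density(year):.1e} transistors per mm^2")
```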

Right now, one of the key questions is whether computer hardware can keep up with the demands of large language models and other forms of generative AI. There is also concern over whether computing can help tackle today’s complex global challenges without skyrocketing energy demands. New computing paradigms are needed, and optical- and quantum-based computing may have key roles to play, but there are still big question marks over their practical usefulness at scale.

Physics World Stories is presented by Andrew Glester and this month’s podcast guests are:

Find out more on this topic in the recent Physics World article ‘Moore’s law: further progress will push hard on the boundaries of physics and economics’.

The post Moore’s law in peril and the future of computing appeared first on Physics World.

]]>
Demand for computer power continues to soar, but can the hardware keep up? Demand for computer power continues to soar, but can the hardware keep up? Physics World Moore’s law in peril and the future of computing 1:01:09 Podcast Demand for computer power continues to soar, but can the hardware keep up? https://physicsworld.com/wp-content/uploads/2023/07/Supercomputer-1063965932-iStock_Vladimir_Timofeev.jpg newsletter
From bottom-up to top-down: computational scientist Amanda Barnard on the beauty of simulations, machine learning and how the two intersect https://physicsworld.com/a/from-bottom-up-to-top-down-computational-scientist-amanda-barnard-on-the-beauty-of-simulations-machine-learning-and-how-the-two-intersect/ Tue, 04 Jul 2023 10:00:51 +0000 https://physicsworld.com/?p=108621 Amanda Barnard talks to Hamish Johnston about research in applying machine learning to a range of problems

The post From bottom-up to top-down: computational scientist Amanda Barnard on the beauty of simulations, machine learning and how the two intersect appeared first on Physics World.

]]>
From using supercomputers to tap into new kinds of materials to training machine learning models to study complex properties at the nanoscale, Australian computational scientist Amanda Barnard works at the interface of computing and data science. A senior professor in the School of Computing at the Australian National University, Barnard is also deputy director and computational-science lead. These days, she uses a variety of computational methods to solve problems across the physical sciences, but Barnard began her career as a physicist, receiving her PhD in theoretical condensed-matter physics in 2003.

After spending the next few years as a postdoc at the Center for Nanoscale Materials at Argonne National Laboratory in the US, she began to broaden her research interests to encompass many aspects of computational science, including the use of machine learning in nanotechnology, materials science, chemistry and medicine.

A fellow of both the Australian Institute of Physics and the Royal Society of Chemistry, in 2022 Barnard was appointed a Member of the Order of Australia. She has also won a number of awards, including the 2014 Feynman Prize in Nanotechnology (Theory) and the 2019 medal from the Association of Molecular Modellers of Australasia. She speaks to Hamish Johnston about her interest in applying machine learning to a range of problems, and about the challenges and rewards of doing university administration.

Can you tell us a bit about what you do as a computational scientist?

Computational science involves designing and using mathematical models to analyse computationally demanding problems in many areas of science and engineering. This includes advances in computational infrastructure and algorithms that enable researchers across these different domains to perform large-scale computational experiments. In a way, computational science involves research into high-performance computing, and not just research using a high-performance computer.

We spend most of our time on algorithms and trying to figure out how to implement them in a way that makes best use of the advanced hardware; and that hardware is changing all the time. This includes conventional simulations based on mathematical models developed specifically in different scientific domains, be it physics, chemistry or beyond. We also spend a lot of time using methods from machine learning (ML) and artificial intelligence (AI), most of which were developed by computer scientists, making it very interdisciplinary research. This enables a whole bunch of new approaches to be used in all these different scientific areas.

Machine learning enables us to recapture a lot of the complexity that we’ve lost when we derive those beautiful theories

Simulation was born out of the theoretical aspects of each scientific area that, with some convenient levels of abstraction, enabled us to solve the equations. But when we developed those theories, they were often an oversimplification of the problem, made either in the pursuit of mathematical elegance or just for the sake of practicality. ML enables us to recapture a lot of the complexity that we’ve lost when we derive those beautiful theories. But unfortunately, not all ML works well with science, so computational scientists spend a lot of time figuring out how to apply algorithms that were never intended for these kinds of data sets, and how to overcome the problems that arise at that interface. And that’s one of the exciting areas that I like.

You began your career as a physicist. What made you move to computational science?

Physics is a great starting point for virtually anything. But I was always on the path to computational science without realizing it. During my first research project as a student, I used computational methods and was instantly hooked. I loved the coding, all the way from writing the code to the final results, and so I instantly knew that supercomputers were destined to be my scientific instrument. It was exciting to think about what a materials scientist could do if they could make perfect samples every time. Or what a chemist could do if they could remove all contaminations and have perfect reactions. What could we do if we could explore harsh or dangerous environments without the risk of injuring anyone? And more importantly, what if we could do all of these things simultaneously, on demand, every time we tried?

The beauty of supercomputers is that they are the only instrument that enables us to achieve this near-perfection. What captivates me most is that I can not only reproduce what my colleagues can do in the lab, but also do everything they can’t do in the lab. So from the very early days, my computational physics was on a computer. My computational chemistry then evolved through to materials, materials informatics, and now pretty much exclusively ML. But I’ve always focused on the methods in each of these areas, and I think a foundation in physics enables me to think very creatively about how I approach all of these other areas computationally.

How does machine learning differ from classical computer simulations?

Most of my research is now ML, probably 80% of it. I still do some conventional simulations, however, as they give me something very different. Simulations fundamentally are a bottom-up approach. We start with some understanding of a system or a problem, we run a simulation, and then we get some data at the end. ML, in contrast, is a top-down approach. We start with the data, we run a model, and then we end up with a better understanding of the system or problem. Simulation is based on rules determined by our established scientific theories, whereas ML is based on experiences and history. Simulations are often largely deterministic, although there are some examples of stochastic methods such as Monte Carlo. ML is largely stochastic, although there are some examples that are deterministic as well.

With simulations, I’m able to do very good extrapolation. A lot of the theories that underpin simulations enable us to explore areas of a “configuration space” (the co-ordinates that determine all the possible states of a system) or areas of a problem for which we have no data or information. On the other hand, ML is really good at interpolating and filling in all the gaps and it’s very good for inference.

Data flow concept

Indeed, the two methods are based on very different kinds of logic. Simulation is based on an “if-then-else” logic, which means if I have a certain problem or a certain set of conditions, then I’ll get a deterministic answer or else, computationally, it’ll probably crash if you get it wrong. ML, in contrast, is based on an “estimate-improve-repeat” logic, which means it will always give an answer. That answer is always improvable, but it may not always be right, so that’s another difference.

Simulations are intradisciplinary: they have a very close relationship to the domain knowledge and rely on human intelligence. On the other hand, ML is interdisciplinary: using models developed outside of the original domain, it is agnostic to domain knowledge and relies heavily on artificial intelligence. This is why I like to combine the two approaches.

Can you tell us a bit more about how you use machine learning in your research?

Before the advent of ML, scientists had to pretty much understand the relationships between the inputs and the outputs. We had to have the structure of the model predetermined before we were able to solve it. It meant that we had to have an idea of the answer before we could look for one.

We can develop the structure of an expression or an equation and solve it at the same time. That accelerates the scientific method, and it’s another reason why I like to use machine learning

When you’re using ML, the machines use statistical techniques and historical information to basically programme themselves. It means we can develop the structure of an expression or an equation and solve it at the same time. That accelerates the scientific method, and it’s another reason why I like to use it.

The ML techniques I use are diverse. There are a lot of different flavours and types of ML, just like there are lots of different types of computational physics or experimental physics methods. I use unsupervised learning, which is based entirely on input variables, and it looks at developing “hidden patterns” or trying to find representative data. That’s useful for materials in nanoscience, when we haven’t done the experiments to perhaps measure a property, but we know quite a bit about the input conditions that we put in to develop the material.

Unsupervised learning can be useful in finding groups of structures, referred to as clusters, that have similarities in the high-dimensional space, or pure and representative structures (archetypes or prototypes) that describe the data set as a whole. We can also transform data to map them to a lower-dimensional space and reveal more similarities that were not previously apparent, in a similar way that we might change to reciprocal space in physics.
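As a minimal sketch of the kind of unsupervised workflow described here – clustering structures and projecting them to a lower-dimensional “map” – the following Python snippet uses scikit-learn on randomly generated stand-in descriptors (the data and feature count are purely illustrative, not from Barnard’s research):

```python
# Illustrative unsupervised-learning sketch: group structures described by
# high-dimensional features into clusters, then project to 2D to reveal
# similarities. The descriptors here are random placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(200, 12))     # 200 hypothetical structures, 12 descriptors each

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
X_2d = PCA(n_components=2).fit_transform(X)   # lower-dimensional map of the data

print(np.bincount(labels))   # number of structures assigned to each cluster
print(X_2d[:3])              # first few points in the reduced space
```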

I also use supervised ML to find relationships and trends, such as structure-property relationships, which are important in materials and nanoscience. This includes classification, where we have a discrete label. Say we already have different categories of nanoparticles and, based on their characteristics, we want to automatically assign them to either one category or another, and make sure that we can easily separate these classes based on input data alone.
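A correspondingly minimal supervised-classification sketch – assigning items to one of two hypothetical categories from input descriptors alone – might look like this (again entirely synthetic, included only to make the idea concrete):

```python
# Illustrative supervised-classification sketch with synthetic data and labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=1)
X = rng.normal(size=(300, 8))                   # 300 hypothetical particles, 8 descriptors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # made-up two-class label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```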

I use statistical learning and semi-supervised learning as well. Statistical learning, in particular, is useful in science, although it’s not widely used yet. We think of that as a causal inference that is used in medical diagnostics a lot, and this can be applied to effectively diagnose how a material, for example, might be created, rather than just why it is created.

Your research group includes people with a wide range of scientific interests. Can you give us a flavour of some of the things that they’re studying?

When I started in physics, I never thought that I’d be surrounded by such an amazing group of smart people from different scientific areas. The computational science cluster at the Australian National University includes environmental scientists, earth scientists, computational biologists and bioinformaticians. There are also researchers studying genomics, computational neuroscience, quantum chemistry, material science, plasma physics, astrophysics, astronomy, engineering, and – me – nanotechnology. So we’re a diverse bunch.

Our group includes Giuseppe Barca, who is developing algorithms that underpin the quantum chemistry software packages that are used all around the world. His research is focused on how we can leverage new processors, such as accelerators, and how we can rethink how large molecules can be partitioned and fragmented so that we can strategically combine massively parallel workflows. He is also helping us to use supercomputers more efficiently, which saves energy. And for the past two years, he’s held the world record for the best-scaling quantum chemistry algorithm.

Also on the small scale – in terms of science – is Minh Bui, who’s a bioinformatician working on developing new statistical models in the area of phylogenomics systems [a multidisciplinary field that combines evolutionary research with systems biology and ecology, using methods from network science]. These include partitioning models, isomorphism-aware models and distribution-tree models. The applications of this include areas in photosynthetic enzymes or deep insect phylogeny transcription data, and he has done work looking into algae, as well as bacteria and viruses such as HIV and SARS-CoV-2 (which causes COVID-19).

Minh Bui

On the larger end of the scale is mathematician Quanling Deng, whose research focuses on mathematical modelling and simulation for large-scale media, such as oceans and atmosphere dynamics, as well as Antarctic ice floes.

The best part is when we discover that a problem from one domain has actually already been solved in another, and better still when we find a problem experienced in multiple domains, so we can scale superlinearly. It’s great when one solution has multiple areas of impact. And how often would you find a computational neuroscientist working alongside a plasma physicist? It just doesn’t normally happen.

As well as working with your research group, you’re also deputy director of the Australian National University’s School of Computing. Can you tell us a bit about that role?

It’s largely an administrative role. So as well as working with an amazing group of computer scientists across data science, foundational areas in languages, software development, cybersecurity, computer vision, robotics and so on, I also get to create opportunities for new people to join the school and to be the best version of themselves. A lot of my work in the leadership role is about the people. And this includes recruitment, looking after our tenure-track programme and our professional-development programme as well. I’ve also had the opportunity to start some new programmes for areas that I thought needed attention.

One such example was during the global COVID pandemic. A lot of us were shut down and unable to access our labs, which left us wondering what we can do. I took the opportunity to develop a programme called the Jubilee Joint Fellowship, which supports researchers working at the interface between computer science and another domain, where they’re solving grand challenges in their areas, but also using that domain knowledge to inform new types of computer science. The programme supported five such researchers across different areas in 2021.

I am also the chair of the Pioneering Women Program, which has scholarships, lectureships and fellowships to support women entering computing and ensure they’re successful throughout their career with us.

And of course, one of my other roles as deputy-director is to look after computing facilities for our school. I look at ways that we can diversify our pipeline of resources to get through tough times, like during COVID, when we couldn’t order any new equipment. I also look into how we can be more energy efficient, because computing uses an enormous amount of energy.

It must be a very exciting time for people doing research in ML, as the technology is finding so many different uses. What new applications of ML are you most looking forward to in your research?

Well, probably some of the ones you’re already hearing about, namely AI. While there are risks associated with AI, there’s also enormous opportunity, and I think that generative AI is going to be particularly important in the coming years for science – provided we can overcome some of the issues with it “hallucinating” [when an AI system, such as a large language model, generates false information, based on either a training data-set or contextual logic, or a combination of them both].

No matter what area of science we’re in, we’re restricted by the time we have, the money, the resources and the equipment we have access to. It means we’re compromising our science to fit these limitations rather than focusing on overcoming them

But no matter what area of science we’re in, whether computational or experimental, we’re all suffering under a number of restrictions. We’re restricted by the time we have, the money, the resources and the equipment we have access to. It means we’re compromising our science to fit these limitations rather than focusing on overcoming them. I truly believe that the infrastructure shouldn’t dictate what we do, it should be the other way around.

I think generative AI has come at the right time to enable us to finally overcome some of these problems because it has a lot of potential to fill in the gaps and provide us with an idea of what science we could have done, if we had all the resources necessary.

Indeed, AI could enable us to get more by doing less and avoid some of the pitfalls like selection bias. That is a really big problem when applying ML to science data sets. We need to do a lot more work to ensure that generative methods are producing meaningful science, not hallucinations. This is particularly important if they’re going to form the foundation for large pre-trained models. But I think this is going to be a really exciting era of science where we’re working collaboratively with AI, rather than it just performing a task for us.

The post From bottom-up to top-down: computational scientist Amanda Barnard on the beauty of simulations, machine learning and how the two intersect appeared first on Physics World.

]]>
Interview Amanda Barnard talks to Hamish Johnston about research in applying machine learning to a range of problems https://physicsworld.com/wp-content/uploads/2023/07/2023-07-Barnard-profile-Amanda.jpg
Risk analysis in the clinical practice: maximizing efficiency and value with a dedicated software solution https://physicsworld.com/a/risk-analysis-in-the-clinical-practice-maximizing-efficiency-and-value-with-a-dedicated-software-solution/ Tue, 04 Jul 2023 09:10:22 +0000 https://physicsworld.com/?p=108726 Join the audience for a live webinar on 18 July 2023 sponsored by IBA Dosimetry

The post Risk analysis in the clinical practice: maximizing efficiency and value with a dedicated software solution appeared first on Physics World.

]]>

Prospective risk management, based on methods like FMEA or FTA risk assessments, represents a powerful approach to establishing a stronger safety culture within a clinic. By implementing guidelines such as the AAPM Task Group 100 report, risks can be effectively assessed, and the necessary corrective actions can be identified. The potential benefits of this approach extend beyond radiation therapy and can be applied to other fields like proton therapy and nuclear medicine. Clinicians face the challenge of carrying out this activity with limited existing resources while ensuring that the investment yields improved safety and workflow efficiency. The utilization of a dedicated software tool can significantly enhance the efficiency and effectiveness of this process. During the presentation, Benjamin Sintay, executive director of radiation oncology and chief physicist at Cone Health, will demonstrate the practical application of myQA PROactive software in conducting risk analysis within a community hospital. Additionally, David Menichelli from IBA Dosimetry R&D will discuss the integration of prospective risk analysis with incident reporting.

Benefits of attending:

  • Gain insight into the importance of risk assessment
  • Acquire knowledge on the seamless integration of risk assessment into regular clinical procedures
  • Discover the tools that facilitate the implementation of a risk management program
  • Develop an understanding of how incident reporting can be utilized to validate and enhance your ongoing risk analysis.

Benjamin “BJ” Sintay is the executive director of radiation oncology & chief physicist for Cone Health and co-founder & chief innovation officer for Fuse Oncology. Benjamin received bachelor of science degrees in electrical and computer engineering from North Carolina State University in 2004. He received a PhD in biomedical engineering from the Virginia Tech-Wake Forest University School of Biomedical Engineering and Sciences in 2008. Benjamin is board certified in therapeutic medical physics by the American Board of Radiology. His interests include leader development, healthcare technology, radiotherapy quality & safety, software development, and product commercialization.

The post Risk analysis in the clinical practice: maximizing efficiency and value with a dedicated software solution appeared first on Physics World.

]]>
Webinar Join the audience for a live webinar on 18 July 2023 sponsored by IBA Dosimetry https://physicsworld.com/wp-content/uploads/2023/06/myQA-PROactive-800-wide-1.jpg
Survey finds a third of Chinese scientists in the US feel unwelcome in the country https://physicsworld.com/a/survey-finds-a-third-of-chinese-scientists-in-the-us-feel-unwelcome-in-the-country/ Mon, 03 Jul 2023 15:00:43 +0000 https://physicsworld.com/?p=108749 The so-called China Initiative has led many Chinese scientists to consider leaving the US

The post Survey finds a third of Chinese scientists in the US feel unwelcome in the country appeared first on Physics World.

]]>
Feelings of “fear and anxiety” have led many academic scientists of Chinese heritage to consider leaving the US since the launch in 2018 of the “China Initiative“. That is according to a new study from the Massachusetts Institute of Technology (MIT) and Princeton and Harvard universities, which also finds that the atmosphere has stopped Chinese scientists from applying for US government grants. The study’s authors warn that if the situation is not resolved, the US will lose “scientific talent to China and other countries”.

The China Initiative was set up by the Trump administration in November 2018 to root out and prosecute perceived Chinese spies in American research and industry. According to then Attorney General William Barr, the initiative sought to counter “the systemic efforts by the [People’s Republic of China] to enhance its economic and military strength at America’s expense”.

However, the initiative was criticized for unfairly targeting academics. The US Department of Justice (DOJ) brought more than 20 court cases against scientists of Chinese descent, charging that their connections with colleagues in China facilitated the transfer of sensitive US technology and intellectual property to the Chinese government. Most of those cases, however, ended in not guilty verdicts or hung juries.

Losing talent to other countries is actually causing national security issues

Kai Li

The US eventually shut down the initiative in early 2022, owing to what the justice department said were “perceptions that it unfairly painted Chinese Americans and United States residents of Chinese origin as disloyal”. Gisela Kusakawa, executive director of the Asian American Scholar Forum, told Physics World that the initiative “was criminalizing academic activity”.

‘Chilling effect’

The new study, which is based on a survey of over 1300 academic scientists of Chinese heritage in the US, found that over a third of respondents feel unwelcome in the US and 72% do not feel safe as academic researchers. It also reveals that almost two thirds are worried about collaborations with China and 86% perceive that it is harder to recruit top international students now than it was five years ago.

“The data are sobering,” says MIT engineer Gang Chen, who was not involved in the study but was arrested under the China Initiative in January 2021, only to have the government drop all charges a year later.

The study also highlights that the US is losing top graduate students from China, who are instead choosing to stay in China or move elsewhere in Asia or Europe. “The China Initiative and its chilling effects were caused by some policy-makers talking about national security,” says Kai Li, a computer scientist from Princeton University, who was not involved in the study. “But losing talent to other countries [as a result] is actually causing national security issues.”

Junming Huang from Princeton’s Center on Contemporary China who co-authored the study, says that returning to normality will not be easy. “It took one year after the China Initiative started to observe changes in the attitudes of Chinese scientists,” he told Physics World. “It will take longer to observe the change now that it’s ended.”

Indeed, Li thinks the pending cases of the initiative are continuing to have “a chilling effect”. As a result, people are choosing to go back to China while those who remain in the US, particularly in engineering and computer science, are not applying for federal grants for their research over fears of reprisals.

The post Survey finds a third of Chinese scientists in the US feel unwelcome in the country appeared first on Physics World.

]]>
News The so-called China Initiative has led many Chinese scientists to consider leaving the US https://physicsworld.com/wp-content/uploads/2023/07/travel-airport-16762546-iStock.jpg newsletter
Here’s why Tesla’s Master Plan 3 makes a lot of sense for a sustainable future https://physicsworld.com/a/heres-why-teslas-master-plan-3-makes-a-lot-of-sense-for-a-sustainable-future/ Mon, 03 Jul 2023 11:22:28 +0000 https://physicsworld.com/?p=108586 James McKenzie examines the Master Plan 3 released earlier this year by Tesla

The post Here’s why Tesla’s Master Plan 3 makes a lot of sense for a sustainable future appeared first on Physics World.

]]>
Whether it’s buying Twitter for $44bn, running SpaceX, or winning approval for a clinical trial of the Neuralink brain implant, the physicist-turned-business leader Elon Musk is never far from the headlines. He was again in the news earlier this year when he promised to use an investor’s day in March to lay out his vision for a “fully sustainable future” for Tesla – the electric-car company he’s been chief executive of since 2008. Musk also said he’d explain how Tesla would scale up the firm’s operations.

Many investors and analysts had expected that Tesla would unveil a cheaper, base-model electric car. But when Musk and his team eventually presented what he had dubbed Master Plan 3 (MP3), there was much disappointment. “Master Plan 3 looks like a flop,” said the Seeking Alpha financial-news website, complaining of a lack of detail, an absence of new vehicles, and nothing about, say, self-driving cars. Tesla’s stock immediately dropped by 8%.

Tesla’s first master plan was published in 2006 and called The Secret Tesla Motors Master Plan (Just Between You and Me). The title was tongue in cheek, but the message was clear. “Build sports car,” it explained. “Use that money to build an affordable car. Use that money to build an even more affordable car. While doing above, also provide zero emission electric power generation options. Don’t tell anyone.”

The strategy proved successful and was followed 10 years later by Master Plan, Part Deux, presumably a nod to the 1993 spoof Rambo movie Hot Shots! Part Deux. In fact, Musk seems to be a fan of old movies. The fastest version of the Tesla Model S car is called the Plaid, while its vehicles have an acceleration mode called Ludicrous Speed, both references to the starship in the 1987 Mel Brooks movie Spaceballs.

Movie gags aside, there was more detail in the second plan than the first. “Create stunning solar roofs with seamlessly integrated battery storage,” it said. “Expand the electric vehicle product line to address all major segments. Develop a self-driving capability that is 10x safer than manual via massive fleet learning. Enable your car to make money for you when you aren’t using it.”

Much of the lukewarm reaction to Tesla’s new master plan was simply down to the US stock markets’ notoriously short-term view of the economy

The final two aims have not yet happened but Tesla’s plan is coming along fast. Indeed, as Musk’s presentation at the investor day makes clear, he believes that, with the right measures, we can sustainably support a planet with more than eight billion people. I believe that much of the lukewarm reaction to MP3 was simply down to the US stock markets’ notoriously short-term view of the economy; for them, it’s all about the quarterly figures. Trouble is, dealing with climate change requires a long-term plan.

Well-thought out

When MP3 was published on the Tesla website in early April, an initial skim read suggested a well-thought-out plan that covered all the bases. But when I examined it in more detail on holiday, I was extremely impressed. Using data from the International Energy Agency, the plan reminds us that the world currently uses about 165 petawatt-hours of energy per year (PWh/yr), of which 80% is from fossil fuels. Losses and inefficiencies, however, mean that barely 36% of the total energy (59 PWh/yr) is actually used for the purpose intended.

But because electrically-driven power sources are far more efficient than combustion engines, the “electric economy” only needs 82 PWh/yr to do the same work. A Tesla Model 3, for example, is 3.9 times more energy efficient than a petrol-powered Toyota Corolla, while a heat pump is 3–4 times better than a gas boiler. Of course, a truly electric economy will need vast amounts of materials to build solar panels, wind turbines, batteries and so on.

What’s more, as the MP3 report estimates, we’d need 240 TWh of battery storage to manage the 30 TW of power generated from solar, wind and other renewable-energy sources. That in turn would require us to spend up to $10 trillion mining, refining and manufacturing everything from concrete, glass and steel to all sorts of rare-earth elements needed in batteries.

It is an eye-watering figure but, according to the MP3 analysis, it’s actually less than the $14 trillion the world is projected to spend over the next two decades on fossil fuels. What’s more, if the $10 trillion were spread out over 10 years, it would be only 1% of the world’s total GDP (currently $100 trillion) and only 0.5% if spread out over 20 years. It doesn’t sound implausible if we put our minds to it, especially when you realize that fossil-fuel firms made a total of $4 trillion in profits last year.
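The GDP comparison is straightforward arithmetic on the figures quoted above, as this quick check shows (the dollar amounts are MP3’s own estimates as reported here, not independent numbers):

```python
# Quick check of the Master Plan 3 percentages quoted above.
investment = 10e12   # $10 trillion total build-out (MP3 estimate)
world_gdp = 100e12   # roughly $100 trillion of annual world GDP

for years in (10, 20):
    annual_share = investment / (world_gdp * years)
    print(f"spread over {years} years: {annual_share:.1%} of world GDP per year")
# prints 1.0% for 10 years and 0.5% for 20 years, matching the figures above
```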

The challenge will be to persuade oil and gas companies to rethink their strategies because without any compunction, nothing will change how money is invested

In fact, we’d need to turn over less than 0.21% of the global land mass to build enough wind and solar power plants. Another advantage is that less mining would be required in an electrical economy than in a combustion economy. The challenge, I suspect, will be to persuade oil and gas companies to rethink their strategies because without any compunction, nothing will change how money is invested.

Wind power

Five steps to success

MP3 outlines five steps we need to take to reach an all-electrical economy. First, we need to switch to renewable power, which would cut our use of fossil fuels by 35%. Second, move to electrically-powered vehicles (a 21% reduction). Third, install heat pumps (a 22% saving). Fourth, get industry to switch to “green” hydrogen for processing metals and other high-temperature operations (a 17% cut). Finally, sustainably fuel planes and boats (a 5% saving).

Of course, none of this is new. Many companies, governments and institutions around the world have been talking about the need to expand renewable energy production, while many car firms already plan to move mostly (or completely) to electric vehicles at some point in the future. But Musk – and Tesla – make the case much more clearly than most in one well-presented report. Sure, you could challenge some of the assumptions outlined in MP3, but I don’t believe anything would fundamentally change what he has to say.

The world, for example, might adopt more nuclear, geothermal or hydroelectric power. True, but that would only mean it takes us less time to get there. It could also turn out harder than we think to remove rare-earth metals from the batteries and motors inside electric vehicles while still retaining their efficiency. But there are a lot of people working on this problem and who knows what technological breakthroughs lie round the corner?

Some have argued that the investment costs may be higher by 30–50% in certain areas. Yes, but whatever the precise figure, it will not materially change the points eloquently made by Musk at the end of the MP3 presentation. Tesla’s plans are entirely feasible and bring hope and optimism – not just for those who are investors in the company – but for all of us who are, ultimately, investors in the Earth.

The post Here’s why Tesla’s Master Plan 3 makes a lot of sense for a sustainable future appeared first on Physics World.

]]>
Opinion and reviews James McKenzie examines the Master Plan 3 released earlier this year by Tesla https://physicsworld.com/wp-content/uploads/2023/07/2023-06-Transactions-Tesla-image_mp3.png newsletter
Synchrotron X-rays image a single atom https://physicsworld.com/a/synchrotron-x-rays-image-a-single-atom/ Mon, 03 Jul 2023 09:00:43 +0000 https://physicsworld.com/?p=108693 Imaging advance will have important implications in many areas of science, including medical and environmental research

The post Synchrotron X-rays image a single atom appeared first on Physics World.

]]>
When X-rays illuminate an atom (red ball at the centre of the molecule), core-level electrons are excited. The X-ray-excited electrons then tunnel to the detector tip via overlapping atomic/molecular orbitals, which provide elemental and chemical information about the atom.

The resolution of synchrotron X-ray scanning tunnelling microscopy has reached the single-atom limit for the first time, thanks to new work by researchers at Argonne National Laboratory in the US. The advance will have important implications in many areas of science, including medical and environmental research.

“One of the most important applications of X-rays is to characterize materials,” explains study co-leader Saw Wai Hla, Argonne physicist and professor at Ohio University. “Since its discovery 128 years ago by Roentgen, this is the first time that they can be used to characterize samples at the ultimate limit of just one atom.”

Until now, the smallest sample size that could be analysed was an attogram, which is around 10,000 atoms. This is because the X-ray signal produced by a single atom is extremely weak and conventional detectors are not sensitive enough to detect it.

Exciting core-level electrons

In their work, which the researchers detail in Nature, they added a sharp metallic tip to a conventional X-ray detector to detect X-ray-excited electrons in samples containing iron or terbium atoms. The tip is placed just 1 nm above the sample and the electrons that are excited are core-level electrons – essentially “fingerprints” unique to each element. This technique is known as synchrotron X-ray scanning tunnelling microscopy (SX-STM).

Saw Wai Hla and Tolulope M. Ajayi

SX-STM combines the ultrahigh-spatial resolution of scanning tunnelling microscopy with the chemical sensitivity provided by X-ray illumination. As the sharp tip is moved across the surface of a sample, electrons tunnel through the space between the tip and the sample, creating a current. The tip detects this current and the microscope transforms it into an image that provides information on the atom under the tip.
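The reason the tip is so exquisitely local is the exponential dependence of the tunnelling current on the tip–sample separation. In the standard textbook picture (a general STM relation, not a result specific to this study):

```latex
I \;\propto\; V \, e^{-2\kappa d},
\qquad
\kappa = \frac{\sqrt{2 m \phi}}{\hbar}
```

Here $d$ is the tip–sample distance, $V$ the bias voltage, $m$ the electron mass and $\phi$ the effective tunnelling barrier height; a change in $d$ of just 0.1 nm alters the current by roughly an order of magnitude.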

“The elemental type, chemical state and even magnetic signatures are encoded in the same signal,” explains Hla, “so if we can record one atom’s X-ray signature, it is possible to extract this information directly.”

Being able to investigate an individual atom and its chemical properties will allow for the design of advanced materials with properties tuned to specific applications, adds study co-leader Volker Rose. “In our work, we looked at molecules containing terbium, which belongs to the family of rare-earth elements, used in applications like electric motors in hybrid and electric vehicles, hard disk drives, high-performance magnets, wind turbine generators, printable electronics and catalysts. The SX-STM technique now provides an avenue to explore these elements without the need to analyse large amounts of material.”

In environmental research, it will now be possible to trace potentially toxic materials down to extremely low levels, adds Hla. “The same is true for medical research, where biomolecules responsible for disease could be detected at the atomic limit,” he tells Physics World.

The team says it now wants to explore the magnetic properties of individual atoms for spintronic and quantum applications. “This will impact multiple research fields, from magnetic memory used in data storage devices, quantum sensing and quantum computing to name but a few,” explains Hla.

The post Synchrotron X-rays image a single atom appeared first on Physics World.

]]>
Research update Imaging advance will have important implications in many areas of science, including medical and environmental research https://physicsworld.com/wp-content/uploads/2023/07/One-Atom-X-ray.jpg newsletter1
European Space Agency launches Euclid dark-energy mission https://physicsworld.com/a/european-space-agency-launches-euclid-dark-energy-mission/ Sat, 01 Jul 2023 15:11:36 +0000 https://physicsworld.com/?p=108727 Craft will spend some six years creating the most accurate map yet of the large-scale structure of the universe

The post European Space Agency launches Euclid dark-energy mission appeared first on Physics World.

]]>
A craft to explore the nature of dark energy was launched today aboard a Falcon 9 rocket from Florida’s Cape Canaveral Space Force Station at 11:12 local time. The €1.4bn Euclid mission will study the large-scale structure of the universe with the aim of understanding how it evolved following the Big Bang.

Over 25 years ago physicists were astounded by the discovery that the rate of expansion of the universe was increasing – not decreasing as had been previously thought. Many physicists believe that dark energy is the cause behind the accelerating expansion yet it remains one of the biggest mysteries in cosmology.

To better our understanding of the dark universe, Euclid – a space-based telescope made by the European Space Agency (ESA) – aims to create the most accurate map yet of the large-scale structure of the universe. It uses a 1.2 m-diameter telescope, a camera and a spectrometer to plot a 3D map of the distribution of more than two billion galaxies – a view that will stretch across 10 billion light-years.

At roughly 4.7 m tall and 3.7 m in diameter, Euclid will observe galaxies and clusters of galaxies at visible and near-infrared wavelengths, revealing details of the universe’s structure and its expansion over the last three-quarters of its history – roughly the past 10 billion years. Euclid will chart this expansion rate much further back in time than existing ground-based telescopes can.

“The successful launch of Euclid marks the beginning of a new scientific endeavour to help us answer one of the most compelling questions of modern science,” says ESA director general Josef Aschbacher. “The quest to answer fundamental questions about our cosmos is what makes us human. And, often, it is what drives the progress of science and the development of powerful, far-reaching, new technologies.”

To boldly go

Euclid will now spend the next 30 days travelling to a spot in space called Lagrange Point 2 – a gravitational balance point some 1.5 million kilometres beyond the Earth’s orbit around the Sun. Once there it will then spend about three months in commissioning before studying the universe for at least six years.

Euclid was chosen for launch in 2011 and is a medium-class mission belonging to ESA’s Cosmic Vision 2015–2025 programme. The mission was initially planned to launch this year on a Russian Soyuz rocket from Europe’s Spaceport in Kourou, French Guiana. But following international sanctions after Russia’s invasion of Ukraine, ESA sought out alternatives, choosing SpaceX and the Falcon 9 rocket in October 2022.

The Euclid consortium brings together over 2000 scientists from 300 labs in 17 different countries across Europe, the US, Canada and Japan. The first data release is expected in 2025.

The post European Space Agency launches Euclid dark-energy mission appeared first on Physics World.

]]>
News Craft will spend some six years creating the most accurate map yet of the large-scale structure of the universe https://physicsworld.com/wp-content/uploads/2023/06/FzyBlRvXoAIl8Ls-small.jpg newsletter
Bin the Boffin campaign lights up London, why gravity is weak in the Indian Ocean https://physicsworld.com/a/bin-the-boffin-campaign-lights-up-london-why-gravity-is-weak-in-the-indian-ocean/ Fri, 30 Jun 2023 14:45:27 +0000 https://physicsworld.com/?p=108741 Excerpts from the Red Folder

The post Bin the Boffin campaign lights up London, why gravity is weak in the Indian Ocean appeared first on Physics World.

]]>
“Boffin” is a quintessentially British word that refers to a stereotypical scientist – usually portrayed as an oddball, grey-haired, white man in a lab coat. It is very popular with the UK’s tabloid press, which revels in headlines like “Boffins say don’t eat too many cakes” etc.

Earlier this year, the UK’s Institute of Physics (IOP) launched its “Bin the Boffin” campaign, saying that the term reinforces a harmful stereotype that may be preventing some people from considering careers in physics. The campaign was very successful in one sense; it was widely reported by the tabloids. Some newspapers were taken aback by the IOP’s drive and responded with headlines such as “Boffins: stop calling us boffins” (that one appeared in the Daily Star).

Now, the IOP has hit back by projecting its message onto the sides of London buildings associated with tabloids, including a skyscraper at Canary Wharf that is home to the Star (see figure).

Not taken lightly

The IOP’s deputy chief executive Rachel Youngman explains, “Running around at night with projectors is not something the IOP does very often or does lightly, but we want to see the word ‘boffin’ binned once and for all”. She adds, “It’s a cliché, no-one knows quite what it means, and young people have told us it puts them off a career in physics”.

The projector campaign also targeted The Sun and Youngman says that she is keen to meet with editors from the two newspapers to talk about guidelines for reporting on physicists and physics that the IOP has drawn up.

Cast your mind back to your days as a physics undergraduate and you will recall that you used 9.8 m/s² to calculate the force of gravity felt by objects on Earth. However, this value changes slightly as you move around the planet and can vary by as much as 0.7% across the globe.

Puzzling geoid low

Gravity is particularly weak in the centre of the Indian Ocean – an effect called the Indian Ocean geoid low (IOGL). Geophysicists have long puzzled over the origins of the IOGL, but now two researchers at the Centre for Earth Sciences at the Indian Institute of Science in Bengaluru say that they have worked out why it exists.

According to the Guardian, the researchers reconstructed the last 140 million years of plate tectonics in the region. Debanjan Pal and Attreyee Ghosh believe that as pieces of the oceanic plate travel under the continent of Africa, large amounts of hot and less dense material rise in the centre of the Indian Ocean. This creates a large area of low density and therefore low gravity in the region.

The duo report their results in Geophysical Research Letters.

The post Bin the Boffin campaign lights up London, why gravity is weak in the Indian Ocean appeared first on Physics World.

]]>
Blog Excerpts from the Red Folder https://physicsworld.com/wp-content/uploads/2023/06/Bin-the-Boffin.jpg
Approaching infinity…and beyond https://physicsworld.com/a/approaching-infinity-and-beyond/ Fri, 30 Jun 2023 10:00:30 +0000 https://physicsworld.com/?p=108606 Siblings Eugenia Viti and Ivan Viti explore the mathematical limits of the everyday

The post Approaching infinity…and beyond appeared first on Physics World.

]]>
Have you ever stood in an apparently endless queue, or been trapped in an everlasting meeting, thinking to yourself “This is taking absolutely forever”? Have you perhaps pondered some other of the seeming “infinities” you might come across, outside of a calculus textbook?

In this whimsical comic, cartoonist Eugenia Viti and her physicist brother Ivan Viti contemplate a few scenarios of pushing the limits of objects from everyday life. They show: the tines of a fork multiplying until you have a spoon; a single window morphing into a greenhouse; a staircase transforming to a ramp; and finally a traffic jam turning a road into a car park.

A comic showing mathematical formulae for forks, glass panes, steps and cars

What other example of limits approaching infinity in our day-to-day can you think of? Drop us an e-mail at pwld@ioppublishing.org or send us a Tweet at @PhysicsWorld to let us know.

The post Approaching infinity…and beyond appeared first on Physics World.

]]>
Blog Siblings Eugenia Viti and Ivan Viti explore the mathematical limits of the everyday https://physicsworld.com/wp-content/uploads/2023/06/LT-2023-06-viti-comic-featured.jpg
Bioelectronic medicine aims to improve type-1 diabetes management https://physicsworld.com/a/bioelectronic-medicine-aims-to-improve-type-1-diabetes-management/ Thu, 29 Jun 2023 12:48:23 +0000 https://physicsworld.com/?p=108722 Our podcast guest is the biomedical engineer Amparo Güemes González

The post Bioelectronic medicine aims to improve type-1 diabetes management appeared first on Physics World.

]]>
Type-1 diabetes is a disease that arises from a person’s inability to produce insulin, which normally regulates glucose levels in the bloodstream. While there is no cure today, type-1 diabetes can be managed by monitoring glucose levels and treatment with insulin.

Looking to the future, bioelectronic medicine could improve diabetes management thanks to the work of  Amparo Güemes González – who is featured in this episode of the Physics World Weekly podcast.

Based at the UK’s University of Cambridge, the biomedical engineer is developing advanced algorithms and neurotechnology for integration in a closed loop platform for glucose control. This work has garnered her a 2023 Rising Talent Award from the L’Oréal-UNESCO For Women In Science programme.

The post Bioelectronic medicine aims to improve type-1 diabetes management appeared first on Physics World.

]]>
Podcast Our podcast guest is the biomedical engineer Amparo Güemes González https://physicsworld.com/wp-content/uploads/2023/06/A-Guemes-list.jpg
Photon-counting CT improves cardiac imaging in infants with heart defects https://physicsworld.com/a/photon-counting-ct-improves-cardiac-imaging-in-infants-with-heart-defects/ Thu, 29 Jun 2023 12:30:13 +0000 https://physicsworld.com/?p=108632 At a similar radiation dose, photon-counting CT offers better image quality than dual-source CT in babies and infants with suspected congenital heart defects

The post Photon-counting CT improves cardiac imaging in infants with heart defects appeared first on Physics World.

]]>
Cardiac photon-counting CT

Photon-counting CT (PCCT), an advanced medical imaging technique that measures the energy of each individual X-ray photon, is known to improve cardiovascular CT imaging in adults. Now, a study from Germany published in Radiology shows that PCCT similarly improves the image quality for newborn babies and infants suspected of having congenital heart defects.

Congenital heart defects, the most common type of birth defect, are usually diagnosed using pre- and post-natal ultrasound imaging. But ultrasound does not provide sufficient image quality to make a comprehensive assessment of individual anatomy, especially in complex malformations in infants. If surgery is required, CT and MRI can be employed for treatment planning; but both have limitations when used with babies.

Researchers at the RWTH Aachen University Hospital hypothesized that first-generation PCCT might produce better-quality images than third-generation energy-integrating dual-source CT (DSCT) scans. PCCT offers the advantage of converting X-ray photons directly into electrical current, which may avoid signal loss at the detector. This should reduce electronic noise, thus increasing the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) and/or enabling imaging with reduced radiation dose.

“Infants and neonates with suspected congenital heart defects are a technically challenging group of patients for any imaging method, including CT,” comments principal investigator Timm Dirrichs. “There is a substantial clinical need to improve cardiac CT of this vulnerable group. It’s essential to carefully map the individual cardiac anatomy and possible routes of surgical intervention using the highest possible diagnostic standards.”

Dirrichs and colleagues conducted a prospective study comparing image quality and radiation exposure of 83 infants with suspected congenital heart defects who underwent contrast-enhanced DSCT (using Siemens Healthineers’ Somatom Force), 30 who underwent contrast-enhanced PCCT (using the Naeotom Alpha) and one infant who had both scans.

For each image, the researchers calculated the SNR and CNR in standardized regions-of-interest placed in the descending aorta and subcutaneous fat tissue. They also estimated the effective radiation exposure using CT dose index and dose–length product. Two radiologists, one paediatric cardiologist and one paediatric cardiac surgeon independently rated the images on a five-point scale for sharpness, overall visual contrast, delineation of vessels, motion artefacts, ring artefacts, quality of 3D reconstructions and overall image quality.
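As a rough sketch of how such region-of-interest metrics are typically computed – one common convention, which may differ in detail from the definitions used in the Radiology paper – the calculation amounts to a few lines:

```python
# Illustrative SNR/CNR calculation from two regions of interest (ROIs).
# The CT numbers (in Hounsfield units) below are made-up example values.
import numpy as np

roi_aorta = np.array([410.0, 395.0, 402.0, 408.0, 399.0])   # contrast-filled descending aorta
roi_fat = np.array([-95.0, -102.0, -98.0, -100.0, -97.0])   # subcutaneous fat tissue

noise = roi_fat.std(ddof=1)                         # image noise estimated from the fat ROI
snr = roi_aorta.mean() / noise                      # signal-to-noise ratio
cnr = (roi_aorta.mean() - roi_fat.mean()) / noise   # contrast-to-noise ratio
print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")
```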

In all but one of the PCCT scans (97%), the CT images were deemed of diagnostic quality, compared with 77% of the DSCT scans. The sole non-diagnostic PCCT exam was the result of a missed contrast agent bolus. The 19 non-diagnostic DSCT exams had prohibitively low SNR and CNR, image artefacts or inadequate contrast agent timing.

Quantitative assessment showed that both SNR and CNR were significantly higher for PCCT images, with a mean SNR of 46.3 and a CNR of 62.0, compared with 29.9 and 37.2, respectively, for DSCT. The mean effective radiation doses were similar: 0.50 mSv for PCCT and 0.52 mSv for DSCT.

Finally, in terms of overall image quality, PCCT significantly outperformed DSCT. The radiology team rated 40% of PCCT images as excellent and 47% as good, compared with 4% and 32%, respectively, for the DSCT images. The team reports that PCCT also outperformed DSCT in all of the other comparative categories.

The researchers point out that the results of their PCCT assessment are conservative, because the PCCT cohort had a younger median age, size and weight than the DSCT cohort. They attribute this to the fact that after a PCCT scanner became available, paediatric cardiac surgeons referred increasingly younger patients to them due to the image quality being obtained.

The investigators conclude that photon-counting CT offers better cardiovascular imaging quality than dual-source CT at a similar radiation dose in children with suspected heart defects. They believe that PCCT could also be useful for detailed tissue characterization, iodine mapping and creation of 3D models. “High SNR and CNR of the underlying cross-sectional CT images are crucial to delineate small cardiac structures on 3D models or virtual reality models,” they write. “The resulting holograms or 3D prints are increasingly required by paediatric cardiology surgeons for every surgery.”

The post Photon-counting CT improves cardiac imaging in infants with heart defects appeared first on Physics World.

]]>
Research update At a similar radiation dose, photon-counting CT offers better image quality than dual-source CT in babies and infants with suspected congenital heart defects https://physicsworld.com/wp-content/uploads/2023/06/PCCT-fig1-featured.jpg
Dry scroll pumps: filling the performance gap https://physicsworld.com/a/dry-scroll-pumps-filling-the-performance-gap/ Thu, 29 Jun 2023 12:00:26 +0000 https://physicsworld.com/?p=108600 With the launch of its compact mXDS3 and mXDS3s series, Edwards is offering scientific and industrial customers greater choice when it comes to the functionality and performance of its dry scroll vacuum pumps

The post Dry scroll pumps: filling the performance gap appeared first on Physics World.

]]>
Think small, win big. That mantra has, for some time, proved itself a reliable frame of reference for the vacuum specialist Edwards which, as part of a much broader product development roadmap, manufactures a portfolio of small vacuum pumps tailor-made for analytical instrumentation OEMs (to be integrated within their electron microscopy and mass spectrometry systems), high-energy physics laboratories (for deployment in accelerator beamlines and high-power laser systems), and a range of R&D and light industrial applications (including thin-film coating systems, surface-science instrumentation and leak detection).

Zoom in a little and that Edwards small-pump offering has, until now, spanned the EM and RV range of oil-sealed rotary-vane pumps (0.7–12 m³/h pumping speed); the XDD1 diaphragm pump (1.4 m³/h); and nXDSi dry scroll pumps (6–20 m³/h). Herein lies the opportunity. “We identified a gap in the scroll-pump product family and, on the back of that, a way to provide more choice, more options for our customers,” explains Dave Goodwin, Edwards’ product manager for scroll and rotary vane pumps. “The aim was to develop a compact pump with lower pumping speed than the nXDSi, enhanced performance versus the XDD1, while providing a dry alternative to our small, oil-sealed rotary-vane pumps.”

Fast-forward and that performance gap has now been filled, with the latest additions to the Edwards range of dry scroll pumps – the mXDS3 and mXDS3s – delivering a pumping speed of 3 m3/h together with an ultimate pressure of 0.1 mbar. The configured mXDS3s version (at 8 kg) comes factory-fitted with an inlet valve featuring delay opening (and is also supplied with an exhaust silencer), while the mXDS3 (7.8 kg) provides the standard pump option (with no inlet valve fitted). In the former, the inlet valve offers protection to the vacuum system when the pump is stopped (or stops due to a power failure) by preventing partially compressed gas from re-expanding through the pump inlet. “This is about additional peace of mind for the user,” says Goodwin. “The delayed opening means that when power is restored the valve does not open before the pump is up to full operating performance.”

Both versions of the pump have the same compact footprint (223 × 158 × 231 mm) and feature an IEC connector for mains supply, an on/off switch for easy control, plus a nominal rotational speed of 3000 rpm (50 Hz) and 3600 rpm (60 Hz). “The mXDS3 and mXDS3s deliver a lot of pumping density – ideal for backing turbomolecular pumps working in the medium- and high-vacuum regime,” notes Goodwin. “They’re also great for instrumentation OEMs, small vacuum-system builders in academic laboratories, as well as industry end-users.”
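
For context, the quoted pumping speed and ultimate pressure translate into rough-vacuum pump-down times of a minute or two for a small chamber. Below is a minimal sketch in Python of the textbook estimate t = (V/S) ln(p0/p1), with an assumed 10-litre chamber volume; the formula ignores outgassing and conductance losses, so it is only a first-order guide rather than an Edwards specification.

import math

S = 3.0 / 3600     # pumping speed: 3 m^3/h converted to m^3/s
V = 0.010          # chamber volume: 10 litres, an assumed example
p0 = 1013.0        # starting pressure, mbar (atmosphere)
p1 = 1.0           # target rough-vacuum pressure, mbar (well above the 0.1 mbar ultimate)

t = (V / S) * math.log(p0 / p1)   # constant-speed pump-down estimate
print(f"approximate pump-down time: {t:.0f} s ({t / 60:.1f} min)")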

Collaborative innovation

Operational benefits notwithstanding, it’s evident that the Edwards approach to product innovation is rooted firmly in a continuous-improvement mindset and ongoing dialogue with the company’s diverse customer base. “We’re systematic about taking inputs on board from our end-users,” explains Goodwin. “In this way, we endeavour to solve their problems by adapting existing products or developing new ones.”

Dave Goodwin

At the heart of that collective conversation is Edwards’ Global Technology Centre (GTC) in Burgess Hill, UK. As part of Edwards’ international R&D effort, the GTC employs a team of scientists and engineers dedicated to core technology development and validation across all of the company’s product lines, including the small-pump portfolio. Their goal: to bring through the right products, features and functionality to market – at the right time – to align with customers’ evolving vacuum requirements.

When developing the mXDS3 and mXDS3s, the first task for Goodwin and the cross-functional GTC project team – comprising fellow product managers, applications specialists and business line managers – was to work up a preliminary market requirement specification and an early-stage technology demonstrator. “We subsequently placed the demonstrators into a range of customer settings – academic, industry, OEM – to see if we had a viable product concept to push through to full commercial launch,” he notes.

The customer feedback confirmed that the Edwards team was heading in the right direction and, what’s more, underpinned a granular technical requirements-gathering exercise to fine-tune the product functionality. “At which point,” adds Goodwin, “all of the relevant learning and domain knowledge from the GTC was transferred to our manufacturing hub in Lutin, Czech Republic, for iteration of the pump design into a market-ready product and prelaunch ‘road-testing’ at selected customers.”

Making life easier for the customer

While the commercial positioning, for the most part, emphasizes compact footprint, lightweight design and pumping speed, Edwards is keen to highlight additional operational upsides of the mXDS3/mXDS3s platform. For starters, these are dry pumps, so there’s no oil for the user to check, top up or replace (as per oil-sealed rotary-vane pumps). That environmental win also translates into lower maintenance overhead and less intervention, with the user typically only required to change the tip-seal between the pump’s fixed and rotating scrolls every two years (versus two or three oil changes a year for a rotary-vane pump).

Other significant features include the helium pumping performance (which is similar to that for air and with no “memory effect”); low noise level – specified at 54.0±2.5 dB (A) – to ensure a better-quality working environment when the pump is running at ultimate vacuum; and the flexibility of being able to mount the pump horizontally (as standard) or vertically (with the motor on top) when integrating within an existing vacuum system.

“The mXDS3 and mXDS3s reinforce the breadth and depth of Edwards’ scroll-pump offering,” concludes Goodwin. “What’s more, these products showcase our unmatched application knowledge and technical expertise when it comes to design, improvement and innovation across the small-pump portfolio.”

The post Dry scroll pumps: filling the performance gap appeared first on Physics World.

]]>
Analysis With the launch of its compact mXDS3 and mXDS3s series, Edwards is offering scientific and industrial customers greater choice when it comes to the functionality and performance of its dry scroll vacuum pumps https://physicsworld.com/wp-content/uploads/2023/06/mXDS-duo.png
Pulsar timing irregularities reveal hidden gravitational-wave background https://physicsworld.com/a/pulsar-timing-irregularities-reveals-hidden-gravitational-wave-background/ Thu, 29 Jun 2023 08:36:38 +0000 https://physicsworld.com/?p=108716 Results from radio telescopes worldwide show that the universe is undulating with a background commotion of gravitational waves

The post Pulsar timing irregularities reveal hidden gravitational-wave background appeared first on Physics World.

]]>
The universe is undulating with a background commotion of gravitational waves that have been emitted by pairs of supermassive black holes. That is according to years’ worth of pulsar observations that have been conducted by several teams of radio astronomers.

Using radio telescopes in Africa, Asia, Australia, Europe and the US, the teams pioneered the technique of watching for subtle variations in the timing of radio beams from millisecond pulsars.

Millisecond pulsars are among nature’s most precise clocks – spinning neutron stars that flash radio pulses at us hundreds of times per second with unerring accuracy. As gravitational waves ripple through the space between us and the pulsars in our galaxy, they distort the distance that those pulses have to travel to reach us by about the size of a football pitch.

This results in a pulsar’s pulses arriving at Earth slightly early or slightly late, with the variations in the timing of the pulses amounting to billionths of a second, corresponding to gravitational waves with frequencies in the nanohertz regime.

These frequencies are far lower than those of the gravitational-wave events detected by LIGO and Virgo, which range from 5 to 20,000 Hz. The corresponding wavelength of these background gravitational waves is huge, with the waves stretching between two and 10 light-years from peak to peak.
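
Those figures hang together with a quick unit conversion: a wave at a few nanohertz has a period of years to decades and, travelling at the speed of light, a wavelength of a few light-years. Here is a minimal sketch in Python, with the example frequencies chosen purely for illustration.

C = 299_792_458.0                  # speed of light, m/s
SECONDS_PER_YEAR = 3.156e7         # approximate
METRES_PER_LIGHT_YEAR = C * SECONDS_PER_YEAR

for f_nhz in (3, 5, 15):           # frequencies in nanohertz (illustrative)
    f = f_nhz * 1e-9               # hertz
    period_years = 1 / (f * SECONDS_PER_YEAR)
    wavelength_ly = C / (f * METRES_PER_LIGHT_YEAR)
    print(f"{f_nhz} nHz: period = {period_years:.1f} yr, wavelength = {wavelength_ly:.1f} light-years")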

Multiple detections

The European Pulsar Timing Array (EPTA) incorporates five of the major radio observatories in France, Germany, Italy, the Netherlands and the UK, and its new results encompass more than two decades’ worth of observations. Teams in Australia, China, South Africa and the US have also simultaneously published their pulsar timing data.

“We’re all basically seeing the same thing,” Michael Keith of the University of Manchester and EPTA told Physics World. “That’s certainly encouraging.”

And what everyone is seeing are myriad gravitational waves all overlapping with one another, having been emitted by distant extragalactic sources. Think of waves landing on a beach, one after another and on top of each other.

Keith stresses that these findings are different to the distinct events seen by LIGO, which come from merging neutron stars or stellar-mass black holes.

“What we’re looking at is a background of noise, with gravitational waves from all over constantly washing over the Earth.”

The puzzle of older data

However, the finding comes with a note of caution. Previously, the International Pulsar Timing Array (IPTA), which is an umbrella organization for all the different groups in the world working on these detections, had set out criteria for confirming a detection.

“The slightly awkward thing is that I believe nobody has reached the threshold that IPTA set out,” says Keith. “But we do have quite a lot of confidence in the evidence.”

Assuming the gravitational-wave background detection is real, the waves are being emitted by binary supermassive black holes – the type we expect to find at the centres of galaxies. A pair of supermassive black holes forms when two galaxies merge and, just as their parent galaxies did, the two black holes will themselves eventually merge.

When this happens, they’ll emit stronger gravitational waves at a higher frequency that will be detectable by the space-based Laser Interferometer Space Antenna (LISA), a proposed mission due to launch in the 2030s.

The EPTA team, which worked in conjunction with Indian and Japanese scientists, also found something puzzling. The EPTA telescopes have been observing and timing pulsars since the 1990s, but the gravitational-wave signal was strongest in the most recent 10-year dataset. When the entire data was added, the signal faded.

“Adding more data shouldn’t make things worse,” says Keith. “This is something that concerns us a little bit.”

One possible explanation is that the more recent data were collected using more sophisticated observing techniques and technology, and that the earlier data by comparison were lower quality with more noise. However, it could also mean that the signal really was weaker back then.

“It’s potentially very exciting because it could turn out that what we’re seeing is the gravitational-wave signal changing over time,” says Keith.

A window on an exotic universe

The EPTA researchers based their analysis on 25 of the brightest and most stable millisecond pulsars in the galaxy. The US team at the NANOGrav Physics Frontiers Center had a larger sample of 67 pulsars, but these were observed across only 15 years, so it unfortunately has no older dataset to compare with EPTA’s.

Either way, the strength of the gravitational-wave background suggests a huge population of binary supermassive black holes in the universe – hundreds of thousands of pairs, if not millions. This is powerful evidence for models of the hierarchical formation of galaxies, whereby galaxies grow by merging with other galaxies.

The next step is to sift through the background, distinguish individual gravitational-wave sources and trace them back to their origins, where the data can then be combined with electromagnetic observations. There is also the potential for new and unexpected discoveries to be made in this new frontier.

“I do think it’s going to welcome in an exotic world over the next few years,” says Keith. “It’s opening a new window and a new way of looking at the universe.”

The results from EPTA are published in Astronomy and Astrophysics. The findings from NANOGrav are published in The Astrophysical Journal Letters. Data from the Chinese Pulsar Timing Array are published in Research in Astronomy and Astrophysics. The Australian Parkes Pulsar Timing Array has published its findings in The Astrophysical Journal Letters and Publications of the Astronomical Society of Australia.

The post Pulsar timing irregularities reveal hidden gravitational-wave background appeared first on Physics World.

]]>
Research update Results from radio telescopes worldwide show that the universe is undulating with a background commotion of gravitational waves https://physicsworld.com/wp-content/uploads/2023/06/NANOGrav_PTA_GWB_15yr_wide.jpg newsletter
Innovative devices ramp the resolution of PET imaging https://physicsworld.com/a/innovative-devices-ramp-the-resolution-of-pet-imaging/ Wed, 28 Jun 2023 12:00:35 +0000 https://physicsworld.com/?p=108639 The SNMMI annual meeting saw researchers highlight novel techniques for improving the performance of PET scanners

The post Innovative devices ramp the resolution of PET imaging appeared first on Physics World.

]]>
The Annual Meeting of the Society of Nuclear Medicine and Molecular Imaging (SNMMI), held this week in Chicago, saw researchers showcase the latest technology developments, clinical advances and new radiopharmaceuticals for imaging and treatment of disease. Among the device innovations highlighted at the meeting, investigators presented some novel instrumentation designed to improve the performance of PET.

‘Outsert’ device boosts PET performance

Researchers at Washington University in St. Louis are using a new technology called “augmented whole-body scanning via magnifying PET” (AWSM-PET) to enhance the resolution and sensitivity of clinical whole-body PET/CT imaging. The cost-effective technology uses high-resolution add-on detectors that simultaneously scan a patient during a standard whole-body PET scan.

“Whole-body PET/CT imaging is broadly used for cancer staging and restaging and to evaluate patients’ response to treatment interventions; however, its diagnostic accuracy is compromised when the lesions are very small or exhibit weak signals,” explained Yuan-Chuan Tai, who presented the study at the meeting. “Our novel AWSM-PET prototype helps to tackle two of the key limitations in whole-body PET imaging: image resolution and overall system sensitivity.”

The AWSM-PET technology utilizes two high-resolution PET detectors that are placed outside of a scanner’s axial imaging field-of-view, which the researchers call an “outsert” device. Each outsert panel comprises 32 LSO crystal arrays, each containing 30×30 elements (0.97×0.97×10.0 mm each). The device simultaneously acquires high-resolution PET data while a patient undergoes whole-body PET, requiring no additional scanning time. The team also developed custom reconstruction and correction algorithms to jointly reconstruct the data.

To test their technology, the researchers used a prototype AWSM-PET device implemented on a Siemens Biograph Vision PET/CT scanner. They imaged cylindrical phantoms containing tumour inserts of varying size, observing a clear improvement in image resolution when data from the outsert device were included. They note that the outsert detectors exhibited excellent spatial, energy and timing resolution, with a system-level coincidence resolving time of 217 ps – suitable for time-of-flight applications.
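
As a back-of-the-envelope check on those specifications, each panel packs tens of thousands of crystals, and a 217 ps coincidence resolving time localizes the annihilation point along each line of response to roughly 3 cm (via delta_x = c * delta_t / 2). A short sketch in Python, using only the numbers quoted above:

C = 299_792_458.0                        # speed of light, m/s

arrays_per_panel = 32
elements_per_array = 30 * 30
crystals_per_panel = arrays_per_panel * elements_per_array
print(f"crystals per outsert panel: {crystals_per_panel}")       # 28,800

crt = 217e-12                            # coincidence resolving time, s
delta_x_mm = C * crt / 2 * 1e3           # time-of-flight localization, mm
print(f"TOF localization along the line of response: ~{delta_x_mm:.0f} mm")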

“The additional high-resolution data from the AWSM-PET device can enhance the overall image resolution and reduce statistical noise,” noted Tai. “The potential improvement in diagnostic accuracy of clinical whole-body PET/CT may benefit cancer patients.”

Tai and colleagues plan to start a pilot human imaging trial later this year at the Washington University School of Medicine in St. Louis. The study will compare the diagnostic accuracy of AWSM-PET versus standard-of-care whole-body PET/CT.

A quantum leap in brain PET resolution

High spatial resolution is essential for effective brain PET, to enable visualization and characterization of biological processes occurring in small cerebral structures. With this aim, a research team headed up at the Université de Sherbrooke in Canada has developed an ultrahigh-resolution (UHR) brain PET scanner. The system enabled characterization of previously indistinguishable brain regions that are involved in conditions such as Alzheimer’s disease, depressive and visual attention disorders, and tinnitus.

“Up until now, PET has been useful for the study of neurological phenomena and for diagnostic purposes, but its potential has been somewhat limited by the poor spatial resolution of current PET systems,” explained master’s student Vincent Doyon, who shared the first brain images acquired with the new scanner.

The UHR scanner features pixelated detectors with one-to-one coupling between the scintillators and photodetectors. This results in a spatial resolution of 1.12 mm – more than twice as good as the current state of the art for brain PET imaging. Having demonstrated the UHR scanner’s imaging capabilities using resolution phantoms and preclinical studies, the researchers have now investigated its potential for human brain imaging.

Four patients underwent a clinical 18F-FDG PET exam on a whole-body PET scanner for 10 to 20 min, followed by a 30- to 60-min brain scan on the UHR scanner. The team reconstructed the UHR images using an OSEM algorithm, with CT-based attenuation and scatter correction, and then performed region identification and calculated standardized uptake values relative to the cerebellum.
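
For readers unfamiliar with the metric, a standardized uptake value (SUV) normalizes the measured activity concentration to the injected dose per unit body weight, and referencing it to the cerebellum gives an SUV ratio. Below is a minimal sketch in Python; the dose, weight and uptake values are invented for illustration and are not figures from the study.

def suv(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    """SUV = tissue activity concentration / (injected dose / body weight)."""
    return activity_bq_per_ml / (injected_dose_bq / body_weight_g)

injected_dose = 185e6      # 185 MBq of 18F-FDG (assumed)
weight = 70_000            # patient weight in grams (assumed)

suv_region = suv(12_000, injected_dose, weight)        # e.g. a thalamic nucleus
suv_cerebellum = suv(9_000, injected_dose, weight)     # reference region

print(f"SUV region: {suv_region:.2f}, SUV cerebellum: {suv_cerebellum:.2f}")
print(f"SUVR relative to cerebellum: {suv_region / suv_cerebellum:.2f}")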

Ultrahigh-resolution brain PET imaging

The UHR images clearly identified several regions of the brain (particularly in the brainstem) that could not be resolved by the whole-body scanner, including many structures that had never been seen before using FDG-PET. The team also observed hypermetabolic regions along the cortical surface in the UHR images that were hardly perceived with the whole-body PET scanner.

Doyon pointed out that while standard PET images usually visualize the thalamus as a uniform mass, the UHR images could be segmented into smaller thalamic nuclei. This is a promising finding, he explained, as these nuclei are involved in many physiological functions and affected by diseases in specific ways.

“The UHR scanner is a quantum leap for PET image resolution,” said Doyon. “Proper visualization of brainstem nuclei will provide the ability to detect early changes associated with many diseases and offer a potential avenue for early diagnosis. This will impact both research and clinical settings.” The first UHR prototype is now fully operational and being used for research at the Sherbrooke Molecular Imaging Center.

The post Innovative devices ramp the resolution of PET imaging appeared first on Physics World.

]]>
Research update The SNMMI annual meeting saw researchers highlight novel techniques for improving the performance of PET scanners https://physicsworld.com/wp-content/uploads/2023/06/SNMMI-AWSM-PET.jpg
Applied magnetic field flips a material’s thermal expansion https://physicsworld.com/a/applied-magnetic-field-flips-a-materials-thermal-expansion/ Wed, 28 Jun 2023 09:00:55 +0000 https://physicsworld.com/?p=108562 New technique could be an alternative to chemical substitution for regulating the expansion of magnetic materials

The post Applied magnetic field flips a material’s thermal expansion appeared first on Physics World.

]]>
Most materials expand when heated. A few, such as water just above freezing, contract. Now, for the first time, physicists have found a material that switches from expanding to contracting in the presence of an applied magnetic field. The discovery of this field-induced sign change could offer a new way of controlling a material’s thermal expansion – a prospect that would have many industrial applications as well as interest for fundamental research.

In devices made from many different materials, any mismatch in how these materials behave when heated – their positive or negative coefficients of thermal expansion (CTEs) – can have important and sometimes unwanted consequences. For example, a component that combines materials with very different CTEs may be prone to deforming, cracking or otherwise failing when the temperature changes.

Since this effect is ultimately due to different atoms vibrating at different frequencies, it is sometimes possible to tune the size of the CTE by substituting one element for another in the material’s chemical formula. However, for most materials, this chemical substitution process is very limited in its scope.

From negative to positive

Using external variables such as magnetic or electric fields to tune a material’s CTE would be much more flexible than chemical substitution, and researchers had previously shown that this was possible with certain magnetic materials. In those studies, however, only the magnitude of the CTE had changed with magnetic field, not its sign.

In the new work, a team led by Youwen Long prepared a rare-earth chromate, DyCrO4, in two isomorphic phases: a zircon-type phase and a scheelite-type phase. The first of these phases was created using standard solid-state annealing at ambient pressure, while the second used high-pressure annealing. To their surprise, the researchers found that for both phases, the sign of the CTE changes when a magnetic field is applied.

A figure showing how the coefficient of thermal expansion in DyCrO4 changes with magnetic field, and diagrams of the different forms of DyCrO4.

Long explains that, at zero magnetic field, zircon-type DyCrO4 exhibits a negative CTE at temperatures below its ferromagnetic ordering temperature of 23 K. When the researchers increased the magnetic field to 1.0 T, however, the CTE turned positive. In the scheelite phase, a magnetic field of up to 2.0 T can switch the initially positive CTE to negative. What is more, a “reentrant positive” CTE can be induced by increasing the field further, up to and over 3.5 T.
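
The quantity that flips sign here, the linear CTE, is simply alpha = (1/L) dL/dT, which can be estimated by finite differences from lattice-parameter-versus-temperature data. A minimal sketch in Python with invented numbers (not DyCrO4 measurements); a lattice parameter that shrinks on warming gives a negative alpha.

import numpy as np

T = np.array([5.0, 10.0, 15.0, 20.0])            # temperature, K
L = np.array([7.1740, 7.1738, 7.1735, 7.1731])   # lattice parameter, angstrom (illustrative)

alpha = np.gradient(L, T) / L                     # per kelvin; negative means contraction on heating
for t, a in zip(T, alpha):
    print(f"T = {t:4.1f} K   alpha = {a:+.2e} per K")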

The researchers say that this is the first time anyone has observed a magnetic-field-induced change in the sign of a material’s CTE. “Our study provides the first example where external magnetic fields can significantly change the thermal expansion, including the magnitude and especially the sign, opening up a new avenue to readily control the thermal expansion beyond conventional chemical substitution,” they report. “We believe that our work will be of broad interest in fundamental and applied material sciences.”

According to Long, the anomalous effect stems from the unusually strong spin-lattice coupling in DyCrO4, and it could have broad applications in applied materials science. “One immediate application area, for example, might be to control the CTE of permanent magnet motors,” he tells Physics World.

The researchers are now exploring the possibility of using magnetic fields to tune the CTE in other magnetic functional materials, to see whether this could be a universal method for regulating their CTEs. They detail their present work in Chinese Physics Letters.

The post Applied magnetic field flips a material’s thermal expansion appeared first on Physics World.

]]>
Research update New technique could be an alternative to chemical substitution for regulating the expansion of magnetic materials https://physicsworld.com/wp-content/uploads/2023/06/magnetic-field-landscape-174962093-iStock_enot-poloskun.jpg
Wearable scanner measures brain function in people on the move https://physicsworld.com/a/wearable-scanner-measures-brain-function-in-people-on-the-move/ Tue, 27 Jun 2023 08:45:44 +0000 https://physicsworld.com/?p=108593 A wearable MEG system combined with matrix coil magnetic shielding enables brain scanning while people stand and walk around

The post Wearable scanner measures brain function in people on the move appeared first on Physics World.

]]>
Researcher Niall Holmes wears the brain imaging helmet

A UK-based research team has created a wearable brain scanner that can measure brain function while people are standing and walking around, paving the way for better understanding and diagnosis of neurological problems that affect movement.

As part of the project, a University of Nottingham-led team combined compact sensors with precision magnetic field control to measure tiny magnetic fields generated by the brain, enabling highly accurate recordings to be made during natural movement. The results, presented in NeuroImage, describe how the team mounted around 60 sugar-cube-sized magnetic field sensors, known as optically pumped magnetometers (OPMs), into lightweight wearable helmets to enable freedom of movement during a magnetoencephalography (MEG) recording.

As Niall Holmes, research fellow at the University of Nottingham, who led the research, explains, the project focuses on imaging the function of the human brain in “completely natural settings” to deepen understanding of what happens in our brains when we learn to walk – or of what goes wrong in the brains of patients with conditions where movement becomes impaired or uncontrollable.

“Conventional neuroimaging systems, such as MRI scanners, are simply too restrictive for us to perform natural movements, and EEG recordings during movements produce artefact-ridden data,” Holmes says.

Needle in a haystack

Neurons in the brain communicate via electric potentials and neuronal currents that produce an associated magnetic field. Measuring these fields outside the head with MEG recordings allows researchers to determine the underlying neuronal activity with uniquely high spatiotemporal precision. However, according to Holmes, this process presents a significant challenge.

“The neuronal magnetic fields are on the femtotesla level, over one billion times smaller than the magnetic field of the Earth, and many orders of magnitude smaller than magnetic fields generated by sources such as mains electricity and moving vehicles; it’s like looking for a needle in a haystack,” he says.

To address this limitation, the team built on recent developments in the miniaturization of quantum technologies to create highly accurate OPMs that work by measuring the transmission of laser light through a glass cell filled with a vapour of rubidium atoms. The laser optically pumps the atoms, aligning their electron spins. At zero magnetic field, all of the spins are aligned and no more laser light can be absorbed, so the intensity of the laser light exiting the glass cell is at a maximum.

“When a small magnetic field is applied near the cell, the spins fall out of alignment, and need to absorb more photons of laser light to re-align with the pumping laser. As photons are absorbed, the measured intensity decreases,” explains Holmes. “By monitoring the intensity of the laser light that is transmitted through the cell, we can infer the local magnetic field experienced by the atoms.”
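
In other words, the transmitted intensity traces out a resonance centred on zero field, and the field is read off from how far the signal drops below that maximum. Below is a toy model in Python, assuming a Lorentzian lineshape with an illustrative 30 nT half-width; neither number is a specification of the Nottingham sensors.

linewidth_nT = 30.0      # half-width of the zero-field resonance (assumed)
I0 = 1.0                 # relative transmitted intensity at exactly zero field

def transmission(B_nT):
    """Relative transmitted intensity versus field for a Lorentzian zero-field resonance."""
    return I0 / (1.0 + (B_nT / linewidth_nT) ** 2)

for B in (0.0, 5.0, 15.0, 30.0):     # field in nanotesla
    print(f"B = {B:5.1f} nT   transmission = {transmission(B):.3f}")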

Matrix coil

The Nottingham team also developed a “matrix coil” – a new type of active magnetic shielding made from small, simple, unit coils, each with individually controllable current – that can be redesigned in real time to shield any region in a magnetically shielded room (MSR). This allows the OPMs to continue to function as patients move freely.

“Using our matrix coil we have demonstrated, for the first time, that accurate MEG data can be acquired during ambulatory movements. This sets the groundwork for many clinical and neuroscientific paradigms that would be impossible using conventional neuroimaging systems,” says Holmes.

“For example, the scanning of patients with disorders that affect movement and balance, such as Parkinson’s disease, concussions and gait ataxia, will directly activate the brain networks associated with the movements they find most challenging, increasing our sensitivity to the neural correlates of the disorders,” he adds.

According to Holmes, freedom of movement also enables studies of spatial navigation and natural social interaction, as well as longitudinal neurodevelopment studies and the recording of epileptic activity during seizures. In doing so, it creates what he describes as “an entirely different set of boundaries for researchers and clinicians”.

“It’s exciting to think of what we might be able to learn in these areas. We are now in the process of commercializing the technology with our spin-out company Cerca Magnetics to enable these new studies,” he says.

The post Wearable scanner measures brain function in people on the move appeared first on Physics World.

]]>
Research update A wearable MEG system combined with matrix coil magnetic shielding enables brain scanning while people stand and walk around https://physicsworld.com/wp-content/uploads/2023/06/MEG-imaging-featured.jpg newsletter1
IBM’s 127-qubit processor shows quantum advantage without error correction https://physicsworld.com/a/ibms-127-qubit-processor-shows-quantum-advantage-without-error-correction/ Mon, 26 Jun 2023 19:30:21 +0000 https://physicsworld.com/?p=108629 Quantum error mitigation used to calculate 2D Ising model  

The post IBM’s 127-qubit processor shows quantum advantage without error correction appeared first on Physics World.

]]>
A 127-qubit quantum processor has been used by an international team of researchers to calculate the magnetic properties of a model 2D material. They found that their IBM quantum computer could perform a calculation that a conventional computer simply cannot, thereby showing that their processor offers quantum advantage over today’s systems – at least for this particular application. What is more, the result was achieved without the need for quantum error correction.

Quantum computers of the future could solve some complex problems that are beyond the capability of even the most powerful conventional computers – an achievement that is dubbed quantum advantage. Physicists believe that future devices would have to combine about a million quantum bits (or qubits) to gain this advantage. Today, however, the largest quantum processors contain fewer than 1000 qubits.

An important challenge in using quantum computers is that today’s qubits are very prone to errors, which can quickly destroy a quantum calculation. Quantum error correction (QEC) techniques can deal with this noise in an approach called fault-tolerant quantum computing. This involves using a large number of physical qubits to create one “logical qubit” that is much less prone to errors. As a result, a lot of hardware is needed to do calculations, and some experts believe that many years of development will be needed before the widespread use of this technique is possible.

Now, however, a team led by researchers at IBM has shown that quantum advantage can be achieved without the need for QEC. The team used a 127-qubit quantum processor to calculate the magnetization of a material using a 2D Ising model.  This model represents the magnetic properties of a 2D material using a lattice of quantum spins that interact with their nearest neighbours. Despite being very simple, the model is extremely difficult to solve.

Noise cancellation

The researchers used an approach called “noisy intermediate-scale quantum computation”, which has already been used to do some chemistry calculations. This is a race against time whereby the calculation proceeds quickly to avoid a build-up of errors. Instead of creating a universal quantum processor, the researchers encoded the Ising model directly onto the qubits themselves. They did this to take advantage of the similarities in the quantum mechanical nature of the qubits and the model being simulated – which led to a meaningful outcome without the use of QEC.

To do the calculation, the IBM team used a superconducting quantum processor chip that comprises 127 qubits. The chip runs quantum circuits 60 layers deep with a total of around 2800 two-qubit gates, which are the quantum analogue of conventional logic gates. The quantum circuit generates large, highly entangled quantum states that were used to program the 2D Ising model. This is done by performing a sequence of operations on single qubits and pairs of qubits. High-quality measurements were possible thanks to the long coherence times of the qubits and because the two-qubit gates were all calibrated to allow for optimal simultaneous operation.
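
Circuits of this type follow a general pattern: alternating layers of single-qubit rotations (the transverse-field term) and two-qubit ZZ rotations (the Ising couplings). Here is a minimal sketch using Qiskit on a six-qubit chain; the qubit count, layer count and angles are illustrative stand-ins, and the real device uses a 127-qubit heavy-hex lattice rather than a simple chain.

from qiskit import QuantumCircuit

n_qubits = 6       # stand-in for the 127 qubits of the real processor
n_layers = 4       # stand-in for the 60 layers used in the experiment
theta_x = 0.3      # single-qubit rotation angle (transverse-field term), illustrative
theta_zz = 0.5     # two-qubit rotation angle (nearest-neighbour coupling), illustrative

qc = QuantumCircuit(n_qubits)
for _ in range(n_layers):
    for q in range(n_qubits):              # transverse-field term: RX on every qubit
        qc.rx(theta_x, q)
    for q in range(n_qubits - 1):          # Ising term: RZZ between nearest neighbours
        qc.rzz(theta_zz, q, q + 1)
qc.measure_all()

print(qc.count_ops())                      # tally of RX, RZZ and measurement operations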

Mitigation, not correction

These methods do remove a large part of the noise, but errors were still an important issue. To tackle this, the IBM team applied a quantum error mitigation process using a conventional computer. This is a post-processing technique that uses software to compensate for noise, thereby allowing for the correct calculation of the magnetization.
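
One widely used post-processing strategy of this kind is zero-noise extrapolation: the same circuit is run at deliberately amplified noise levels and the measured expectation value is extrapolated back to the zero-noise limit. A minimal numerical sketch in Python follows, with invented measurement values and a simple exponential-decay fit; it illustrates the idea rather than reproducing IBM's exact procedure.

import numpy as np

noise_scale = np.array([1.0, 1.5, 2.0, 3.0])    # noise amplification factors
measured = np.array([0.61, 0.52, 0.45, 0.33])   # hypothetical noisy <Z> expectation values

# Model <Z>(s) = A * exp(-b * s); fit a straight line to the logarithm
slope, intercept = np.polyfit(noise_scale, np.log(measured), 1)
mitigated = np.exp(intercept)                   # extrapolated value at zero noise (s = 0)

print(f"raw value at s = 1: {measured[0]:.3f}")
print(f"error-mitigated estimate: {mitigated:.3f}")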

The team’s quantum calculations showed a clear advantage over conventional computers, but this advantage is not purely about computational speed. Instead, it comes from the ability of the 127-qubit processor to encode a large number of configurations of the Ising model – something that conventional computers would not have enough memory to achieve.

IBM’s Kristan Temme, who is co-author of a Nature paper that describes the work, believes that the research is a decisive step towards the implementation of more general near-term quantum algorithms before fault-tolerant quantum computers become available. He says that the team has shown that it is possible to obtain accurate expectation values of the model system from circuits that are only limited by the coherence time of the hardware. He calls their method for quantum error mitigation “the essential ingredient” for such applications in the near future. “We are very eager to put this new tool to use and to explore which of the many proposed near-term quantum algorithms will be able to provide an advantage over current classical methods in practice”, he tells Physics World.

John Preskill at the California Institute of Technology in the US, who was not involved in this research, says that he is “impressed” by the quality of the device performance, which he thinks is the team’s most important achievement. He adds that the results strengthen the evidence that near-term quantum computers can be used as instruments for physics exploration and discovery.

The post IBM’s 127-qubit processor shows quantum advantage without error correction appeared first on Physics World.

]]>
Research update Quantum error mitigation used to calculate 2D Ising model   https://physicsworld.com/wp-content/uploads/2023/06/IBM-Eagle.jpg newsletter1
John Goodenough: Nobel-prize-winning battery pioneer dies aged 100 https://physicsworld.com/a/john-goodenough-nobel-prize-winning-battery-pioneer-dies-aged-100/ Mon, 26 Jun 2023 16:02:33 +0000 https://physicsworld.com/?p=108648 Oldest-ever Nobel laureate led the development of lithium-ion batteries

The post John Goodenough: Nobel-prize-winning battery pioneer dies aged 100 appeared first on Physics World.

]]>
The materials scientist John Goodenough, who pioneered the development of lithium-ion batteries, died on 25 June at the age of 100. Goodenough’s work, which he led in the 1970s and 1980s, went on to power a revolution in handheld electronics and electric vehicles. He was awarded a share of the 2019 Nobel Prize for Chemistry, when he became the oldest ever Nobel laureate at the age of 97.

Born in Jena, Germany, on 25 July 1922 to American parents, Goodenough received a BS in mathematics from Yale University in 1944. After serving as a meteorologist for the US Army during the Second World War, he was awarded a PhD in physics from the University of Chicago in 1952.

After his doctorate, Goodenough went to the Massachusetts Institute of Technology’s Lincoln Laboratory where he mostly worked on random-access memory used in computers. In 1976 he moved to the University of Oxford in the UK, where he led the development of lithium-ion rechargeable batteries.

Powering a revolution

At the time, Stanley Whittingham from Stanford University had been developing new energy systems when he discovered that a battery cathode made of titanium disulphide can absorb lots of lithium ions from a metallic lithium anode.

Building on this finding, in 1979 Goodenough discovered that an even better performing cathode can be made from cobalt oxide. This work showed that it would be possible to achieve a high density of stored energy with an anode other than metallic lithium.

The trouble with metallic lithium is that while it is an excellent anode material because it readily gives up electrons, it is highly reactive. Akira Yoshino from the Asahi Kasei Corporation solved this problem in 1985 by creating a carbon-based anode that is able to absorb large numbers of lithium ions.

This work removed the need to use reactive metallic lithium, and the first commercial lithium-ion battery appeared in 1991. Since then, the devices have powered a revolution in handheld electronics and electric vehicles. It was for this work that Goodenough, Whittingham and Yoshino received the 2019 Nobel Prize for Chemistry.

Back in the US

In 1986 Goodenough returned to the US, joining the University of Texas at Austin where he was to remain for the rest of his career. In 2006 Goodenough established the John B and Irene W Goodenough Endowed Research Fund in Engineering at the university.

Goodenough is the author of eight books including Magnetism and the Chemical Bond, which was published in 1963. He also wrote an autobiography – Witness to Grace – in 2008. As well as the Nobel prize, Goodenough received many other awards including the Japan Prize (2001), the Enrico Fermi Award (2009) and the US National Medal of Science (2011).

“John’s legacy as a brilliant scientist is immeasurable — his discoveries improved the lives of billions of people around the world,” says Jay Hartzell, president of the University of Texas at Austin. “He was a leader at the cutting edge of scientific research throughout the many decades of his career, and he never ceased searching for innovative energy-storage solutions.”

The post John Goodenough: Nobel-prize-winning battery pioneer dies aged 100 appeared first on Physics World.

]]>
News Oldest-ever Nobel laureate led the development of lithium-ion batteries https://physicsworld.com/wp-content/uploads/2023/06/goodenough_holdingbattery_edit-Large-1200x800-c-default.jpeg newsletter
Nuclear Now by Oliver Stone – putting nuclear energy back on the table https://physicsworld.com/a/nuclear-now-by-oliver-stone-putting-nuclear-energy-back-on-the-table/ Mon, 26 Jun 2023 13:43:23 +0000 https://physicsworld.com/?p=108578 Robert P Crease reveals the lessons from Oliver Stone’s new movie Nuclear Now

The post <em>Nuclear Now</em> by Oliver Stone – putting nuclear energy back on the table appeared first on Physics World.

]]>

Nuclear Now – the new documentary movie from Oliver Stone – has a messianic flavour. Global warming is an existential threat. Humanity has the right technology to save itself. Malevolent forces stand in the way. But with leadership, courage and reason we can prevail – provided we turn to nuclear power, that is. For Stone, nuclear power has gone from hero to zero and back again.

Nuclear power was born right after the Second World War with a sterling future. Cheap, reliable and compact, it could power anything, supporters claimed, and forestall looming disasters. Like all Stone’s movies, Nuclear Now is packed with dramatic images, including crumbling glaciers, violent explosions, smoke-filled cities and flooded urban areas. Archival clips illustrate naïve mid-20th century predictions of vibrant, fully electrified and utterly clean, nuclear-powered cities in the 21st century.

But by the 1970s, nuclear power was a pariah. Intimately associated with nuclear weapons, it was said to emit dangerous levels of radiation and have the potential for accidents. Seeming to confirm the latter was the 1979 meltdown of a reactor at Three Mile Island in Pennsylvania (even though little to no radiation was released) and the 1986 explosion at Chernobyl, which spread plumes of radiation over Western Europe. Opposition to nuclear power, Stone says in a voiceover, became “glamorous, virtuous and lucrative all at once”.

The movie gives us lurid scenes of skull- and gas-mask-clad protestors holding posters of skeletons carrying dead babies, of Jane Fonda addressing an anti-nuclear rock concert in morally superior language, and of officials celebrating the closing of a nuclear power plant while holding glasses of what looks to be champagne.

Still more terrifying, anti-nuclear activists made irresponsible claims that fossil fuel was “clean” or easily able to become so. In one split-second clip in the movie, a leading anti-nuclear activist shouts: “Coal or oil, anything but nuclear!” What’s so stomach-turning is not only the technical ignorance of the remark, but the fraudulent sense of moral superiority it expresses, as well as how confident many people were at the time of its truth.

A monster then loomed. Climate change had been there all along: skies had been warming, glaciers melting and seas slowly rising for decades. Until the 1980s, few humans had regarded the beast as a serious threat. No longer. But the only force that was truly able to combat it – according to the movie – was largely regarded as a pariah, beset by a cultural hysteresis that associated it with bombs and meltdowns.

No Stone movie would be complete without a conspiracy theory. Here it’s the role of oil and coal companies in promoting the idea that the low levels of radiation associated with nuclear power are dangerous (even though they are far lower than background radiation and ordinary medical treatments) and that fossil-fuel industries had corrupted leading environmentalists who had once championed nuclear technology.

Striking interviews, chilling images and vivid analogies come fast and furious. Most are a few seconds long – of smog, floods and tidal waves, of atoms and galaxies, of helpless, oil-drenched birds at the beach, and of US Senator James Inhofe dismissively tossing a snowball in the halls of Congress in 2015 to supposedly refute the idea that the climate is warming. Let’s hope that these clips are powerful enough to dent or soften the rationalizing defences and psychological shields that stand in the way of seriously considering nuclear power.

The simple and blunt message of Nuclear Now is: “We go nuclear or we die!” Does the message hold up? It depends on five premises: that climate change is an existential threat; that it’s caused by fossil fuels sending carbon dioxide and other poisons into the atmosphere; that energy consumption cannot be sufficiently cut back; that no other energy technologies even in concert can meet the demand; and that the byproducts of nuclear technology are much less dangerous than recognized.

One of the most powerful images in the movie is a scene of a few children playing on a long railway bridge high above a river. Suddenly and unexpectedly, a speeding locomotive comes into view, bearing down on the terrified kids. To try to run from the bridge would be futile; according to the voiceover by Nuclear Now’s co-writer Joshua Goldstein, that would be like thinking that we can rely on renewables.

With the unstoppable train speeding towards them, the desperate kids instead do the only thing that can save them: leap off the bridge into the water below, which is like turning to nuclear technology. “The jump is scary,” says Goldstein, “but it’s the train that’s gonna kill you.” While the kids know enough to jump – we see them doing it – we haven’t yet made up our minds whether to do it ourselves.

My main objection to the film is that it says nothing about yet another reason for opposition to nuclear power – that radiation evokes powerful and deeply entrenched terrors, as the historian Spencer Weart detailed in his insightful 1988 book Nuclear Fear. It is those terrors that make the opposition to nuclear power so difficult to confront – and leads many people to deny the existence of the train, or to believe that ways can be found to outrun it.

The critical point

The time is long gone, Stone’s movie forces us to think, when humans could ponder and judge nuclear power from a smug and superior distance. In the 21st century, that’s a fraudulent, reckless and morally self-congratulatory exercise, a consequence-free application of abstract if popular values. The virtue of Nuclear Now is that it puts nuclear technology back on the table as a possible energy source.

At the end of the movie, we see brief clips of Martin Luther King and Mahatma Gandhi. They aren’t there to comment on the technical merits of nuclear technology, of course. Stone brings them in to invoke the moral and political courage needed to use it. Inevitably, though, the last words of the movie go to Stephen Hawking, our age’s saintly symbol of successful technological struggle against adversity. “Overcome the odds. It can be done,” Hawking intones, “it can be done.”

At moments like this, Nuclear Now is way, way, way over the top. But then so is the crisis that we face.

The post <em>Nuclear Now</em> by Oliver Stone – putting nuclear energy back on the table appeared first on Physics World.

]]>
Opinion and reviews Robert P Crease reveals the lessons from Oliver Stone’s new movie Nuclear Now https://physicsworld.com/wp-content/uploads/2023/06/2023-06-CP-NuclearNow-List-notext-a.png newsletter
Medical devices take design tips from the animal kingdom https://physicsworld.com/a/medical-devices-take-design-tips-from-the-animal-kingdom/ Mon, 26 Jun 2023 11:00:16 +0000 https://physicsworld.com/?p=108581 From shrimp to butterflies, and now the pangolin and blue-ringed octopus, animals provide inspiration for a myriad of medical devices and applications

The post Medical devices take design tips from the animal kingdom appeared first on Physics World.

]]>
The animal kingdom has benefited from millions of years of biological evolution to adapt processes and characteristics to meet specific needs. Using an approach known as bioinspiration, scientists and engineers are employing insights from biology to solve today’s technology challenges and optimize the design of new materials, devices and structures.

Within the medical field, for example, researchers have designed a surgical imaging system based on the amazing eyes of the mantis shrimp, created a space blanket that allows users to control their temperature by mimicking the adaptive properties of squid skin, and fabricated an intraocular pressure sensor based on nanostructures with optical properties first discovered in the wings of a butterfly.

And this week saw the publication of two new research studies exploiting insights from biology for the benefit of human health.

Powered by pangolin scales

First up, the pangolin – the only mammal that’s completely covered in hard scales. These scales connect to the underlying skin, rather than to each other, and overlap in the style of a pine cone, enabling the pangolin to curl into a ball when threatened. And it is these scales that provided the inspiration for Metin Sitti from the Max Planck Institute for Intelligent Systems and his collaborators to design a miniature soft medical robot.

Pangolin-inspired robot

Untethered magnetic soft robots offer the potential to perform minimally invasive medical procedures inside the body. One day, such robots could be guided by magnetic fields to hard-to-reach regions where they can then deliver drugs or create heat. Localized heating can be used to stop bleeding, cut tissue or even ablate tumours. Remote generation of heat, however, requires the use of rigid metallic materials, which can compromise the compliance and safety of soft robots.

“To address this inherent trade-off between effective remote heating at long distances and compliance, we observed how pangolins in nature could still achieve flexible and unencumbered motion despite having keratin scales which are orders of magnitudes harder and stiffer than the underlying tissue layers, simply by organising the keratin scales into an overlapping structure,” write the researchers, in Nature Communications.

With this in mind, Sitti and colleagues designed and built a 20 × 10 × 0.2 mm robot comprising a soft polymer layer and a pangolin-inspired layer of overlapping metal elements. By exposing the robot to a low-frequency magnetic field, the researchers could make it roll up and move about. When exposed to a high-frequency magnetic field, the robot delivered on-demand heating (by over 70 °C) at large distances (more than 5 cm) within less than 30 s.

Illustration of the untethered magnetic robot

In proof-of-concept experiments on tissue phantoms, the team showed that a 65 mT rotating magnetic field could actuate and move an untethered robot, and that the heating scales could selectively release cargo secured to the robot with beeswax.

To further assess the robot’s clinical potential, the researchers simulated bleeding inside an ex vivo pig stomach and demonstrated that the robot could navigate to the bleeding site and use heat to stop the bleed. They also placed tumour spheroids in direct contact with the heating scales, which destroyed the spheroids after just 5 min of heat at 60 °C.

“Many questions and technical challenges still remain, although surmountable, they require more time and effort. These include the clinical utility and practicality of deploying these robots in clinical scenarios, biocompatibility issues, control and tracking,” says first author Ren Hao Soon. “In my next project, I want to continue pushing these untethered robots closer to the bedside. I hope to work closely with clinicians to identify a real medical need for which such robots might be useful.”

Emulating the octopus bite

The blue-ringed octopus is tiny, vibrant in colour, and one of the world’s most venomous marine animals. Its bite punctures the shell of its prey and then releases tetrodotoxin, a paralysing neurotoxin. “The predatory behaviour of the blue-ringed octopus inspired us with a strategy to improve topical medication,” writes a research team headed up at Sichuan University and Zhejiang University in China.

Intratissue topical medication – a method in which drugs are delivered into tissue surfaces via microneedles – offers rapid action, high drug bioavailability and minimal invasiveness. The approach can be used to inhibit tumour growth, for example, or accelerate healing. Challenges remain, however, such as adhering drug carriers to soft tissue surfaces wetted by bodily fluids and controlling the concentration of drug release.

To overcome these obstacles, first author Zhou Zhu and colleagues created a microneedle patch that provides robust tissue surface adhesion and active-injection drug delivery. Writing in Science Advances, they note that the drug-releasing microneedles work in a manner “inspired by the teeth and venom secretion of the blue-ringed octopus”.

Microneedle drug delivery platform

The researchers formed the microneedle patch from a mixture of silk fibroin and the hydrogel pluronic F127 (silk-Fp), adding heat-sensitive PNIPAm hydrogel to enable controlled drug release. The resulting hydrogel microneedles were strong enough to penetrate soft tissue or the mucus barrier.

One challenge, particularly in a humid environment, is to keep the patch stable on the tissue surface. Imitating the design of the suction cups on the octopus’ tentacles, the team created a base layer containing hydrogel suction cups and integrated the microneedles into its centre. The suction cups adhere to tissue via both negative pressure fixation and chemical bonding with tissue proteins. Even after long periods under water, the silk-Fp patch stayed firmly in place on the tissue surface.

To test the functionality of the patch, the researchers loaded the silk-Fp microneedles with the anti-inflammatory drug dexamethasone sodium phosphate (DEX) or the anticancer drug 5-fluorouracil (5-FU), and applied the patches to treat oral ulcers or early superficial tumours in animals, respectively.

The shape and strength of the microneedles enabled them to puncture into the ulcer or tumour. After entering the target tissue, the microneedles sense the body temperature and provide rapid-onset drug delivery within two hours (as the needles shrink and the PNIPAm transforms from a hydrophilic to a hydrophobic state upon heating). Over the next two days, the microneedles gradually deliver the remaining drug to maintain the therapeutic effect.

The researchers found that the silk-Fp microneedle patch could increase the healing speed of ulcers through DEX release or almost completely halt tumour growth when loaded with 5-FU. “Imitating the blue-ringed octopus biting through the shell of its prey and injecting toxic saliva, the developed silk-Fp MN could actively inject drugs into tissues,” they write.

The post Medical devices take design tips from the animal kingdom appeared first on Physics World.

]]>
Blog From shrimp to butterflies, and now the pangolin and blue-ringed octopus, animals provide inspiration for a myriad of medical devices and applications https://physicsworld.com/wp-content/uploads/2023/06/animals.jpg newsletter
Lined-up quantum dots become highly conductive https://physicsworld.com/a/lined-up-quantum-dots-become-highly-conductive/ Mon, 26 Jun 2023 08:00:59 +0000 https://physicsworld.com/?p=108512 Semiconducting nanostructures behave like metals when placed in quasi-two-dimensional superlattices

The post Lined-up quantum dots become highly conductive appeared first on Physics World.

]]>
Assemblies of quantum dots tend to be highly disordered, but when the facets of these tiny semiconducting structures are lined up like soldiers on parade, something strange happens: the dots become very good at conducting electricity. This is the finding of researchers at the RIKEN Center for Emergent Matter Science in Japan, who say that these ordered, quasi-two-dimensional “superlattices” of quantum dots could make it possible to develop faster and more efficient electronics.

Quantum dots are semiconductor structures that confine electrons in all three spatial dimensions. This confinement means that quantum dots behave in some ways like single quantum particles even though they contain thousands of atoms and measure up to 50 nm across. Thanks to their particle-like properties, quantum dots have found use in many optoelectronics applications, including solar cells, biological imaging systems and electronic displays.

There is a snag, however. The general disorderliness of quantum dot assemblies means that charge carriers do not flow efficiently through them. This makes their electrical conductivity poor, and standard techniques for introducing order have not helped much. “Although the order of the assemblies can be improved, we found that it is not enough,” says Satria Zulkarnaen Bisri, who led the RIKEN study and is now an associate professor at the Tokyo University of Agriculture and Technology.

A fresh look at quantum dots

Bisri explains that to improve quantum dots’ conductivity, we need to look at them in a different way – not as spherical objects, as is currently the case, but as chunks of matter with a suite of unique crystallographic properties inherited from their compound crystal structure. “Orientation uniformity of the quantum dots is also important,” he says. “Understanding this enabled us to formulate a way to control the assembly of the quantum dots by tuning the interaction between facets of neighbouring quantum dots.”

The researchers made their quantum dot assemblies, or superlattices, by creating what is known as a Langmuir film. Bisri describes this process as a bit like drizzling oil on the surface of water and letting it spread into a very thin layer. In their experiment, the “oil” is the quantum dots, while the “water” is a solvent that helps the dots connect to each other selectively, via certain facets, to form an ordered monolayer, or superlattice.

“The good properties of this monolayer superlattice are that the large-scale order and the coherent orientation of the quantum dot building blocks minimize energetic disorders throughout the assembly,” Bisri tells Physics World. “This allows for more precise control over the electronic properties of the dots.”

The RIKEN researchers found that they could make their system up to a million times more conductive than assemblies of quantum dots that were not connected epitaxially in this way. Bisri explains that this increase in conductivity is associated with an increase in the doping level of charge carriers in the system. At this higher doping, charge transport from one quantum dot to another is no longer governed by a hopping transport process (as occurs in an insulator), but by a delocalized transport mechanism through electronic minibands – “just as what would happen in a metallic material,” Bisri says.
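
The two regimes leave distinct fingerprints in how conductivity varies with temperature: hopping transport is thermally activated, so conductivity rises on warming, whereas metallic, band-like transport falls. An illustrative comparison in Python follows, with invented parameters that are not the RIKEN data.

import numpy as np

k_B = 8.617e-5        # Boltzmann constant, eV/K
E_a = 0.05            # hopping activation energy, eV (assumed)

T = np.array([100.0, 200.0, 300.0])                 # temperature, K
sigma_hopping = 1e-3 * np.exp(-E_a / (k_B * T))     # Arrhenius-activated hopping
sigma_metallic = 50.0 / (1.0 + 0.002 * (T - 100))   # simple metal-like decrease with T

for t, sh, sm in zip(T, sigma_hopping, sigma_metallic):
    print(f"T = {t:5.0f} K   hopping ~ {sh:.2e} S/cm   metallic ~ {sm:.1f} S/cm")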

Faster and more efficient electronic devices

High conductivity and metallic behaviour in semiconducting colloidal quantum dots could bring significant advantages for electronic devices, making it possible to develop faster and more efficient transistors, solar cells, thermoelectrics, displays and sensors (including photodetectors), Bisri adds. The materials could also be used to investigate fundamental physical phenomena such as strongly correlated and topological states.

The researchers now plan to study other quantum dot compounds. “We would also like to achieve similar or even better metallic behaviour using other means besides electrical field-induced doping,” Bisri reveals.

They detail their present work in Nature Communications.

The post Lined-up quantum dots become highly conductive appeared first on Physics World.

]]>
Research update Semiconducting nanostructures behave like metals when placed in quasi-two-dimensional superlattices https://physicsworld.com/wp-content/uploads/2023/06/working-in-glovebox.jpg newsletter1
Regulating AI will slow the pace of science https://physicsworld.com/a/regulating-ai-will-slow-the-pace-of-science/ Sat, 24 Jun 2023 10:00:28 +0000 https://physicsworld.com/?p=108525 Developments in AI are moving fast but top-down regulation is not likely to be the best approach

The post Regulating AI will slow the pace of science appeared first on Physics World.

]]>
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” That was the terrifying-sounding statement signed by more than 350 business leaders and researchers at the end of May. Released by the non-profit Center for AI Safety, signatories included the astronomer Martin Rees, who is known for his deep thoughts about the future of humanity.

Rees later explained he was less worried about “some super-intelligent ‘takeover’” and more concerned about the risk of us relying too much on big interconnected systems. “Large-scale failures of power grids, Internet and so forth can cascade into catastrophic societal breakdown,” he warned in the Times. For Rees – and many others who take an interest in such matters – we need regulation to control AI.

But who should do the regulating? I’m not sure I particularly trust tech firms to act in our best interests, while politicians are notorious for creating rules that are cumbersome, late and miss the point. Some say we should leave it to international bodies like the United Nations – in fact, the EU is already planning what it calls “the world’s first comprehensive AI law”. And anyway, is regulation even possible now that the AI genie is out of the bottle?

Sure, there are concerns. Large language models like ChatGPT are ultimately trained on data and information created by people. But it is often unclear what sources they have used, and credit is rarely given. There have been instances of ChatGPT “making up” journal references, which can erode trust in what we see, watch and read online.

With events moving at such a fast pace – the latest version of ChatGPT is due out shortly – I’m not sure anyone really knows what the future holds. Many UK universities have reacted by banning students from using AI tools, worried that they might be gaining an unfair advantage on coursework. Universities are essentially trying to win breathing space while they work out what to do in the long term.

But banning AI isn’t a wise idea, especially as it offers so many exciting possibilities. As well as addressing pressing global issues such as climate change, AI could help with day-to-day tasks: imagine a student taking a photo of a lab experiment and asking if they’ve set it up properly. AI tools could create videos or podcasts as teaching aids. Or do coding. Or spot patterns in data. It could suggest titles for research papers or summarize talks.

AI is here to stay; ultimately, it’s up to us to use it wisely.

The post Regulating AI will slow the pace of science appeared first on Physics World.

]]>
Opinion and reviews Developments in AI are moving fast but top-down regulation is not likely to be the best approach https://physicsworld.com/wp-content/uploads/2023/06/AI-tech-concept-1140691248-iStock_metamorworks.jpg newsletter
Shopping trolley could save lives, the bottle bouncing challenge https://physicsworld.com/a/shopping-trolley-could-save-lives-the-bottle-bouncing-challenge/ Fri, 23 Jun 2023 16:58:37 +0000 https://physicsworld.com/?p=108615 Excerpts from the Red Folder

The post Shopping trolley could save lives, the bottle bouncing challenge appeared first on Physics World.

]]>
Could a shopping trolley save your life? Researchers at the UK’s Liverpool John Moores University think so. They have done a study in which 2155 people used a shopping trolley (or cart) that had an electrocardiogram (ECG) sensor built into its handle. The device was able to detect whether a subject had atrial fibrillation, which is a type of irregular heartbeat that makes it much more likely that a person will have a stroke.

Shoppers in a supermarket gripped the trolley handle for at least one minute while a measurement of the person’s heartbeat was made. A green light would appear if no evidence of atrial fibrillation was detected. This null result was then confirmed by one of the researchers using a separate instrument. If evidence of atrial fibrillation was detected by the trolley handle, an independent measurement was made by one of the supermarket’s pharmacists. A cardiologist member of the team then reviewed the data and reported back to the subjects.

The study was done in four supermarkets in Liverpool over two months, and 220 people were flagged up as having an irregular heartbeat. Of these, a diagnosis of atrial fibrillation was made for 59 people – with 20 of them already knowing that they had the condition.

Happy to be tested

Liverpool John Moores’ Ian Jones says: “Nearly two-thirds of the shoppers we approached were happy to use a trolley, and the vast majority of those who declined were in a rush rather than wary of being monitored. This shows that the concept is acceptable to most people and worth testing in a larger study.” He adds: “We identified 39 patients who were unaware that they had atrial fibrillation. That’s 39 people at greater risk of stroke who received a cardiologist appointment.”

The team reported its results today in Edinburgh at ACNAP 2023, which is a scientific congress of the European Society of Cardiology.

Now, it’s time for a bit of physics fun. Take a plastic bottle and fill it with water and then drop it and observe how high it bounces. Then, take the same bottle and set it spinning about its long axis and drop it again and see what happens.

Apparently, the non-spinning bottle will bounce higher than the spinning bottle – according to a team of researchers in Chile led by Pablo Gutiérrez of O’Higgins University and Leonardo Gordillo of the University of Santiago.

Now, when I first came across this study, I assumed that the spinning bottle would bounce higher because its recoil upwards would be stabilized by the conservation of angular momentum. I was wrong, but can you work out why the spinning bottle does not bounce as high? Here’s a hint: think of the water as a shock absorber. You can read more in Physics, where you can also watch a video of the experiment.

The post Shopping trolley could save lives, the bottle bouncing challenge appeared first on Physics World.

]]>
Blog Excerpts from the Red Folder https://physicsworld.com/wp-content/uploads/2023/06/cardiogram-web-7250300-iStock_gimbat.jpg
Silicon solar cells gain new flexibility https://physicsworld.com/a/silicon-solar-cells-gain-new-flexibility/ Fri, 23 Jun 2023 08:00:30 +0000 https://physicsworld.com/?p=108510 Blunted edge technique could make silicon a viable material in applications that call for lightweight, bendable cells

The post Silicon solar cells gain new flexibility appeared first on Physics World.

]]>
A photo showing the new bendy silicon solar cells flopping over in a person's hand like a piece of flimsy card

Most silicon solar cells are completely rigid, but researchers in China, Germany and Saudi Arabia have now succeeded in making them bend and flex like paper. The new flexible cells have a baseline power conversion efficiency of 24%, and they retain 96.03% of this efficiency after 20 minutes of flapping in a laboratory-generated version of wind. They are also robust to temperature changes, losing just 0.38% of their efficiency after cycling between temperatures of –70 and 85 °C for two hours.
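As a quick sanity check on what those retention figures mean in absolute terms – assuming the quoted losses are fractions of the 24% baseline rather than percentage points – a couple of lines of arithmetic suffice:

```python
baseline = 24.0  # % power conversion efficiency of the flexible cells

after_wind = baseline * 0.9603            # cells retain 96.03% after 20 min of simulated wind
after_thermal = baseline * (1 - 0.0038)   # cells lose 0.38% after -70 to 85 degC cycling

print(f"After wind test:    {after_wind:.2f}% absolute efficiency")
print(f"After thermal test: {after_thermal:.2f}% absolute efficiency")
```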

Conventional silicon-based solar cells make up 95% of all solar cells on the market today. After decades of development, these cells are very efficient at generating electricity from sunlight. Their main drawback is that silicon is a brittle material that cracks easily when bent, meaning that standard silicon solar cells cannot be deployed on undulating or flexible surfaces.

Thin-film solar cells made from other materials, such as amorphous silicon, Cu(In,Ga)Se2, CdTe, organics or perovskites, are attractive alternatives in many ways. However, they often contain elements that are toxic (such as lead or cadmium) or scarce and expensive (such as indium or tellurium). They also suffer from low power conversion efficiency and are chemically unstable under normal operating conditions.

Shearing rather than fracturing

Scientists have long known that textured crystalline silicon wafers begin to crack at certain sharp interfaces that form between pyramid-shaped features on their edges.

In the latest work, a team of researchers led by Wenzhu Liu of the Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, used this fact to improve the wafers’ flexibility. They did this by blunting the crack-initiating interfaces with chemical and plasma etching. Once blunted, the wafer no longer fractures. Instead, it forms a microscopic network of cracks that enables it to be bent and even rolled up.

After treating the wafers in this way, the researchers demonstrated that they could use them to make heterojunction solar cells. When assembled into large flexible modules with an area greater than 10,000 cm², the new cells had a power conversion efficiency as high as 23.4%. An anti-reflective coating based on magnesium fluoride increased the efficiency further, to 24.6%.

Resistant to vibrations and repeated bending

“These devices are resistant to vibrations and repeated bending (1000 side-to-side bending cycles) and could be made into lightweight large-area flexible solar modules,” Liu tells Physics World. “They may be good for building-integrated and car-integrated photovoltaics where such properties are needed.”

While the new cells can withstand conditions that mimic winds with speeds of 30 m/s for 20 minutes, they are not yet robust to high-speed hailstones. The researchers, who report their work in Nature, say they are now looking into ways of solving this problem.

The post Silicon solar cells gain new flexibility appeared first on Physics World.

]]>
Research update Blunted edge technique could make silicon a viable material in applications that call for lightweight, bendable cells https://physicsworld.com/wp-content/uploads/2023/06/flexible-module.jpg
Intel releases 12-qubit silicon quantum chip to the quantum community https://physicsworld.com/a/intel-releases-12-qubit-silicon-quantum-chip-to-the-quantum-community/ Thu, 22 Jun 2023 16:00:38 +0000 https://physicsworld.com/?p=108572 Chip is based on silicon spin qubits, which are about a million times smaller than other qubit types

The post Intel releases 12-qubit silicon quantum chip to the quantum community appeared first on Physics World.

]]>
Intel – the world’s biggest computer-chip maker – has released its newest quantum chip and has begun shipping it to quantum scientists and engineers to use in their research. Dubbed Tunnel Falls, the chip contains a 12-qubit array and is based on silicon spin-qubit technology.

The distribution of the quantum chip to the quantum community is part of Intel’s plan to let researchers gain hands-on experience with the technology, while at the same time enabling new quantum research.

The first quantum labs to get access to the chip include the University of Maryland, Sandia National Laboratories, the University of Rochester and the University of Wisconsin-Madison.

The Tunnel Falls chip was fabricated on 300 mm silicon wafers in Intel’s “D1” transistor fabrication facility in Oregon, which can carry out extreme ultraviolet lithography (EUV) and gate and contact processing techniques.

Silicon spin qubits work by encoding information in the up or down spin of a single electron, making each qubit device essentially a single electron transistor that can be fabricated using standard CMOS processing.
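As a rough illustration of that encoding – not Intel’s actual control stack – the spin-up and spin-down states of a single electron can be treated as the two computational basis states of a qubit, with microwave pulses rotating the spin between them. A minimal state-vector sketch:

```python
import numpy as np

# Spin-up and spin-down of a single electron as the computational basis states
up = np.array([1, 0], dtype=complex)    # |0> = spin up
down = np.array([0, 1], dtype=complex)  # |1> = spin down

def rx(theta):
    """Rotation about the x-axis, as driven by a resonant microwave pulse."""
    c, s = np.cos(theta / 2), -1j * np.sin(theta / 2)
    return np.array([[c, s], [s, c]])

state = rx(np.pi / 2) @ up               # a pi/2 pulse puts the spin in a superposition
probabilities = np.abs(state) ** 2
print(f"P(spin up) = {probabilities[0]:.2f}, P(spin down) = {probabilities[1]:.2f}")
```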

The high quality of the fabrication process results in a 95% yield rate across the wafer, similar to a CMOS logic process, with each wafer providing over 24 000 quantum dot devices.

Catching up

In recent years, Intel has fallen behind competitors such as IBM and Google, which have quantum processors containing as many as 433 qubits. Yet Intel believes silicon spin qubits are superior to other qubit technologies because of their scalability. Being the size of a transistor, each qubit device measures approximately 50 × 50 nm, making it up to a million times smaller than other qubit types.

James Clarke, director of quantum hardware at Intel, calls the release of the new chip the next step in Intel’s long-term strategy to build a full-stack commercial quantum computing system. He says that while there are still challenges that must be solved towards a fault-tolerant quantum computer, the academic community can explore this technology and accelerate research development.

Intel now plans to integrate the chip into its full-stack commercial quantum computing system with the so-called Intel Quantum Software Development Kit (SDK). A next-generation quantum chip based on Tunnel Falls is already under development and is expected to be released next year.

The post Intel releases 12-qubit silicon quantum chip to the quantum community appeared first on Physics World.

]]>
News Chip is based on silicon spin qubits, which are about a million times smaller than other qubit types https://physicsworld.com/wp-content/uploads/2023/06/newsroom-tunnel-falls-chips-finger-small.jpg
Tiny 3D-printed vacuum pump could give mass spectrometry a boost https://physicsworld.com/a/tiny-3d-printed-vacuum-pump-could-give-mass-spectrometry-a-boost/ Thu, 22 Jun 2023 14:46:12 +0000 https://physicsworld.com/?p=108484 Device could help with health and environmental testing

The post Tiny 3D-printed vacuum pump could give mass spectrometry a boost appeared first on Physics World.

]]>
A tiny 3D-printed vacuum pump has been developed by researchers in the US. Luis Fernando Velásquez-García and colleagues at the Massachusetts Institute of Technology say that their device outperforms current state-of-the-art miniature pumps. It could be used to give people in remote communities access to advanced instrumentation such as mass spectrometry for health and environmental testing.

The device is a miniaturized peristaltic pump – a type of positive displacement pump that mimics the action of the muscles in our intestines. Inside the pump, fluid travels through a flexible tube, fitted around the inner edge of a rigid circular casing.

A rotor at the circle’s axis is fitted with rollers that pass along the casing’s inner circumference, squeezing the tube against the casing and pushing pockets of fluid ahead of them towards the pump outlet. As each roller passes, the tube behind it regains its original shape, creating a suction effect that draws more fluid into the pump.

Since this technique avoids direct contact between the fluid and pumping mechanism, it is now widely used to transport liquids that are chemically reactive or need to stay pristine – like blood.
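A rough way to see how this pocket-transport picture sets the flow rate is to multiply the volume of one trapped pocket by the number of pockets delivered per revolution and the rotor speed. The geometry and speed below are invented purely for illustration and do not describe the MIT device:

```python
import math

# Assumed, illustrative geometry for a miniature peristaltic pump
tube_inner_diameter_mm = 1.0
casing_circumference_mm = 60.0
n_rollers = 3
rotor_speed_rpm = 120

# Volume of fluid trapped between two adjacent rollers (1 mm^3 = 1 microlitre)
pocket_length_mm = casing_circumference_mm / n_rollers
pocket_volume_uL = math.pi * (tube_inner_diameter_mm / 2) ** 2 * pocket_length_mm

# Each revolution delivers one pocket per roller
flow_uL_per_min = pocket_volume_uL * n_rollers * rotor_speed_rpm
print(f"Pocket volume ~ {pocket_volume_uL:.1f} uL, flow ~ {flow_uL_per_min / 1000:.1f} mL/min")
```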

Vacuum challenges

So far, however, peristaltic pumps have not been widely used for creating and maintaining a vacuum through the transport of gases. This would require the rotor to both rotate at faster speeds and squeeze the flexible tube harder, which could quickly damage the pump. In addition, a tube with a circular cross-section can never be fully sealed, meaning some gas can always leak through in the wrong direction.

In the new study, Velásquez-García’s team explored how these problems could be solved through a smarter flexible tube design – made possible by 3D printing. “One of the key advantages of using 3D printing is that it allows us to aggressively prototype,” Velásquez-García explains.

“If you do this work in a clean room, where a lot of these miniaturized pumps are made, it takes a lot of time. If you want to make a change, you have to start the entire process over. In this case, we can print our pump in a matter of hours, and every time it can be a new design.”

This approach enabled Velásquez-García and team to print all of the pump’s inner workings simultaneously. For the flexible tube, they used a relatively new material that is easier to print than more mainstream flexible materials, but has the required properties.

Pair of notches

They also adapted the tube’s design – introducing a pair of notches on opposite sides of its cross-section, perpendicular to the direction of its compression by the rollers. This small alteration meant the tube required less than half the force to seal completely (see figure).

With these adaptations in place, the team’s pump could maintain vacuum pressures an order of magnitude lower than other state-of-the-art miniaturized pumps. This is achieved using lower rotor speeds, and with smaller forces imparted on the flexible tube. Their design maintained this performance over a lifetime of more than 100,000 rotations.

Velásquez-García and colleagues believe that their results clearly show just how advanced 3D printing has become. “Some people think that when you 3D print something there must be some kind of trade-off. But here our group has shown that is not the case,” Velásquez-García claims. “It really is a new paradigm. Additive manufacturing is not going to solve all the problems of the world, but it is a solution that has real legs.”

The team envisages numerous possible uses for the device: including high-purity metallurgy, coating processes, semiconductor manufacturing and especially mass spectrometry.

“With mass spectrometers, the 500-pound gorilla in the room has always been the issue of vacuum pumps,” Velásquez-García explains. “What we have shown here is ground-breaking, but it is only possible because it is 3D-printed. If we wanted to do this the standard way, we wouldn’t have been anywhere close.”

With this approach, mass spectrometers fitted with miniaturized vacuum pumps could be easily produced and deployed in remote regions – allowing small communities in developing countries to analyse blood samples, and examine water quality.

The pump is described in Additive Manufacturing.

The post Tiny 3D-printed vacuum pump could give mass spectrometry a boost appeared first on Physics World.

]]>
Research update Device could help with health and environmental testing https://physicsworld.com/wp-content/uploads/2023/06/Miniature-vacuum-pump.jpg
Sniffing out drug driving: why a breath test for cannabis is so hard to create https://physicsworld.com/a/sniffing-out-drug-driving-why-a-breath-test-for-cannabis-is-so-hard-to-create/ Thu, 22 Jun 2023 13:00:57 +0000 https://physicsworld.com/?p=108551 An expert in nanoparticle metrology and neurotoxicology is our podcast guest

The post Sniffing out drug driving: why a breath test for cannabis is so hard to create appeared first on Physics World.

]]>
Friday 23 June 2023 marks the 10th International Women in Engineering Day and we are celebrating by devoting two episodes of the Physics World Weekly podcast to women engineers who are doing cutting-edge research.

This week our guest is Kavita Jeerage, who is a research engineer at the US National Institute of Standards and Technology (NIST) in Boulder, Colorado. She is an expert in nanoparticle metrology and neurotoxicology and some of her research focuses on developing breath-test technology.

While roadside breath tests for alcohol are a standard part of policing, there is currently no device that can reliably determine whether a driver has recently consumed cannabis. This is not for lack of trying; it turns out that creating a breath test is very difficult.

Recently, Jeerage and colleagues set out to measure the amount of tetrahydrocannabinol (THC, the active ingredient in cannabis) in users’ breath, and to monitor how it changes over time. While the team was able to address some of the challenges that have been holding back the development of practical cannabis breath tests, they concluded that their research does not support the idea that detecting THC in breath as a single measurement could reliably indicate recent cannabis use.

In a conversation with Physics World’s Margaret Harris, Jeerage explains why a breath test for cannabis is so hard to create.

The post Sniffing out drug driving: why a breath test for cannabis is so hard to create appeared first on Physics World.

]]>
Podcast An expert in nanoparticle metrology and neurotoxicology is our podcast guest https://physicsworld.com/wp-content/uploads/2023/06/Kavita-and-Tara-list.jpg newsletter
What makes external lasers essential at a bore-type LINAC? https://physicsworld.com/a/what-makes-external-lasers-essential-at-a-bore-type-linac/ Thu, 22 Jun 2023 12:29:34 +0000 https://physicsworld.com/?p=108542 Join the audience for a live webinar on 12 July 2023 sponsored by LAP GmbH Laser Applikationen

The post What makes external lasers essential at a bore-type LINAC? appeared first on Physics World.

]]>

The latest development in linear accelerator (LINAC) design is the bore type, which offers many advantages for the patient and the clinical user. So the question arises of whether – and why – external lasers should also be used with a bore-type LINAC.

In this webinar, we want to evaluate this question from the point of view of the therapist, the patient and the medical physicist while giving you an overview of laser solutions from LAP.

Raphael Schmidt is responsible for the product management of laser systems for CT, MRI, LINAC, and MR-LINAC from LAP. During his studies at KIT (Karlsruhe Institute of Technology), he gained broad experience in different workflows in radiation therapy while analysing them. Raphael holds a degree in industrial engineering and management.

The post What makes external lasers essential at a bore-type LINAC? appeared first on Physics World.

]]>
Webinar Join the audience for a live webinar on 12 July 2023 sponsored by LAP GmbH Laser Applikationen https://physicsworld.com/wp-content/uploads/2022/11/2022-12-06-LAP-webinar-image.jpg
Incoming president of The Electrochemical Society aims to spark change https://physicsworld.com/a/incoming-president-of-the-electrochemical-society-aims-to-spark-change/ Thu, 22 Jun 2023 10:00:16 +0000 https://physicsworld.com/?p=108479 As the new president of The Electrochemical Society, Gerardine Botte wants to inspire a broader awareness and understanding of the importance of electrochemical technologies for changing our world

The post Incoming president of The Electrochemical Society aims to spark change appeared first on Physics World.

]]>
Gerardine Botte, the new president of The Electrochemical Society (ECS), wants the world to know that solid-state science and electrochemistry have the power to transform our lives. Alongside the battery technologies that power our mobile devices and a growing fleet of emission-free vehicles, she points out that advances in electrochemical science and engineering are crucial for the development of fuel cells and hydrogen power, as well as a myriad of novel materials, sensors and devices for applications ranging from industrial manufacturing through to healthcare.

“We can imagine the world to be a much better place, all the way from safeguarding the environment to delivering the next generation of materials and biomedical innovations,” she says. “We need to make more people aware of the impact that electrochemistry and solid-state science can have in solving the grand challenges facing humanity and our planet.”

Botte, generally known as Gerri, has been a Fellow of the ECS since 2014 and a member since 1998. Now a professor specializing in food, water and energy sustainability at Texas Tech University, Botte is no stranger to driving positive change. She is a serial entrepreneur with several start-ups to her name, and recently founded and now directs an engineering research centre supported by the National Science Foundation (NSF) that brings together five academic institutions as well as industrial partners to create a circular economy for nitrogen-based fertilizers.

“I love changing the world and having an impact,” she says. “I’m really excited to be president of The Electrochemical Society, and I want to take the opportunity I have to make a difference.”

Gerardine Botte with past president Turgut Gür and plenary speaker Linda L Horton

It’s no surprise, then, that Botte has an ambitious programme of initiatives in mind for her presidential year, which started at the beginning of June. Top of the list is extending the reach and influence of the society beyond its established community of electrochemical scientists and engineers. “Our biannual meetings already attract a diverse group of people, but we could work harder to deliver our message to other stakeholders, such as investors and policymakers,” she says. “We need to make the science relevant to people who are not directly involved in the research, so they can see the value of investing in these technologies.”

One specific initiative is to introduce different types of articles into the society’s publications. “Our journals are phenomenal, and lots of people read them because the science is incredibly strong,” Botte explains. “I’d like to work with our editors and divisions to publish perspective articles that cover broader topics, such as lifecycle analysis, sustainability and science policy, to enable our message to be heard at another level.”

As well as making connections with a wider circle of influencers, Botte is working closely with the society’s leadership team to reach out to geographic regions that are not yet well represented at the ECS. South and Central America are key priorities for Botte, who is originally from Venezuela, and she is working with chief executive officer Chris Jannuzzi to seed more student chapters throughout the region. A road-trip is in the works to inspire local scientists to become advisors for these student-led organizations, while Botte is also hoping to develop regional conferences to provide a stronger voice for under-represented populations within the society. “It’s more than simply co-sponsoring a conference and putting the ECS logo on it,” she says. “We want to bring the experiences from these different regions into the society and allow scientists from all over the world to take an active role in the society.”

Botte hopes that involving more students and scientists in the ECS will provide them with the same opportunities that she has enjoyed during her membership. “I have had tremendous support from the society throughout my career,” she says. “Through the meetings I have been able to meet with friends and colleagues, ask for advice, explore new collaborations, and find students and researchers to join our team, while the society has also provided letters of recommendation to support every level of my professional progression.”

Gerardine Botte with the student poster session winners

Botte felt welcome at the ECS from her very first meeting as a graduate student, and as president she is continuing to look for new ways to make sure that people from different backgrounds and cultures feel at home within the organization. “Even small changes, such as providing a range of menu options at our events, can help us to embrace members with different cultural backgrounds,” she says. “We want everyone to feel included, to feel that they are part of our community, and to feel that they can be part of the society.”

Botte also hopes to partner with organizations that have been working to integrate diversity and inclusion into their processes and development programmes. “We need always to have this component present when we are planning our activities,” says Botte. “As an example, we need to reflect cultural diversity in our career development courses, plus we need to ensure that everyone has the same opportunity to benefit from the support and recognition offered by the society.”

Throughout her time at the ECS Botte has placed a strong emphasis on educating the next generation of scientists. In 2006 she launched an outreach programme within her division that over the years has enabled hundreds of high-school students to learn about electrochemical technologies and their applications. Now, as president, Botte is keen to expand the existing programme of travel grants, best-paper awards, and early-career fellowships that the society offers to its student members.

“We will need to raise funds to expand our initiatives for students and early-career researchers, but investing in the next generation of scientists could be an attractive option for philanthropic support,” she says. “We are reaching out to a few foundations that might make a contribution, while the society has some fantastic industry members that might support initiatives aimed at bolstering their efforts to hire electrochemical scientists and engineers.”

Gerardine Botte networking with ECS members

Botte’s ascendancy to the presidency comes after 25 years of commitment to The Electrochemical Society. Initially working within her division, which focuses on industrial electrochemistry and electrochemical engineering (IE&EE), she has taken an active role in various committees, established and organized awards, and served first as vice-chair and then chair of the IE&EE division.

“To become president you need to play all the roles in your division and the committees, because only then do you have the knowledge and understanding to join the executive committee and work at the society level,” she says.

In 2021 Botte became third vice-president of the ECS, kickstarting a five-year progression through the society’s executive positions that includes her presidential year. “It’s very important to take the responsibility and make the commitment to the society,” says Botte. “I’m a woman, I’m Hispanic, and just being in the role helps to show future generations what’s possible, that they can feel represented, and that they can aspire to take leadership positions.”

Botte always has one eye on the future, particularly on the students and early-career scientists who will sustain and grow the field of electrochemical science and engineering, as well as the ECS, for years to come. “It really is a community and I try to make sure that all my students and researchers recognize the value of the society, not just for their own professional careers but also to ensure that future generations can benefit from its support,” she says. “My goal is to extend the impact of the society so that we can inspire and support the electrochemical scientists and engineers of the future.”

With such an ambitious presidential agenda, Botte is well aware that the next 12 months is likely to go past in a blur. “It’s a lot to do in a year, but we’ve already been paving the way and developing some of the initiatives,” she says. “With the support of Chris and the other members of the executive committee, as well as the staff, the members, and our volunteers, I think we’re going to do great.”

The post Incoming president of The Electrochemical Society aims to spark change appeared first on Physics World.

]]>
Analysis As the new president of The Electrochemical Society, Gerardine Botte wants to inspire a broader awareness and understanding of the importance of electrochemical technologies for changing our world https://physicsworld.com/wp-content/uploads/2023/06/Gerardine-Botte.jpg newsletter
MR spectroscopy maps brain glucose metabolism without requiring radiation https://physicsworld.com/a/mr-spectroscopy-maps-brain-glucose-metabolism-without-requiring-radiation/ Thu, 22 Jun 2023 08:45:32 +0000 https://physicsworld.com/?p=108499 Researchers quantify glucose metabolism and metabolites at high resolution without the need to administer radioactive substances

The post MR spectroscopy maps brain glucose metabolism without requiring radiation appeared first on Physics World.

]]>
Mapping the uptake of glucose in the brain and body provides clinicians with information about the metabolic dysfunction observed in conditions such as cancer, diabetes and Alzheimer’s disease. This mapping is traditionally performed by administering radioactive substances that act as glucose analogues and can be visualized on medical images.

Scientists know, for example, that tumour cells gobble up glucose more than normal cells. Clinicians exploit this by using 18F-FDG-PET imaging to diagnose and localize tumours and to evaluate treatments. This imaging technique, however, cannot assess downstream metabolites that may be important for diagnosis and treatment evaluation – and it also requires injecting the patient with a radioactive compound.

Another technique, magnetic resonance spectroscopy (MRS) with carbon-13, can quantify downstream metabolites but cannot precisely localize them. Meanwhile, the emerging technique of hyperpolarized 13C-MRS imaging does not provide information about some downstream metabolites, including glutamate and glutamine. Hyperpolarized 13C-MRS imaging also requires injections and uses specialized hardware that may not be available in clinical settings.

Researchers at the Medical University of Vienna have now developed a new approach to mapping glucose metabolism. The technique doesn’t rely on radiation or injections but instead uses clinically available magnetic resonance imaging (MRI) and oral ingestion of a glucose solution.

2H-MRS

In the researchers’ initial validation study, which appears in Investigative Radiology, participants were imaged with 3 T MRI after fasting overnight and again after ingesting a deuterium-tagged glucose solution (deuterium, a stable isotope of hydrogen, is not radioactive). The 2H-MRS scan included a 3D echoless free induction decay sequence, and water suppression was performed using a conventional water suppression scheme. After the MRS scan, a 3D T1-weighted magnetization-prepared rapid gradient echo readout scan was performed. An in-house software pipeline was used to process the data.
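The core step of any such pipeline is turning each acquired free induction decay (FID) into a spectrum by Fourier transformation, from which metabolite peaks can be quantified. The toy example below does this for synthetic data; it is only a generic illustration, not the Vienna group’s actual processing code.

```python
import numpy as np

# Synthetic FID: two damped resonances at +120 Hz and -80 Hz, plus noise
dwell_s = 1e-4
t = np.arange(2048) * dwell_s
fid = (np.exp(2j * np.pi * 120 * t) + 0.5 * np.exp(-2j * np.pi * 80 * t)) * np.exp(-t / 0.05)
fid += 0.01 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

# Fourier transform the FID to obtain the spectrum and locate the strongest peak
spectrum = np.fft.fftshift(np.fft.fft(fid))
freqs = np.fft.fftshift(np.fft.fftfreq(t.size, d=dwell_s))
strongest = freqs[np.argmax(np.abs(spectrum))]
print(f"Strongest resonance found at {strongest:.1f} Hz (expected near +120 Hz)")
```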

The 2H-MRS imaging approach allowed the researchers to quantify oxidative and anaerobic glucose utilization and assess neurotransmitter synthesis. Yet, they could only measure a limited number of deuterated compounds, and specialized hardware was needed to perform the imaging. So they conducted a follow-up study – now published in Nature Biomedical Engineering – to see whether proton MRS (1H-MRS) at 7 T would provide higher sensitivity, chemical specificity and spatiotemporal resolution than 2H-MRS imaging.

1H-MRS

Studies in animals have shown that deuterium-labelled glucose is readily taken up by brain cells, and deuterons are incorporated into downstream glucose metabolites. Because deuterons substitute protons in the molecule, they do not contribute to the proton spectrum, thus an increase in deuterium-labelled metabolites is reflected by a decrease in metabolite signals in 1H-MRS.
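A simple worked example of this indirect detection: if a metabolite’s proton signal drops after the deuterated glucose is ingested, the fractional decrease gives an estimate of how much of that proton pool has been replaced by (MR-invisible) deuterons. The numbers here are invented purely for illustration.

```python
# Illustrative values only: a metabolite's proton signal before and after
# ingestion of deuterium-labelled glucose (arbitrary units)
signal_baseline = 100.0
signal_after = 82.0

# Fraction of the proton pool replaced by deuterons at the labelled position
deuterium_enrichment = 1.0 - signal_after / signal_baseline
print(f"Apparent deuterium enrichment ~ {deuterium_enrichment:.0%}")  # ~ 18%
```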

In the 1H-MRS study, five participants (four males and one female) received the deuterium-labelled glucose solution, and their blood glucose levels were measured several times over 90 min. The researchers quantified glutamate, glutamine, γ-aminobutyric acid and glucose deuterated at specific molecular positions. They also mapped deuterated and non-deuterated metabolites. They note that the imaging technique does not require specialized hardware to work with clinically available systems.

Fabian Niess, a research associate involved with the Nature Biomedical Engineering study and lead author of the Investigative Radiology study, explains in a press release that the Investigative Radiology study was “an important step” to demonstrate that the approach worked on lower-field systems “because 3 T MR systems are extremely widespread in clinical applications”.

The researchers conclude that 1H-MRS imaging may facilitate glucose metabolism studies, and they are conducting additional research to verify their approach and preliminary results.

The post MR spectroscopy maps brain glucose metabolism without requiring radiation appeared first on Physics World.

]]>
Research update Researchers quantify glucose metabolism and metabolites at high resolution without the need to administer radioactive substances https://physicsworld.com/wp-content/uploads/2023/06/7Tesla.jpg newsletter
LASER lights up photonics technologies https://physicsworld.com/a/laser-lights-up-photonics-technologies/ Thu, 22 Jun 2023 08:00:48 +0000 https://physicsworld.com/?p=108285 Dedicated to showcasing the latest innovations in optics and photonics, this year's LASER World of PHOTONICS will feature more than 1200 suppliers of both industrial systems and scientific equipment

The post LASER lights up photonics technologies appeared first on Physics World.

]]>
One of the world’s leading trade fairs to be devoted to all types of laser and photonic technologies will take place on 27–30 June 2023 in Munich, Germany. LASER World of PHOTONICS will bring together thousands of delegates from all over the world to explore the latest innovations in photonics components and systems, as well as how they are being exploited in novel applications.

More than 1200 companies will be featured at the exhibition, with key industry sectors ranging from industrial manufacturing and energy technologies through to biophotonics and data processing. Other focal points for the exhibition will be light-based sensors, test and measurement solutions, as well as imaging solutions and integrated photonics.

This year’s event will once again feature a dedicated platform for quantum technologies. World of QUANTUM aims to make connections between leading suppliers of photonics equipment and quantum developers working across applications in sensing and imaging, computing and secure communications systems.

Running alongside the exhibition will be the World of Photonics Congress, which includes several thousand presentations across seven specialist conferences. A complementary programme of industry forums, panels and round-table discussions will also offer an expert insight into the commercial trends and emerging applications within the photonics sector.

Read on to find out more about some of the companies and product innovations that will be featured at the show.

Spectrometers combine speed with performance

Ocean Optics, a brand of Ocean Insight, will be introducing two new families of spectrometers at this year’s LASER World of Photonics. First up is the Ocean HR series of compact, high-resolution spectrometers, which offer fast acquisition speeds along with excellent thermal wavelength stability and low stray light to enable reliable performance in demanding environments.

Spectrometers from Ocean Optics

The HR series comes in three different models to meet the specific measurement requirements of different applications. The HR2 combines the fastest acquisition speed with a high signal-to-noise ratio (SNR), making it ideal for laser and LED characterization, while the HR4 provides high optical resolution for applications such as identifying narrow-band emission peaks in plasmas. Meanwhile, the HR6 combines high resolution with high sensitivity to ultraviolet light, enabling precise measurements of UV absorbance in solutions and gases.

The other new addition is the compact and versatile Ocean SR series of spectrometers, which combine fast acquisition speeds with a high SNR. These multi-use instruments can be exploited for applications ranging from measuring distinct spectral peaks in plasmas and emission sources to detecting subtle absorbance changes in DNA, proteins and other biological samples, and are also well suited for integration into customized systems for high-volume industrial and OEM applications.

Different models have again been optimized for different measurement requirements, with the SR2 combining speed with high SNR, the SR4 offering high resolution, and the SR6 delivering high sensitivity. Both series of spectrometers can be coupled to Ocean Optics’ light sources, accessories and software, and are supplied with OceanDirect, a cross-platform software development kit that allows users to optimize spectrometer performance, access critical data for analysis, and exploit a hardware-accelerated signal-averaging tool to improve SNR performance.
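The benefit of signal averaging of this kind is generic: averaging N independent acquisitions improves the SNR by roughly a factor of √N. The sketch below demonstrates that scaling with synthetic data; it is not an example of the OceanDirect API itself.

```python
import numpy as np

rng = np.random.default_rng(0)
true_counts = 1000.0   # hypothetical signal on each detector pixel
noise_sigma = 50.0     # per-acquisition noise

def snr_after_averaging(n_scans, n_pixels=10_000):
    """Average n_scans noisy acquisitions and report the resulting SNR."""
    scans = true_counts + noise_sigma * rng.standard_normal((n_scans, n_pixels))
    averaged = scans.mean(axis=0)
    return averaged.mean() / averaged.std()

for n in (1, 4, 16, 64):
    print(f"{n:3d} scans averaged -> SNR ~ {snr_after_averaging(n):5.0f} "
          f"(expected ~ {20 * np.sqrt(n):.0f})")
```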

  • Visit Ocean Optics at booth 421 in Hall A3 to discuss your measurement requirements.

Optical encoder sets new standards for motion control

The METIRIO encoder from SmarAct Metrology, part of the SmarAct Group that produces state-of-the-art solutions for positioning, metrology and automated micro-assembly, exploits the latest optical sensor technology to deliver exceptional accuracy and reliability in a compact package. Designed to meet the demands of today’s precision motion-control systems, the encoder can be used in applications ranging from robotics and automation through to semiconductor manufacturing and aerospace.

The METIRIO optical encoder from SmartAct

The advanced optical design exploited in the encoder provides excellent signal stability and insensitivity to environmental noise, ensuring consistent performance and smooth motion control even in demanding industrial environments. Its robust construction and reliable operation ensure consistent performance and a long life, reducing both maintenance costs and system downtime.

The encoder has a compact form factor and flexible design, allowing easy integration into a wide range of motion-control systems, while advanced signal processing algorithms are used to optimize the signal quality and its stability. By offering a new level of precision and reliability, the encoder enables companies to optimize their processes, increase productivity and improve overall operational efficiency.

“The METIRIO encoder has fundamentally changed the field of motion control,” said Sebastian Rode, CEO of SmarAct Metrology. “This revolutionary encoder has proven to be very reliable and has been well received. It underscores our commitment to delivering innovative solutions that meet the evolving needs of our customers to achieve superior performance in their motion control applications.”

  • To find out more about the METIRIO encoder, visit SmarAct at booth 107 in Hall B2.

Wireless solution enables remote monitoring of high-power lasers

Gentec-EO, which specializes in developing solutions for laser beam and terahertz source measurement and analysis, will be demonstrating its HP-BLU series of wireless detectors for the remote monitoring of high-power lasers operating at up to 15 kW. The detectors allow the laser power to be measured at distances of up to 30 m away from the point of delivery, providing safe and accurate monitoring of multi-kilowatt laser systems that are typically confined in an enclosure or operated from another room.

HP-BLU series of wireless detectors from Gentec-EO

Standard models are available for laser powers of up to 4, 12 and 15 kW, and offer an effective aperture of up to 125 mm to accommodate the largest laser beams. Customized versions are also available to handle higher output powers, or to provide larger apertures with different shapes.

The detectors are equipped with an integrated wireless data-transfer module that transmits all the laser measurement data direct to a PC. Battery operation also avoids the need for additional cabling, reducing the risk of accidents in the workspace.

  • To find out more, visit Gentec-EO at booth 319 in Hall B2.

Gratings and spectrometers tackle demanding applications

Wasatch Photonics has reoptimized its line of VPH transmission gratings and spectrometers to meet the needs of researchers and OEM developers working in low-light and light-precious applications, ranging from Raman spectroscopy and optical coherence tomography (OCT) to laser-pulse compression and astronomy.

Compact Raman and OCT spectrometers

The company’s VPH transmission gratings for laser-pulse compression offer up to 98% efficiency for a single polarization, improved uniformity over the full clear aperture, and low diffracted wavefront distortion to reduce beam distortion during amplification. These gratings can be easily cleaned and handled, and allow compact, folded optical designs, while the company also offers custom gratings for both small-quantity prototyping and large-volume production.

Wasatch Photonics has also introduced a compact and lightweight spectrometer that has been optimized for spectral-domain OCT (SD-OCT) imaging at 800 nm, responding to increasing demand for OCT imaging in medical diagnostics, image-guided surgery and laser machining. The Cobra OEM spectrometer complements the company’s existing line of Cobra spectrometers that support OCT imaging from visible wavelengths through to 1600 nm.

Wasatch Photonics will also be offering a sneak preview of its newest line of compact, configurable Raman spectrometers for wavelengths from 532 to 1064 nm. The spectrometers in the WP Raman X series have been designed to adapt to each customer’s application needs, delivering superior efficiency at every wavelength and highly reproducible Raman spectra. The unique “OEM inside” design supports exploratory research with a benchtop unit and enables the transition to a streamlined OEM module with no change in performance, accelerating product development and reducing risk.

  • Learn more about Wasatch Photonics’ full range of grating and spectrometer options at booth 272 in Hall A3.

Innovative solutions simplify time-resolved measurements

PicoQuant, a specialist in time-resolved spectrometers and microscopes, will be showcasing several recent innovations at LASER World of Photonics. First up is Prima, a compact three-colour picosecond laser module that offers a standalone and affordable solution for researchers who need excitation at 450, 510 and 635 nm within limited lab space. Offering pulsed and continuous-wave operation along with fast switching, Prima is suitable for measuring fluorescence and photoluminescence lifetimes, even in materials that have poor luminescence quantum yield.

Prima laser module from PicoQuant

Also new is the PDA-23 single-photon detector array, developed in collaboration with Pi Imaging, which measures single photons over a spectral range from 400 to 850 nm and beyond. The PDA-23 combines an array of 23 single-photon avalanche detectors, which have a high native fill factor, with microlenses for high photon detection efficiency. The low dark count rates, typically around 100 cps, are further reduced with an integrated Peltier cooler.

The PDA-23 can be combined with the MultiHarp 160, a scalable plug-and-play unit for event timing and time-correlated single-photon counting. This multichannel module supports up to 64 timing channels with high sustained count rates, an ultrashort dead time of less than 650 ps, and a time resolution of 5 ps.
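In time-correlated single-photon counting, each detected photon is time-stamped relative to the excitation pulse and the delays are histogrammed into a fluorescence decay curve. The toy simulation below illustrates the principle with a single-exponential decay; it is not PicoQuant software, and the bin width simply echoes the roughly 5 ps resolution quoted above.

```python
import numpy as np

rng = np.random.default_rng(1)
true_lifetime_ns = 3.5
bin_width_ns = 0.005   # 5 ps bins

# Simulate delays between the laser pulse and photon detection for 100k photons
delays_ns = rng.exponential(true_lifetime_ns, size=100_000)

# Histogramming the delays yields the TCSPC decay curve
bins = np.arange(0.0, 25.0, bin_width_ns)
decay_curve, _ = np.histogram(delays_ns, bins=bins)

# A crude lifetime estimate: the mean delay of a single-exponential decay
print(f"Estimated lifetime ~ {delays_ns.mean():.2f} ns (true value: {true_lifetime_ns} ns)")
```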

Last but not least is the FluoMic microscope add-on, which can be integrated with the company’s range of FluoTime spectrometers to support time-resolved micro-photoluminescence spectroscopy. Coupling the two instruments together enables both steady-state and time-resolved emission spectra to be captured at high resolution from specific areas of the sample, yielding multi-dimensional datasets that can provide valuable insights for materials science applications.

  • Visit PicoQuant at booth 216 in Hall B2 to discuss your needs.

Laser simulation software now offers support for GRIN lenses

German start-up company BeamXpert has updated its 3D laser simulation software to support GRIN lenses, both for its proprietary, real-time “beam” modelling technique and for the classical ray-tracing approach. The intuitive and easy-to-use software, called BeamXpertDESIGNER, provides an accessible solution for the accurate design of optical systems for laser radiation, delivering ISO-compliant results and available with a perpetual license at an attractive price.

BeamXpertDESIGNER software

The latest version of the software offers other improvements to simplify and streamline the design process. It now includes a comprehensive output of error messages and warnings that indicate possible sources of error, such as intersecting objects or invalid refractive indices, to aid troubleshooting and avoid potential problems. In this context, the ray-tracing engine for the detailed analysis of the aberrations of the simulated optical system has been completely updated. The user interface has also been streamlined, while the software now offers additional options for data export.

Last but not least, the component database has been expanded to include lenses from Thorlabs and GRINTECH. It now consists of more than 20,000 optical components from eleven manufacturers, which can be dragged and dropped directly into the laser simulation set-up.

  • Find out more about these latest enhancements by visiting BeamXpert at booth 421 in Hall A2. More information can also be found at beamxpert.com.

High-power laser targets demanding applications

HÜBNER Photonics, a manufacturer of high-performance lasers for applications in imaging, detection and analysis, has added a higher power model of its Cobolt Jive 561 nm laser to its 05-01 Series of diode-pumped lasers. Now with a continuous-wave output power of up to 1000 mW, the Cobolt Jive is perfectly suited to demanding applications in fluorescence microscopy, particularly super-resolution techniques like DNA–PAINT, as well as interferometric methods such as particle-flow analysis.

Cobolt Jive diode-pumped laser from HÜBNER Photonics

The Cobolt Jive is a single-frequency laser that delivers a near-perfect TEM00 (Gaussian) beam with an M² value of less than 1.1. A proprietary laser-cavity design ensures ultralow noise performance – typically less than 0.1% rms over frequencies from 20 Hz to 20 MHz – as well as an excellent power stability of less than 2% under normal operating conditions.
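For a sense of what those specifications mean in absolute terms, the noise and stability percentages can be converted into power excursions – assuming here that they apply at the maximum 1000 mW output:

```python
output_mW = 1000.0   # maximum quoted continuous-wave output power

rms_noise_mW = output_mW * 0.001      # < 0.1% rms intensity noise (20 Hz - 20 MHz)
power_drift_mW = output_mW * 0.02     # < 2% long-term power stability

print(f"RMS intensity noise:   < {rms_noise_mW:.0f} mW")
print(f"Long-term power drift: < {power_drift_mW:.0f} mW")
```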

All Cobolt lasers are manufactured using proprietary HTCure technology, providing a compact package that is hermetically sealed to maximize reliability and provide a high level of immunity to varying environmental conditions. HTCure has proven to be one of the most reliable methods for making industrial-grade lasers, with lasers built using this technology shown to withstand extreme mechanical shocks without any degradation in performance.

  • Visit HÜBNER Photonics at booth 214 in Hall B2 to explore the company’s full range of high-performance lasers.

 

The post LASER lights up photonics technologies appeared first on Physics World.

]]>
Innovation showcase Dedicated to showcasing the latest innovations in optics and photonics, this year's LASER World of PHOTONICS will feature more than 1200 suppliers of both industrial systems and scientific equipment https://physicsworld.com/wp-content/uploads/2023/06/s2000.jpeg newsletter
US releases its first strategic framework for space diplomacy https://physicsworld.com/a/us-releases-its-first-strategic-framework-for-space-diplomacy/ Wed, 21 Jun 2023 14:30:08 +0000 https://physicsworld.com/?p=108535 The 37-page document from the US Department of State calls for the creation of a “rules-based international order” for outer space

The post US releases its first strategic framework for space diplomacy appeared first on Physics World.

]]>
The US has released its first strategic framework for space diplomacy. The 37-page document, issued by the US Department of State, outlines the intention to “build international partnerships for civil and national security space”. It calls for the creation of a “rules-based international order” for outer space as well as for the US government to protect the country from “space-enabled threats”.

The document points out that the US private sector is revolutionizing the use of outer space with new technologies and business models. “The number of space-faring nations has dramatically increased,” the document notes. “Countries without current launch capacities are investing in space-based assets and infrastructure.”

A recent estimate suggests that the global space economy was worth $469bn in 2021. As a result of this growth, the framework calls for the responsible stewardship of outer space and for the US to “maximize the benefits of the growing space economy for current and future generations”.

The framework advocates actions across “three pillars”. One involves advancing US space policy and programmes internationally while reducing the potential for conflict. The second is to use US space activities for wider diplomatic goals, such as tackling climate change, while the third is to give the state department’s employees the skills to “pursue space-related policy objectives”.

The document also calls for the formation of robust multilateral coalitions such as the Artemis Accords – the agreement among NASA and its equivalents in partner countries on principles that guide lunar exploration.

Tom Stroup, president of the Satellite Industry Association, welcomes the report’s “appreciation for the need for both traditional diplomacy as well as engagement with the US and worldwide commercial space stakeholders”.

Meanwhile, NASA has awarded Blue Origin – the space venture of Amazon founder and Washington Post owner Jeff Bezos – a $3.4bn contract for the third 21st-century human landing on the Moon. NASA has scheduled the launch, tagged Artemis V, no earlier than September 2029. Elon Musk’s SpaceX has the contract for the first two lunar landings in this series.

The post US releases its first strategic framework for space diplomacy appeared first on Physics World.

]]>
News The 37-page document from the US Department of State calls for the creation of a “rules-based international order” for outer space https://physicsworld.com/wp-content/uploads/2023/06/201304210013-nasa.jpg
RadCalc’s Monte Carlo capability streamlines and automates 3D dose-volume verification https://physicsworld.com/a/radcalcs-monte-carlo-capability-streamlines-and-automates-3d-dose-volume-verification/ Wed, 21 Jun 2023 13:00:55 +0000 https://physicsworld.com/?p=108507 Monte Carlo 3D dose-check calculations help clinical physicists maintain confidence in their patient-specific QA for hard-to-treat disease indications

The post RadCalc’s Monte Carlo capability streamlines and automates 3D dose-volume verification appeared first on Physics World.

]]>
Secondary dose calculations represent a foundational building block for any patient-specific quality-assurance (QA) programme, providing at-scale validation of radiotherapy treatment plans in the radiation oncology clinic. Front-and-centre in the patient QA endeavour – and in daily use at over 2300 cancer treatment centres worldwide – is LAP’s RadCalc QA secondary check software which, for more than two decades, has provided medical physicists and dosimetrists with automated and independent dosimetric verification of their radiotherapy treatment planning systems (TPS).

An early-adopter and RadCalc devotee is Mauro Iori, director of medical physics at the Institute in Advanced Technologies and Models of Care in Oncology (IRCCS), part of the Azienda Unità Sanitaria Locale (AUSL) di Reggio Emilia in northern Italy. “We have been using RadCalc, in its various iterations, to support independent patient QA for more than 20 years – helping us to reduce our direct dosimetry QA measurements along the way,” explains Iori. “Thanks to LAP’s extensive user base, the software provides a stable, robust and uniform QA environment that integrates seamlessly with our TPS. RadCalc is also vendor-agnostic, so users can standardize the second-check QA workflow across different treatment systems and modalities.”

Monte Carlo insights

Operationally, Iori is one of five clinical physicists and three technicians (dosimetrists) within the AUSL-IRCCS radiation oncology programme, overseeing a suite of two Varian TrueBeam machines, an Accuray TomoTherapy system, a high-dose-rate Elekta brachytherapy unit and an orthovoltage X-ray device. “We treat over 1600 patients each year and cover a wide range of disease indications,” notes Iori (who also manages six other medical physicists and four technicians working in the AUSL-IRCCS radiology and nuclear medicine departments). “What’s more,” he adds, “around 65% of our external-beam radiotherapy treatments involve some form of hypofractionation.” Put simply, that means increased dose per fraction to enable significantly improved patient experience and increased patient throughput – all part of an operational drive for enhanced workflow efficiency.

Mauro Iori

Given AUSL-IRCCS’s clinical emphasis on hypofractionation, Iori and colleagues rely heavily on RadCalc’s Monte Carlo software module for automated 3D dose-volume verification. The goal is to maintain confidence in QA process accuracy across harder-to-treat clinical indications – ensuring that the planning target volume is covered, while guaranteeing plan quality by comparing dose to adjacent critical structures and organs-at-risk (OARs). Equally important, when treating patients with escalated dose per fraction, the medical physics team needs to know straight away if something is not right – in the case of a machine error, for example, or incorrect patient set-up. “For this type of check,” notes Iori, “the log-file analysis and in vivo dosimetry modules in RadCalc constitute a complementary and dedicated tool.”

Under the hood, RadCalc’s Monte Carlo module relies on BEAMnrc (a well-established simulation system for external-beam sources in radiotherapy) utilizing proprietary machine modelling acquired by LAP from McGill University in Canada. The software’s 3D functionality is reinforced by RadCalcAIR (Automated Import and Report) to give users a fully automated second-check process with percent difference, dose-volume histogram (DVH), protocol metrics, gamma and many more customizable tools.

Shedding light on complexity

By allowing 3D verification of plan dose distribution, says Iori, the Monte Carlo module comes into its own for more challenging treatment planning scenarios. Examples include advanced head-and-neck cancers and late-stage prostate and rectal disease – indications that often require larger, heavily modulated treatment fields that are problematic in terms of conventional point-dose QA checks.

Another Monte Carlo clinical use-case arises when treating small or complex tumours surrounded by heterogeneities (in the lung or abdominal cavity, for example, or adjacent to bone or metal implants). Planning techniques with steep dose gradients are especially relevant for lung stereotactic treatments, where the tumour targets are generally located near the chest wall, heart and normal blood vessels. Here RadCalc’s Monte Carlo module can perform accurate and realistic dose verification of TPS plans: because the machine models implemented in BEAMnrc include every physical component, the tool establishes confidence in such challenging cases.

The QA workflow in this scenario is all about streamlining: the physicist simply exports the treatment plan via DICOM-RT and RadCalc automatically verifies it using a Monte Carlo algorithm, generating results in minutes. If the treatment plan doesn’t pass the preset criteria, RadCalc prompts the user to investigate what’s going on using a suite of dose-analysis tools before determining the course of action (further preclinical QA, for example, or adding in vivo dosimetry checks on the treatment machine).

“With RadCalc’s Monte Carlo tools we can achieve the highest quality of dosimetric verification,” claims Iori, “and not only for simple treatment plans but complex planning scenarios as well, or in the case of adaptive radiotherapy treatments. Right now, we use the Monte Carlo calculations for around 30% of our external-beam radiotherapy patients – chiefly for head-and-neck, thorax and pelvis disease indications that exhibit the highest levels of tissue heterogeneity. Over time, we will extend the use of the Monte Carlo module to all treatment plans.”

The clinical end-game? Better targeting accuracy and dose distribution accuracy – and, ultimately, enhanced treatment outcomes for AUSL-IRCCS cancer patients.

Calculation, simulation, validation

The roll-out of the RadCalc 3D Monte Carlo module at AUSL-IRCCS was preceded by a period of preclinical “tuning and validation”, with the optimized software subsequently used to dosimetrically verify complex treatment plans – cases where the calculated dose distributions can be inaccurate owing to limitations of the TPS dose calculation algorithm.

During the commissioning phase, the AUSL-IRCCS medical physics team, working with colleagues from the University of Bologna in Italy, built Monte Carlo models on the back of specific commissioning measurements. To set up the Monte Carlo module, the team loaded a file containing dosimetric data for different beam energies (6X, 6FFF, 10X, 10FFF) into RadCalc and prepopulated it with values obtained directly from phantom measurements (using defined protocols for percentage depth dose and off-axis ratio).

Another key step involved optimization of the Additional Radiation to Light Field Offset (ARLF) tuning parameter, with Monte Carlo simulations performed on a uniform phantom for four different ARLF values (for each beam energy considered). The goal here was to achieve the best agreement between the Monte Carlo simulations and the volumetric patient-specific QA measurements, with phantom dose distributions and calculated results compared in terms of their 2 mm/2% gamma pass rate.
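
For readers unfamiliar with the gamma pass-rate metric mentioned above, the short Python sketch below illustrates the standard gamma-index comparison (in the spirit of Low et al.) in one dimension with 2 mm/2% criteria. It is a minimal, generic illustration with made-up dose profiles – not RadCalc’s actual implementation.

```python
# Minimal 1D global gamma-index sketch (2 mm / 2% criteria). This is the
# textbook comparison metric, not RadCalc's implementation; the dose
# profiles below are made-up examples.
import numpy as np

def gamma_1d(positions_mm, dose_ref, dose_eval, dta_mm=2.0, dd_frac=0.02):
    """Gamma value at each reference point, using global dose normalization."""
    dd_abs = dd_frac * dose_ref.max()              # 2% of the global maximum dose
    gamma = np.empty_like(dose_ref)
    for i, (x_ref, d_ref) in enumerate(zip(positions_mm, dose_ref)):
        dist_term = ((positions_mm - x_ref) / dta_mm) ** 2
        dose_term = ((dose_eval - d_ref) / dd_abs) ** 2
        gamma[i] = np.sqrt(np.min(dist_term + dose_term))   # minimize over evaluated points
    return gamma

# Toy example: a reference profile versus a slightly shifted evaluated profile
x = np.linspace(-30.0, 30.0, 121)                  # positions in mm
ref = 2.0 * np.exp(-(x / 15.0) ** 2)               # reference dose (Gy)
ev = 2.0 * np.exp(-((x - 0.5) / 15.2) ** 2)        # evaluated dose (Gy)
g = gamma_1d(x, ref, ev)
print(f"gamma pass rate (gamma <= 1): {100.0 * np.mean(g <= 1.0):.1f}%")
```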

“Our preclinical study showed good agreement between RadCalc Monte Carlo simulations and dose measurements, enhancing the dosimetric performance of the secondary-check tool used to verify our treatment plans,” explains Iori. “Following validation, RadCalc’s Monte Carlo module now enables us to better estimate the plan doses in lung-cancer patients and to detect possible inaccuracies due to tissue heterogeneity, which are not quantifiable using homogeneous phantoms.”

Further reading

Giadi Sceni et al. 2023 Tuning and validation of the new RadCalc 3D Monte Carlo-based pre-treatment plan verification tool Journal of Mechanics in Medicine and Biology

The post RadCalc’s Monte Carlo capability streamlines and automates 3D dose-volume verification appeared first on Physics World.

]]>
Analysis Monte Carlo 3D dose-check calculations help clinical physicists maintain confidence in their patient-specific QA for hard-to-treat disease indications https://physicsworld.com/wp-content/uploads/2023/06/LAP-team.jpg
Is quantum gravity slowing down neutrinos? https://physicsworld.com/a/is-quantum-gravity-slowing-down-neutrinos/ Wed, 21 Jun 2023 12:06:03 +0000 https://physicsworld.com/?p=108532 Particles arrive late after travelling billions of light-years

The post Is quantum gravity slowing down neutrinos? appeared first on Physics World.

]]>
Ultra-relativistic neutrinos blasted into space during gamma-ray bursts are slowed down by the effects of quantum gravity. That is the conclusion of physicists in Italy, Poland and Norway, who have spotted seven neutrinos that arrived on Earth later than expected, compared to their companion gamma rays.

Quantum theory does a fantastic job of describing interactions that involve three out of the four known forces of nature. However, there is no theory today that adequately describes the quantum nature of gravity. While theories of quantum gravity have been proposed, they tend to make predictions that cannot currently be tested by experiment or observation.

One prediction that physicists have a chance of confirming today is that particles moving very close to the speed of light will be slowed very slightly by a quantum gravitational effect. The higher the particle’s energy, the stronger the slowing. While the effect is extremely small, if the particles are created in an astrophysical event billions of light-years away, the cumulative result would be a delay that could be measured when the particles arrive on Earth.
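
To get a feel for the scale involved, here is a rough back-of-envelope sketch in Python of the simplest phenomenological picture, in which the fractional speed reduction grows linearly with energy and is suppressed by a quantum-gravity energy scale. The source distance and the use of the Planck energy as the reference scale are illustrative assumptions, not values from the study, and the published analysis properly integrates over cosmological redshift.

```python
# Rough estimate of an energy-dependent arrival delay in a linear-in-energy
# model: delta_t ~ (E / E_QG) * (D / c). The source distance and the
# Planck-scale reference for E_QG are illustrative assumptions only; the
# published analysis integrates over cosmological redshift.
E_PLANCK_GEV = 1.22e19      # Planck energy in GeV
C = 2.998e8                 # speed of light in m/s
GLY = 1e9 * 9.461e15        # one billion light-years in metres

def delay_s(energy_GeV, distance_Gly, E_QG_GeV=E_PLANCK_GEV):
    """Arrival delay (seconds) relative to a low-energy photon emitted at the same time."""
    return (energy_GeV / E_QG_GeV) * (distance_Gly * GLY / C)

def implied_E_QG(energy_GeV, distance_Gly, observed_delay_s):
    """Quantum-gravity energy scale (GeV) implied by an observed delay."""
    return energy_GeV * (distance_Gly * GLY / C) / observed_delay_s

# Example: a 500 TeV neutrino from a source assumed to be 3 billion light-years away
print(f"delay for a Planck-scale E_QG: {delay_s(5e5, 3.0):.0f} s")
# ...and the E_QG that a three-day delay over that distance would imply
print(f"E_QG implied by a 3-day delay: {implied_E_QG(5e5, 3.0, 3 * 86400):.2e} GeV")
```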

Late neutrinos

Now, a team led by Giovanni Amelino-Camelia of the University of Naples have looked for this effect in neutrino data collected by the IceCube Neutrino Observatory. Located at the South Pole, the observatory detects neutrinos when they occasionally interact within a cubic kilometre of ice.

The researchers identified seven neutrinos that have high probabilities of coming from gamma-ray bursts. These are highly energetic events that are produced either by the supernovae of the most massive stars, or by colliding neutron stars. Gamma rays from these specific bursts were also detected by NASA’s Fermi Gamma-ray Space Telescope.

Tantalizingly, these neutrinos appear to have arrived at Earth up to three days after the gamma rays were detected, suggesting that something had delayed them. The three-day delay is expected of particles with energies up to 500 TeV. In comparison, neutrinos with higher energies up to 2 PeV would require a 12-day delay window, which is too long to positively identify them with a specific gamma-ray burst.

Amelino-Camelia explains: “The particles get this extra contribution to their speed, which is negative, and it grows in magnitude as their energy grows”.

Location, location

However, not everyone is convinced. Teppei Katori of King’s College London, who was not involved in the work, points out that the seven candidate neutrinos are all of the “cascade” type. In a cascade event, a neutrino enters the IceCube Observatory and deposits all of its energy into a small, spherical region, making it difficult to determine the direction that the neutrino came from. This is unlike a “track event”, which produces a signal that points back to the neutrino’s point of origin in the cosmos.

“We don’t know where these neutrinos are coming from exactly,” explains Katori.

Indeed, it is not even clear that gamma-ray bursts do produce a significant number of neutrinos. Katori cites earlier work describing a search for neutrinos from gamma-ray bursts that failed to find a correlation between the two. However, he accepts that this search did not take into account any delays caused by quantum-gravity effects.

In 2022 Katori – who is part of the IceCube collaboration – was a science lead on another experiment looking for quantum gravity effects in neutrinos. Specifically, this study looked at how quantum gravity could affect neutrino oscillations.

Neutrinos come in three different “flavours” – electron, muon and tau – and the particles can oscillate from one flavour to another. Although the experiment found no evidence for quantum gravity affecting neutrino oscillations, it was the first experiment to probe these oscillations at a level where quantum gravity should be relevant. As such, the experiment was able to impose constraints on quantum-gravity models that predict variations in the oscillations.

Implications for cosmology

If quantum gravity is indeed slowing down neutrinos, both Amelino-Camelia and Katori agree that the observation would be a major step forward in understanding quantum gravity and its role in the evolution of the universe.

However, Amelino-Camelia points out, “If the effect is only there for neutrinos and other half-integer spin particles, then the implications for cosmology might be minor.”

For Katori, the most significant outcome of confirming a delay is that physicists could use it to calculate the size of the quantum gravity effect. This would allow physicists to evaluate competing models of quantum gravity – and to take the next step and design experiments and observatories to measure the effect more precisely.

“There is still a gap between quantum-gravity-motivated phenomenology models and quantum-gravity theories,” says Katori. “I think filling this gap is challenging [but] finding any quantum-gravity-motivated effect is the first step.”

The search for neutrinos from gamma-ray bursts will benefit from the construction of IceCube-Gen2, which will increase the size of the detector volume to eight cubic kilometres of ice, improving the ability of the observatory to pinpoint the origins of neutrinos.

The research is described in Nature Astronomy.

The post Is quantum gravity slowing down neutrinos? appeared first on Physics World.

]]>
Research update Particles arrive late after travelling billions of light-years https://physicsworld.com/wp-content/uploads/2023/06/Fermi-Gamma-ray-Space-Telescope.jpg newsletter
How do things work? It all comes down to forces https://physicsworld.com/a/how-do-things-work-it-all-comes-down-to-forces/ Wed, 21 Jun 2023 10:00:57 +0000 https://physicsworld.com/?p=108156 Karel Green reviews Force: What It Means to Push and Pull, Slip and Grip, Start and Stop by Henry Petroski

The post How do things work? It all comes down to forces appeared first on Physics World.

]]>
  • The author of the book, Henry Petroski, died shortly after this review was written.
From its title, you might expect a book called Force to simply explain how forces work. Instead of a purely educational, fact-driven narrative, however, author Henry Petroski – an engineer and popular-science writer – uses a mix of personal essays, musings and biography to convey how scientific concepts govern and influence everyday life. This is not to say that you won’t learn a fair amount of engineering from Force, or that you won’t come away with a better understanding of how, on a fundamental level, “things work”. But you will also be given the tools to recognize how these things are connected, literally and metaphorically.

    Force begins by showing how humans have always had the ability to recognize patterns, breaking down complex structures into simpler, recognizable components and using this knowledge to satisfy their curiosity and understand the mechanics of things. Rather than just providing a material understanding of forces, the book delves into how people perceive the world around them, and how these feelings frequently affect our world as much as any physical “pushing or pulling” force does.

    In the prologue to Force, Petroski observes that “research and development takes place in a grander context than one of hard physical force; it is also subject to the softer forces imposed by ethics, morals and judgment – none of which is easily and unambiguously defined by laws and limits”. He illustrates this by discussing the medical innovations that brought us from the prehistoric era, when humans had only their five senses to help them avoid harm, to the present day, when technologies such as stethoscopes allow doctors to monitor hearts, lungs and stomachs much more thoroughly than simple listening would permit.

He also mentions how the COVID-19 pandemic spurred innovation despite the distress it caused. It led, for example, to vaccines being developed and rolled out in record time, as well as to improvements in mask technology. Although not even close to perfect, today’s masks have undergone major upgrades compared to those worn by, say, plague doctors in medieval Europe.

    Petroski complements this sentiment by comparing literal examples of pandemic-driven innovation with more subtle advances made simultaneously in society. During a time when mundane affectionate interactions like embracing, shaking hands and even standing next to someone were heavily discouraged to prevent the spread of the virus, people adapted as best they could, replacing higher-risk gestures with fist and elbow bumps, hip checks and toe taps.

    For physicists, one of the most enjoyable parts of Force may be Petroski’s discussion of the work of Michael Faraday, who many (including myself) will know primarily for his discoveries in electromagnetism. Faraday was, however, also well practised and gifted in explaining complex concepts to the public in an easy-to-understand way. As such, people came from far and wide to listen to him speak and to marvel at the practical props he regularly used to assist him – including the rubber balloon, which Faraday invented for his experiments on hydrogen gas.

    Much of the book continues in this manner: a case study of a known engineer or piece of engineering, an explanation of how something works on an accessible scientific level, and a discussion that illuminates the social and societal changes that accompanied or indeed motivated it. Each chapter is named after either a fundamental force, a more general derivative force or a feeling you’ll recognize.

    In the electromagnetism chapter, for example, Petroski explores how telephones produce and relay sounds. After using older, simpler types of phones to describe the fundamentals of how they work, he moves into the more complex technology of the modern age. Eventually, he winds up detailing the inverse square law that governs both gravity and the electromagnetic force that permeates our planet and the wider universe, all while weaving in stories of his and his children’s childhoods.

Other chapters continue in the same vein, happily building up our idea of what forces can be. As well as the more accessible fundamental forces of electromagnetism and gravity (which gets its own chapter), Force covers everyday sensations such as squeezing a football and feeling inertia on mass transit, and even global effects like winds, hurricanes and earthquakes. It is also chock-full of Petroski’s personal experiences, which gives a biographical feel to the work and makes the explanations very clear.

    My favourite example of this blend of the personal and the scientific is Petroski’s account of getting roof tiles replaced on his house. From this mundane and perhaps even annoying task, he creates both a compelling explanation of how gravity works and an opportunity to reveal some of the quirks of his personality. Whereas most of us would try to be anywhere else to avoid the noise, Petroski remained in his attic office, recognizing and enjoying the individual rhythms of the roofers as they hammered in the nails.

In the book’s epilogue, Petroski returns to his earlier medical theme and reflects on the epidemic of bubonic plague that struck London in 1665. During this time, Isaac Newton, like many others who could afford it, sheltered from the disease in the countryside. It was there that he made great strides in physics and (at least in the Western telling of science history) developed his theory of gravity after being inspired by the fall of an apple. Though the more recent coronavirus pandemic has been horrendous, Petroski reflects that it also offered unexpected opportunities for some lucky people to change their lives, make progress on their work or become more prepared for the future.

    Overall, Force is a great book that roots you in reality and gives solid explanations of some of the world’s most complex phenomena. The accompanying biographical feel is another positive too, as Petroski’s personal experiences from his life and from stories he’s read act as a base for the learning within the book. This gives his narrative a uniqueness and flair when it might otherwise have slipped into a very dry style. Experts and non-experts alike can gain something from it, whether it’s finally understanding how it is that speakers can replicate almost any sound, or simply remembering the countless ways forces impact our day-to-day lives.

    • 2022 Yale University Press $30.00hb 328pp

    The post How do things work? It all comes down to forces appeared first on Physics World.

    ]]>
    Opinion and reviews Karel Green reviews Force: What It Means to Push and Pull, Slip and Grip, Start and Stop by Henry Petroski https://physicsworld.com/wp-content/uploads/2023/06/2023-06-Green-roof-tiles-installation-823327454-iStock_sturti.jpg newsletter
    Entangled atoms enhance tomography technique https://physicsworld.com/a/entangled-atoms-enhance-tomography-technique/ Wed, 21 Jun 2023 08:54:37 +0000 https://physicsworld.com/?p=108404 Quantum-boosted approach almost doubles the sensitivity of magnetic induction tomography

    The post Entangled atoms enhance tomography technique appeared first on Physics World.

    ]]>
    Atomic sensor is made of spins whose noise is only limited by intrinsic quantum fluctuations

    Researchers at the University of Copenhagen in Denmark have found a way to boost the sensitivity of a routine sensing technique known as magnetic induction tomography beyond the standard quantum limit. The improved method could find application in bio- and medical sensing.

    In magnetic induction tomography, a magnetic field generated by a current-carrying coil produces minute eddy currents in the sample being analysed. These currents, in turn, alter the magnetic field, which is detected using the collective spin (or magnetization) of an atomic magnetometer. The properties of the detected field yield information about the electrical conductivity and magnetic permeability of the sample.

The technique is used in geophysical surveys, in the non-destructive testing of metallic objects and in medical imaging. But its sensitivity is constrained by the so-called quantum limit – that is, by the quantum fluctuations (uncertainty) of the sensor’s collective spin.

“Indeed, quantum mechanics and the uncertainty principle dictate that the spin direction cannot be determined with arbitrary precision,” explains Eugene Polzik, who led this new study. “Roughly speaking, in a sensor that contains N atomic spins, the direction of the collective spin cannot be determined with an angular certainty better than 1/√N, and it is this that we call the standard quantum limit (SQL).”

    Reducing uncertainty

Polzik and colleagues showed that this uncertainty can be reduced by using an atomic magnetometer containing atoms whose spins are entangled to generate a so-called spin-squeezed state. The angular uncertainty of one of the projections of this state is below the SQL. The researchers arranged the magnetic induction tomography protocol such that the useful signal is contained exactly in the projection with the reduced uncertainty. The result is a sensitivity almost twice that of conventional atomic magnetometers operating at the SQL.
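
As a minimal numerical illustration of the scaling involved, the Python sketch below compares the SQL angular uncertainty 1/√N with a squeezed measurement whose uncertainty is halved, corresponding to the roughly two-fold sensitivity gain reported. The atom number and squeezing factor are arbitrary example values, not the parameters of the Copenhagen experiment.

```python
# Standard quantum limit (SQL) for reading out a collective spin of N atoms,
# compared with a spin-squeezed measurement. N and the squeezing factor are
# arbitrary example values, not the Copenhagen group's parameters.
import math

def sql_angle_rad(n_atoms):
    """Angular uncertainty of the collective spin direction at the SQL, ~1/sqrt(N)."""
    return 1.0 / math.sqrt(n_atoms)

n_atoms = 1e9              # example atom number for a vapour-cell sensor
squeezing_factor = 0.5     # uncertainty halved, i.e. roughly a 2x sensitivity gain

sql = sql_angle_rad(n_atoms)
print(f"SQL angular uncertainty:      {sql:.1e} rad")
print(f"squeezed angular uncertainty: {squeezing_factor * sql:.1e} rad")
```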

    “Conventional magnetic induction tomography techniques use a coil to detect the signal,” explains Polzik. “Such coils have intrinsic thermal noise, as well as picked-up environmental noise, which limits sensitivity. We have used an atomic sensor made of spins whose noise is only limited by intrinsic quantum fluctuations. This allowed us to substantially improve the sensitivity compared to conventional approaches.”

    The researchers say they now plan to use their method in bio- and medical sensing, and in particular hope to develop it further for imaging internal organs, including the heart and even the brain.

    “We also plan to continue working on this quantum-enhanced magnetic induction tomography with the goal of further improving its sensitivity and spatial resolution,” Polzik tells Physics World.

    The research is detailed in Physical Review Letters.

    The post Entangled atoms enhance tomography technique appeared first on Physics World.

    ]]>
    Research update Quantum-boosted approach almost doubles the sensitivity of magnetic induction tomography https://physicsworld.com/wp-content/uploads/2023/06/atomic-sensor-featured.jpg
    Frozen fallout: radioactive dust from accidents and weapons testing accumulates on glaciers https://physicsworld.com/a/frozen-fallout-radioactive-dust-from-accidents-and-weapons-testing-accumulates-on-glaciers/ Tue, 20 Jun 2023 15:17:00 +0000 https://physicsworld.com/?p=108442 Scientists are calling for more research into the impacts of fallout radionuclides entering ecosystems as glaciers melt

    The post Frozen fallout: radioactive dust from accidents and weapons testing accumulates on glaciers appeared first on Physics World.

    ]]>

    Glacier surfaces in certain parts of the world contain concerning amounts of toxic radioactive materials, a result of weapons testing and nuclear accidents such as the Chernobyl disaster in 1986. Fallout radionuclides accumulate within cryoconite – a granular sediment found in holes on glacier surfaces – and there is a risk of this material entering local ecosystems as glaciers melt due to climate change. Glaciologists and ecologists say this poses urgent questions. What regions are at highest risk? How diluted is the nuclear material entering proglacial zones? What impact might that have on organisms?

Find out more about this emerging concern by watching this short video, which includes interviews with glaciologist Caroline Clason and environmental scientist Philip Owens. For a more detailed background to the issue, take a look at the recent Physics World feature ‘Trapped in ice: the surprisingly high levels of artificial radioactive isotopes found in glaciers’.

    The post Frozen fallout: radioactive dust from accidents and weapons testing accumulates on glaciers appeared first on Physics World.

    ]]>
    Video Scientists are calling for more research into the impacts of fallout radionuclides entering ecosystems as glaciers melt https://physicsworld.com/wp-content/uploads/2023/06/Radioactive-glacier-home-image.jpg
    Moore’s law: further progress will push hard on the boundaries of physics and economics https://physicsworld.com/a/moores-law-further-progress-will-push-hard-on-the-boundaries-of-physics-and-economics/ Tue, 20 Jun 2023 12:00:18 +0000 https://physicsworld.com/?p=108319 James McKenzie pays tribute to the foresight of Gordon Moore, the founder of Intel who died earlier this year

    The post Moore’s law: further progress will push hard on the boundaries of physics and economics appeared first on Physics World.

    ]]>
    When the Taiwan Semiconductor Manufacturing Company (TSMC) announced last year that it was planning to build a new factory to produce integrated circuits, it wasn’t just the eye-watering $33bn price tag that caught my eye. What also struck me is that the plant, which is set to open in 2025 in the city of Hsinchu, will make the world’s first “2 nanometre” chips. Smaller, faster and up to 30% more efficient than any microchip that has come before, TSMC’s chips will be sold to the likes of Apple – the company’s biggest customer – powering everything from smartphones to laptops.

But our ability to build such tiny, powerful chips shouldn’t surprise us. After all, the engineer Gordon Moore – who died on 24 March this year, aged 94 – famously predicted back in 1965 that the number of transistors we can squeeze onto an integrated circuit ought to double every year. Writing for the magazine Electronics (38 114), Moore reckoned that by 1975 it should be possible to fit a quarter of a million components onto a single silicon chip with an area of one square inch (6.45 cm²).

    Gordon Moore was the co-founder of Intel Corporation

    Moore’s prediction, which he later said was simply a “wild extrapolation”, held true, although in 1975 he revised his forecast, predicting that chip densities would double every two years, rather than every year. What thereafter became known as “Moore’s law” proved amazingly accurate, as the ability to pack ever more transistors into a tiny space underpinned the almost non-stop growth of the consumer electronics industry. In truth, it was never an established scientific “law” but more a description of how things had developed in the past as well as a roadmap that the semiconductor industry imposed on itself, driving future development.

    Seeing into the future

    Basic physics says that as transistors get smaller, they can be run faster and require less power. Simple economics, meanwhile, dictates that as you pack more transistors onto a chip, each transistor becomes cheaper to make. “The cost per component,” Moore noted in his 1965 article, “is nearly inversely proportional to the number of components.” A research director at the US firm Fairchild Semiconductor at the time, Moore simply put the two notions together.

    In doing so, Moore proved to be a visionary who correctly foresaw the breath-taking pace at which semiconductor technology would grow. While the precise details of how we have shrunk transistors have changed over the years, many of Moore’s predictions about the rise of integrated circuits have come to pass. In his original article, he foresaw digital watches, home computers, smartphones (or what he called “personal portable communications equipment”), the ability to send multiple messages down phone lines, as well as automatic controls for cars.

    In an interview with IEEE Spectrum on the 50th anniversary of his 1965 article, Moore said he was surprised that his law had survived for so long. “I never would have anticipated anyone remembering it this far down the road,” he said. Its continuation was, for him, a tribute to the creativity of engineers in the semiconductor industry, who have time and again found new ways to shrink devices. “I could never see more than the next couple of [chip] generations, and after that it looked like [we’d] hit some kind of wall. But those walls keep receding.”

    Moore's law graph

However, in the same interview, Moore recognized that there are two basic physical obstacles that will eventually preclude any further miniaturization. As he recalled, the cosmologist Stephen Hawking once pointed out on a visit to Silicon Valley that nothing can travel faster than the speed of light, and that materials are, ultimately, made of atoms of a finite size. There are, in other words, speed and size limits to chips. “These are fundamentals I don’t see how we [will] ever get around,” Moore warned. “And in the next couple of generations, we’re right up against them.”

    So is the end of Moore’s law in sight?

    Gordon Moore: a brief history

    Born on 3 January 1929 in Pescadero, California, Gordon Earle Moore was a chemist by training, graduating in 1950 from the University of California, Berkeley. He then did a PhD, also in chemistry, at the California Institute of Technology followed by a postdoc at the Applied Physics Laboratory at Johns Hopkins University from 1953 to 1956. That year he left academia to work at the Shockley Semiconductor Laboratory (SSL), which had just been set up by the physicist William Shockley.

It was an exciting time for the nascent semiconductor industry. SSL was one of the first high-tech firms in Silicon Valley to work on semiconductor devices, and Shockley himself was awarded the 1956 Nobel Prize for Physics – along with Walter Brattain and John Bardeen – for the discovery of the transistor effect. Moore was part of a group of talented young scientists whom Shockley recruited to develop and produce new semiconductor devices.

    A photo from 1978 shows the co-founders of Intel Corporation

    However, Shockley was not an easy boss to work for, with an authoritarian management style. After a demand for him to be replaced was rebuffed, Moore and other colleagues quit in 1957. Later known as the “traitorous eight”, they immediately founded their own company – Fairchild Semiconductor – with Shockley calling their departure a “betrayal”. The firm was named after Sherman Fairchild, an experienced business executive who invested in the company. It was while working as research director at Fairchild in 1965 that Moore made his famous prediction later dubbed “Moore’s law”.

Fairchild Semiconductor soon grew into a leader in the semiconductor industry and was eventually bought by ON Semiconductor for $2.4bn in 2016. Operating as an incubator for new technology, it was directly or indirectly involved in the creation of dozens of corporations, including Intel and AMD. According to an analysis by Endeavor Insight in 2014, a total of 92 publicly listed firms, with a combined market value of more than $2.1 trillion, were spawned directly or indirectly by Fairchild. Endeavor reckoned that a further 2000 firms could be traced back to Fairchild too.

    Perhaps the most famous of all is Intel, which was set up in 1968 by Moore and the physicist Bob Noyce, a fellow co-founder of Fairchild. Originally known by their initials as NM Electronics, it was soon renamed Intel, and Moore went on to hold various senior roles, including chairman and chief executive. Intel, which pioneered new technologies for computer memory, integrated circuits and microprocessor design, had revenues of $63bn in 2022 and employs more than 125,000 staff.

    At his death, Moore was reportedly worth $7bn but he was a prolific philanthropist, setting up the Gordon and Betty Moore Foundation in 2000 with a $5bn gift to support educational and environmental projects. The following year he gave Caltech $600m, which was then the biggest gift to a higher-education institution. Moore also received many honours, including the Presidential Medal of Freedom (America’s highest civilian award) from George Bush in 2002.

    Smaller, faster, better

    At the heart of any computer is the central processing unit or CPU, which consists of individual transistors linked together to form a single integrated circuit that carries out basic arithmetical operations. The world’s first single-chip microprocessor was the four-bit CPU released by Intel in 1971. Known as the Intel 4004, it had 2300 transistors, each about 10 µm in size and sold for $60. But as Moore predicted, the number of transistors on integrated circuits would quickly rise.

    By the early 1980s, transistors were down to 1 µm in size and companies were packing up to 100,000 transistors onto a chip. The number of transistors per chip reached a million by the 1990s, 10 million by the early 2000s and 100 million a decade later. The latest CPUs have over 10 billion transistors using what’s known as a “5 nm process”, with Intel managing to pack over 100 million transistors on each square millimetre by 2019. (These days the process name is essentially a marketing term: the 2 nm of TSMC’s chips, for example, doesn’t actually refer to any specific physical feature on the devices.)
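
A quick way to see how closely this trajectory tracks Moore’s revised prediction is to work backwards from the figures quoted above – roughly 2300 transistors on the Intel 4004 in 1971 versus more than 10 billion on today’s CPUs. The Python sketch below does that arithmetic and also illustrates Moore’s cost-per-component point; the 50-year span and the fixed $60 chip price (the 4004’s launch price) are simplifying assumptions for illustration only.

```python
# Doubling cadence implied by the transistor counts quoted in the article
# (~2300 on the Intel 4004 in 1971 versus >10 billion on recent CPUs, taken
# here as a 50-year span), plus Moore's observation that, for a roughly fixed
# chip cost, the cost per component falls as 1/N. The fixed $60 price is the
# 4004's launch price, held constant purely for illustration.
import math

n_1971, n_now, years = 2300, 10e9, 50
doublings = math.log2(n_now / n_1971)

print(f"doublings since 1971: {doublings:.1f}")
print(f"implied doubling period: {years / doublings:.2f} years")

chip_cost_usd = 60.0
for n in (n_1971, n_now):
    print(f"N = {n:,.0f} transistors -> ~${chip_cost_usd / n:.2e} per transistor")
```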

    Modern integrated circuits are created by taking a substrate of silicon or some other semiconductor, and then gradually building the circuit up layer by layer using various “lithographic” techniques. There is a huge variety of such methods, but all generally involve using either light or chemical reactions. What’s amazing is not just the incredible progress achieved in making chips, but also the sheer levels of cleanliness that are required in today’s semiconductor fabrication plants.

Back in 1971, the Intel 4004 chip was made using a “10 µm process”, meaning that the smallest features that could be patterned on the chip were around 10 µm across. To achieve such small dimensions, Intel pioneered the use of the “optical mask” – essentially a large, transparent glass plate, parts of which were covered with a pattern of light-absorbing chrome. Blue light was shone through the mask, which was held above the surface of the wafer.

Intel’s clever thinking was to coat the wafer with a light-sensitive organic photoresist layer, which reacts where light lands on it, while unexposed areas stay the same. Using a solvent to dissolve away the parts that had been exposed to light, the original pattern on the mask could then be transferred to the silicon, albeit now much smaller (see image). Several mask steps were used to form the devices needed in the integrated circuit.

    Diagram to explain the photolithography process

Over the years, increasingly accurate “projection lenses” had to be introduced between the mask and the wafer to make circuits smaller. In the 1980s, for example, “reduction steppers” were developed to make 2 µm chips. These devices exposed the wafer one small area at a time, projecting a reduced image of the mask pattern at each step. Steppers continued to dominate lithographic patterning throughout the 1990s, as minimum feature sizes reached the 250 nm level.

    Ultimately, however, the smallest feature you can print is limited by two factors: the resolving capability of the photoresist and the minimum size of the image that can be projected onto the wafer. That minimum size – also known as the Rayleigh criterion or the “diffraction limit” – is given by 0.61 λ/NA, where λ is the wavelength of light and NA is the numerical aperture of the projection lens. In other words, it’s impossible to project the image of a feature less than roughly half the wavelength of the light being used.
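
Plugging numbers into that 0.61 λ/NA expression shows why each jump in wavelength and numerical aperture mattered. The short Python sketch below evaluates the limit for the light sources and apertures mentioned in this article; note that pairing a particular wavelength with a particular NA here (and the 0.33 NA assumed for an EUV tool) is illustrative rather than a description of any specific machine.

```python
# Diffraction-limited minimum feature size, 0.61 * wavelength / NA, evaluated
# for the wavelengths and numerical apertures mentioned in the article. The
# wavelength/NA pairings (and the 0.33 NA assumed for an EUV tool) are
# illustrative, not the specs of any particular machine.
def min_feature_nm(wavelength_nm, numerical_aperture):
    return 0.61 * wavelength_nm / numerical_aperture

cases = [
    ("blue light, early projection optics", 436.0, 0.16),
    ("deep UV (ArF excimer laser)",          193.0, 0.93),
    ("EUV (assumed NA)",                      13.5, 0.33),
]
for label, wavelength, na in cases:
    print(f"{label}: ~{min_feature_nm(wavelength, na):.0f} nm minimum printable feature")
```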

    To get to ever-smaller sizes, lithography systems over the years shifted to ever-shorter wavelengths, progressing from blue (436 nm) to ultraviolet (365 nm) and then to deep ultraviolet light (248 nm), with the latest systems using 193 nm light from an argon fluorine excimer laser. Moore’s law has also been sustained by improvements in numerical apertures, which have been pushed from 0.16 in early systems to amazingly high values of up to 0.93. Huge advances in nano-positioning technology to align the various masks to a suitable accuracy have been vital too.

    Down to 2 nm

    But how do we get to the 2 nm process as used at plants like that of TSMC in Taiwan? That’s well below the diffraction limit even for light with an ultra-short wavelength of 193 nm. Most chip manufacturers have turned to systems developed by the Dutch multinational firm ASML. Using extreme ultraviolet (EUV) light with a wavelength of 13.5 nm, which is almost in the X-ray range, these devices are incredible feats of engineering that push hard on the boundaries of the laws of physics.

The EUV light is created by blasting molten drops of tin with a laser in a vacuum and then bouncing it off mirrors made by Zeiss, which ASML says are the flattest surfaces in the world. Costing more than $150m a pop, each ASML system is huge, having to be shipped to customers in 40 massive freight containers, three cargo planes and more than 20 trucks. Despite the price, the company has so far sold more than 140 of these EUV systems. But as the only supplier, ASML is in fact a bottleneck for the expansion of the semiconductor industry.

    The ASML cleanroom

    According to MIT Technology Review, the first generations of chips with tiny EUV features are already being used by Google and Amazon, improving language translation, search-engine results, photo recognition and AI. The EUV revolution is also reaching everyday consumers, with ASML’s machines being used to make chips in smartphones from the likes of Apple and Samsung.

Also helping us to keep Moore’s law going are some amazing advances in materials science and transistor design. Take, for example, “fin field-effect transistors” or FinFETs, which use relatively tall, fin-like structures on the surface of the silicon base. FinFETs were among the first of a new generation of 3D transistors that can be stacked one on top of the other. Companies are already producing devices with 176 mask layers, but 600 layers and above are on the semiconductor industry’s roadmap to deliver future generations of devices.

    The latest 2 nm processes use even more advanced FET transistors, known as gate all around (GAA) devices. The US firm IBM has already used them to create chips with a density of 333 million transistors per square millimetre, with the company claiming it can fit 50 billion transistors “onto a chip the size of a fingernail”. IBM says such chips could quadruple smartphone battery life, cut the cost of data centres and make laptops run faster.

    Working at the limit

    Essentially what is happening is that every possible lever is being pulled to keep Moore’s law on track. ASML, for example, is working towards the 1 nm chip scale using EUV lithography systems, while we can expect to see further improvements in lithography, with the resolution usually halving every six years. It is worth the effort: processors made with TSMC’s 2 nm silicon chips, for example, will run up to 15% faster than with 3 nm devices, while consuming about 25% less energy.

    A 2 nm wafer produced at IBM

    We are certainly not yet done with Moore’s law. Although 2 nm is barely the width of 10 silicon atoms, remember the transistors in 2 nm chips aren’t actually that small; the distance from one gate to another is nearer to 50 nm (the so-called “gate pitch”) so there is a bit more room to play with. We can also wring more out of existing chips by writing software code that is more efficient.

    But it is hard to see what innovations will come next to keep Moore’s law going. In 2016 researchers in Germany, Japan and the US made a transistor consisting of one molecule of phthalocyanine surrounded by just 12 indium atoms, which – with a gate size of 0.167 nm – would be “the absolute hard limit for Moore’s law” (Nat. Phys. 11 640). There is also a move to different kinds of chips designed for specific applications, such as AI, which use graphical processing units (GPUs) rather than CPUs and so can calculate more effectively in parallel.

In the end, how far we can stretch Moore’s law is likely to be a matter of pure economics. With TSMC’s newest factory costing $33bn – far more than the $15–20bn of 5 nm plants – sustaining Moore’s law is a game of very high stakes. In this rarefied atmosphere, only a handful of players – IBM, Intel, Samsung and TSMC – are capable of developing next-generation semiconductor chip technology. They certainly haven’t given up on Moore’s law, but further progress is going to be very hard to sustain.

    The post Moore’s law: further progress will push hard on the boundaries of physics and economics appeared first on Physics World.

    ]]>
    Feature James McKenzie pays tribute to the foresight of Gordon Moore, the founder of Intel who died earlier this year https://physicsworld.com/wp-content/uploads/2023/06/2023-06-McKenzie-CPU-shutterstock_2203128699_jiang-jie-feng-LIST.jpg newsletter
    Paintable bioactive ink heals wounds of any shape or size https://physicsworld.com/a/paintable-bioactive-ink-heals-wounds-of-any-shape-or-size/ Tue, 20 Jun 2023 08:45:49 +0000 https://physicsworld.com/?p=108489 A wound-healing ink that can be 3D printed directly into injuries aims to accelerate the body’s natural healing process

    The post Paintable bioactive ink heals wounds of any shape or size appeared first on Physics World.

    ]]>
    A wound-healing ink can be 3D printed directly into injuries

    Repair of chronic wounds, caused by trauma, surgery or diabetes, for example, can be challenging. When the skin is cut or torn, the body works to heal itself, via a complex process involving blood clotting, elimination of any bacterial invaders, regrowth of damaged blood vessels and tissue remodelling.

Techniques are available that can help heal wounds, such as applying bandages or stitches to stop bleeding, or using antibiotics to prevent infection. But now, researchers in China have developed a wound-healing ink – called “portable bioactive ink for tissue healing”, or PAINT – that aims to actually accelerate the body’s natural healing process. The ink, described in ACS Applied Materials & Interfaces, is made from extracellular vesicles (EVs) embedded in hydrogel and can be painted directly into wounds of any shape using a 3D-printing pen.

    EVs secreted from white blood cells such as macrophages play an important role in promoting blood vessel formation and reducing inflammation during healing. To create their PAINT platform, the research team – headed up by Dan Li from Nanjing University, and Xianguang Ding and Lianhui Wang from Nanjing University of Posts and Telecommunications – mixed bioactive EVs derived from M2 macrophages (the type associated with tissue repair) with biocompatible sodium alginate hydrogels. Within minutes, this mixture forms a sturdy EVM2-gel ink that can be applied to wounds in situ.

    The researchers first tested the effect of exposing human endothelial cells to various concentrations (0, 100, 200 and 300 µg/ml) of EVM2. In particular, they examined the impact on angiogenesis (the formation of new blood vessels), an essential factor in wound healing.

    Cells incubated with EVM2 formed more capillary tubes than untreated controls, with angiogenesis increasing with EVM2 concentration and over time. Optical microscopy showed that the total blood vessel length, vessel percentage area and total number of junctions all increased significantly after 6 h incubation, demonstrating that EVM2 can promote angiogenesis in vitro.

Next, the team incubated macrophages with the three concentrations of EVM2. Macrophages are a vital part of the immune system: they respond to and resist the invasion of foreign substances and play a key role in the repair and regeneration of injured tissues. Polarization of macrophages from the M1 (pro-inflammatory) to the M2 (pro-healing) phenotype is an effective approach for promoting wound healing. Compared with the control group, the ratio of M2 to M1 macrophages significantly increased in the EVM2 groups, indicating that EVM2 promoted polarization to the M2 phenotype in a concentration-dependent manner.

    Finally, the researchers assessed the therapeutic effect of PAINT in vivo on injured mice. For this, they developed a 3D-printing pen that internally mixes EVM2 with hydrogel precursors to form the EVM2-gel ink at the surface of the skin. This application process can be adapted to match the size and shape of any wound.

    Following the creation of a 9 mm circular wound on the back of mice, the animals were treated with EVM2-gel ink containing 100 or 300 µg/ml concentrations of EVM2, or no gel. Fluorescence imaging 6 and 12 h after administration showed that the ink continuously released EVM2 into the wound.

    Compared with the control group, mice treated with EVM2-gel experienced significantly accelerated wound healing. On day 12, wound areas were reduced to 21.22, 12.87 and 6.73% of the original size, for the control, 100 and 300 µg/ml groups respectively. Histology indicated that the EVM2-gel ink significantly increased epidermal thickness and promoted collagen fibre formation, essential factors in remodelling newly formed tissues.

    Consistent with the in vitro findings, the ink also significantly increased the average microvessel density in the wound area and polarized macrophages to the M2 phenotype. By day 14, scarring gradually decreased and wounds of mice in the 300 µg/ml group were almost completely healed.

    The researchers say that this work could help heal a wide variety of cuts quickly and easily, without the need for complex procedures. “Our study demonstrates the high potential of bioactive EV ink in biomedical applications,” they conclude. While the PAINT platform has not yet been tested in human subjects, ultimately, the team plans to work towards clinical translation, Ding tells Physics World.

    The post Paintable bioactive ink heals wounds of any shape or size appeared first on Physics World.

    ]]>
    Research update A wound-healing ink that can be 3D printed directly into injuries aims to accelerate the body’s natural healing process https://physicsworld.com/wp-content/uploads/2023/06/wound-healing-featured.jpg newsletter
    Fermilab faces protest over visitor restrictions https://physicsworld.com/a/fermilab-faces-protest-over-visitor-restrictions/ Mon, 19 Jun 2023 10:59:07 +0000 https://physicsworld.com/?p=108450 Visitors and even Fermilab employees have found themselves restricted to specific buildings

    The post Fermilab faces protest over visitor restrictions appeared first on Physics World.

    ]]>
Located on prairie land near Batavia, Illinois, the Fermi National Accelerator Laboratory has long been renowned as a place that’s friendly to users and visitors alike. It has welcomed not just visiting scientists and contractors but friends and relatives of employees and members of the local community too. Once inside, they could attend lectures, visit experiments, or drive or cycle around the lab’s 27 km² grounds to watch its herd of bison or fish in its ponds.

    But not anymore. To gain access, adult visitors to the campus, including scientists, must now present a certain type of driving licence or similar identification document approved by the US Department of Homeland Security. Once there, they are restricted to specific buildings and areas of the site. The restrictions have led to lab retirees and delivery people being denied entry to the campus, while some visiting scientists have had to give invited lectures via Zoom from local hotel rooms. Even Fermilab’s employees cannot enter certain labs.

“It became clear that the COVID-19 restrictions weren’t being rolled back,” says Fermilab postdoc Fernanda Psihas, who decided to take action. Working with longtime Fermilab user Rob Fine and with Justin Vasel, a US Air Force contractor who spent seven years connected with Fermilab, she has set up a “Reopen Fermilab” petition. Warning of “the consequences of these restrictions on the local community and scientific process”, it has so far been signed by over 2600 scientists and community members. The petition asks for the new access policies to be overturned, saying they harm Fermilab’s reputation and threaten its science and education programmes.

    Maintaining openness

    Fermilab officials see the issue as more complicated than reversing restrictions introduced because of the COVID-19 pandemic. Not only is the lab undergoing a transformation that includes the construction of the Deep Underground Neutrino Experiment but the Department of Energy (DOE) has recently tightened safety regulations for its 17 national laboratories. “Fermilab has always been an open and welcome institution,” lab director Lia Merminga told Physics World. “We’re striving to maintain the openness while complying with the new DOE regulations.”

    Merminga, who joined Fermilab as director in April last year, notes that the public can still walk in, bike, watch the bison, and even come to the auditorium for Saturday morning lectures. But she concedes that Fermilab did not do “a good enough job” of communicating and applying the new rules.

“We’re also working very hard to streamline the process for implementing the new regulations,” she adds. “The way we did it was not the best way.” Merminga notes that the DOE has provided $13m to allow the public to access areas of the campus, including the cafeteria, by the end of this year. “We’re working to open the 15th floor [of the main building], from which on a clear day you can see Lake Michigan,” says Merminga. “And we’re building a new access center that will be completed late next year or early in 2025.”

    Meanwhile, Reopen Fermilab plans to hand-deliver copies of its petition with signatures to Illinois’ Senators and Representatives at the end of the month. Yet according to Merminga, the petition has already had an impact. “Did the petition stimulate [better] communication? I would say yes,” she told Physics World. “The letter was an impetus for us for introspection, and to realize that we needed to communicate more simply and clearly and frequently.”

    The post Fermilab faces protest over visitor restrictions appeared first on Physics World.

    ]]>
    News Visitors and even Fermilab employees have found themselves restricted to specific buildings https://physicsworld.com/wp-content/uploads/2023/06/Baby-Bison.hr-small.jpg
    Tantalum polyhydride joins emerging class of high-pressure superconductors https://physicsworld.com/a/tantalum-polyhydride-joins-emerging-class-of-high-pressure-superconductors/ Mon, 19 Jun 2023 08:00:32 +0000 https://physicsworld.com/?p=108433 New superconducting hydride is the first to be made from a Group 5 transition metal

    The post Tantalum polyhydride joins emerging class of high-pressure superconductors appeared first on Physics World.

    ]]>
    A graph showing the temperature dependence of resistance for a sample of tantalum polyhydride measured at 197 GPa. An inset shows an enlarged view of the resistance curve and its temperature derivative to show the superconducting transition.

Tantalum polyhydride becomes a superconductor at a temperature of 30 K and pressures of around 200 gigapascals (GPa), say researchers at the Chinese Academy of Sciences in Beijing. This marks the first time that superconductivity has been observed in a hydride made from a Group 5 transition metal, and members of the research team say the discovery could pave the way towards synthesizing metallic hydrogen.

    Superconductors are materials that carry electrical current with no electrical resistance when cooled to below their superconducting transition temperature (Tc). They are used in applications ranging from the high-field magnets in MRI scanners and particle accelerators to quantum bits in quantum computers. However, Tc for most superconductors is just a few degrees above absolute zero, and even so-called high-temperature superconductors must be cooled to below 150 K before they can conduct electricity without resistance. Researchers are therefore seeking to develop materials that remain superconducting at higher temperatures, and ideally at room temperature.

    Hydrides take centre stage

    In theory, the metallic state of hydrogen, which is expected to occur at extremely high pressures, should be a superconductor at room temperature. Unfortunately, it is very difficult to make pure hydrogen metallic. As an alternative, scientists have begun investigating hydrides, which are compounds consisting of hydrogen and a metal.

    Within the past few years, sulphur hydride and polyhydrides have both been found to superconduct at temperatures above 200 K, albeit only at pressures more than a million times higher than atmospheric pressure at sea level. Other superconducting materials in this class include rare earth hydrides like LaH10 and YH9 and alkaline earth hydrides like CaH6. Hydrides containing zirconium, lutetium and tin have similarly been found to have a moderately high Tc.

    Most 3d transition metals have local electron spins that tend to exhibit magnetic fluctuations that oppose superconductivity. For this reason, researchers have turned their attention to 5d transition metals such as Hf and Ta. Indeed, hafnium polyhydride becomes superconducting at around 83 K.

    A new polyhydride

A team of researchers led by Changqing Jin have now synthesized an additional superconducting hydride, tantalum polyhydride (TaH3). They performed their experiment by placing the material in a diamond anvil cell, and at a pressure of 200 GPa, they observed that it superconducts at around 30 K. This makes it the first superconducting hydride to be made from the Group 5 metals, say Jin and colleagues, who investigated the superconducting phase using in situ electrical conductance as well as X-ray diffraction measurements at high pressures.

Jin explains that tantalum has a high tolerance for interstitial elements and can accommodate more than three hydrogen atoms in its lattice. However, these atoms are spaced too far from each other for electrons to hop directly between different sites in the material and produce a dissipationless electric current. “We suggest therefore that the superconductivity we have observed is related to the hybridization between the orbitals of Ta and H,” he tells Physics World. “This is totally different from what is expected for calcium hydride or rare earth polyhydride superconductors, in which electrons can directly hop between adjacent H atoms that form a very dense cage.”

The observation of superconductivity in TaH3 implies that metallic hydrogen might be realized through metallizing a covalent bond between H and other elements. The researchers, who report their work in Chinese Physics Letters, say they now plan to explore other polyhydride superconductors containing these covalent bonds.

    The post Tantalum polyhydride joins emerging class of high-pressure superconductors appeared first on Physics World.

    ]]>
    Research update New superconducting hydride is the first to be made from a Group 5 transition metal https://physicsworld.com/wp-content/uploads/2023/06/tantalum-element-907967068-iStock_Evgeny-Gromov.jpg newsletter
    UK announces £45m boost for quantum technology research https://physicsworld.com/a/uk-announces-45m-boost-for-quantum-technology-research/ Sun, 18 Jun 2023 09:00:06 +0000 https://physicsworld.com/?p=108446 The cash will be used to support 49 projects in the application of quantum technologies

    The post UK announces £45m boost for quantum technology research appeared first on Physics World.

    ]]>
    A £45m package to support universities and businesses working in the UK’s quantum technologies sector has been unveiled by the UK science minister George Freeman. The investments, which were made through UK Research and Innovation’s Technology Missions Fund, will build on the country’s National Quantum Technologies Programme that has been running for nearly a decade.

    The £45m funding will include £8m for 12 projects exploring quantum technologies for position, navigation and timing (PNT) and £6m for 11 projects working on software-enabled quantum computation. There will also be £6m for 19 feasibility studies in quantum computing applications and £25m for seven projects in quantum-enabled PNT via the Small Business Research Initiative.

    One PNT project will develop a new sensor technology that can be used underwater or underground. Led by Joseph Cotter from Imperial College London, it will explore how quantum sensors can complement global navigation systems, which have limited capability underground and underwater. The team will work with Transport for London (TfL) to test the new technology on London’s tube system.

    The software-enabled quantum-computation projects, meanwhile, will aim to improve the performance of quantum computers. Aleks Kissinger from the University of Oxford, for example, will develop quantum compilers that translate code written by humans into something the machine can run.

    Projects in quantum-computing applications will investigate the use of quantum computing and machine learning to reduce carbon emissions in aviation, develop improved methods to detect and reduce money laundering, and create a quantum-computing-based approach to enzyme-targeted drug discovery.

    “Researchers, businesses and innovators are continuously pushing the boundaries of quantum-technology development, placing the UK at the leading edge of this field,” says Will Drury, executive director of digital technologies at Innovate UK. “Through this support and investment, we will work in partnership to realize the potential of this technology for our UK economy and society.”

    Drury adds that the National Quantum Computing Centre is also investing £30m to commission quantum-computing testbeds in the UK.

    The post UK announces £45m boost for quantum technology research appeared first on Physics World.

    ]]>
    News The cash will be used to support 49 projects in the application of quantum technologies https://physicsworld.com/wp-content/uploads/2023/06/quantum-abstract-959493260-iStock_matejmo-1.jpg
    Building blocks of DNA could survive in Venus’ corrosive clouds, say astronomers https://physicsworld.com/a/building-blocks-of-dna-could-survive-in-venus-corrosive-clouds-say-astronomers/ Sat, 17 Jun 2023 11:00:16 +0000 https://physicsworld.com/?p=108437 All five nucleic acid bases remain stable in concentrated sulphuric acid, and could form the basis for an alternative way of encoding life

    The post Building blocks of DNA could survive in Venus’ corrosive clouds, say astronomers appeared first on Physics World.

    ]]>
    For a planet sometimes known as “Earth’s twin”, Venus is astoundingly inhospitable. Its surface temperature of 735 K is hot enough to melt lead. Its surface pressure of 94 atmospheres will crush all but the hardiest spacecraft. And if that wasn’t enough, its thick, oppressive clouds drip with sulphuric acid.

    Despite these disadvantages, the possibility of life on Venus is a hot topic among astronomers and astrobiologists. It last hit the headlines in 2020, when researchers led by Jane Greaves of Cardiff University, UK, announced they had observed phosphine in the planet’s atmosphere. Since the only known natural routes to phosphine on Earth involve anaerobic metabolic processes in microbial life, the observation was widely interpreted as evidence that such life might exist on Venus, too.

    Within weeks, however, other astronomers were challenging the validity of the result – sometimes in terms that were almost as corrosive as Venus’s sulphuric-acid-rich clouds. Then, in 2022, a follow-up study by NASA’s SOFIA mission found no evidence of phosphine. The earlier finding, it seems, was incorrect. With that, things went quiet.

    A cloudy habitat

    A new study by researchers in the US, Canada and the UK has now reopened the debate by focusing not on phosphine, but on the stability of nucleic acids in Venus’ clouds. These clouds stretch from 48–60 km above the planet’s surface in a near-continuous stack, and temperatures within them are comparatively mild: 263 K (-10 °C) at their outer limit, rising to a balmy 310 K (37 °C) further in. Covalent chemical bonds form readily at such temperatures, and the clouds offer both a liquid environment and a supply of energy. What’s not to like?

    In their study, which is published in PNAS, astrophysicist Sara Seager of the Massachusetts Institute of Technology and her colleagues acknowledge two “potential show-stoppers”. The first is that Venus’ clouds are critically short of water, the substance upon which all life on Earth depends. The second is that the concentration of sulphuric acid in Venus’ clouds is so high that even acid-loving organisms, such as bacteria that thrive in mine tailings and undersea volcanoes, could not survive there.

    For proponents of the life-on-Venus theory, though, this is not the end of the story. Although the so-called building blocks of life, DNA and RNA, are not stable in such high concentrations of sulphuric acid, Seager and colleagues found evidence that all five of their base molecules – the building blocks of the building blocks, if you will – can survive just fine.

    Acid test for life

    To obtain this evidence, members of the team immersed samples of the five nucleic acid bases (adenine, cytosine, guanine, thymine, and uracil) and a few similar molecules in 98% sulphuric acid. They then used a combination of spectroscopic techniques to study the molecules’ structure after 18-24 hours. With one of these techniques, carbon-13 nuclear magnetic resonance (NMR) spectroscopy, they repeated the measurement two weeks later to check whether the molecules degraded over time. For the most part, the answer was no. Among other markers of stability, the molecules’ central aromatic rings remained unbroken, and the position of the carbon “peaks” in the NMR spectrum did not change, even after two weeks of wallowing in acid.

    Proving the stability of DNA and RNA bases in sulphuric acid is one thing. Finding a way to combine these bases into a sulphuric acid-proof information-carrying biopolymer is another. Without that, there can be no Venusian version of genetics or Darwinian evolution. Still, Seager and colleagues conclude their study on a bullish note. “We do not know if the origin of life in concentrated sulfuric acid is possible, but such a possibility cannot be excluded,” they write. “Life could use concentrated sulfuric acid as a solvent instead of water and could have originated in the cloud droplets in liquid concentrated sulfuric acid… In this scenario, the Venus atmosphere could still support the strictly aerial concentrated sulfuric acid-based life.”

    The post Building blocks of DNA could survive in Venus’ corrosive clouds, say astronomers appeared first on Physics World.

    ]]>
    Blog All five nucleic acid bases remain stable in concentrated sulphuric acid, and could form the basis for an alternative way of encoding life https://physicsworld.com/wp-content/uploads/2023/06/venus-clouds.jpg newsletter
    Winners announced for the 2023 Bell Burnell Graduate Scholarship Fund https://physicsworld.com/a/winners-announced-for-the-2023-bell-burnell-graduate-scholarship-fund/ Fri, 16 Jun 2023 15:18:15 +0000 https://physicsworld.com/?p=108445 Ten new awardees revealed, taking the number of physics PhD students supported by the fund to 31

    The post Winners announced for the 2023 Bell Burnell Graduate Scholarship Fund appeared first on Physics World.

    ]]>
    A total of 10 physics PhD students from across the UK have been unveiled as the 2023 awardees of the Bell Burnell Graduate Scholarship Fund. The fund aims to improve diversity in physics by offering scholarships to PhD students from groups currently underrepresented in the physics research community. The 10 new awardees take the number of physics PhD students supported by the fund to 31.

    The fund was originally set up by the astrophysicist Jocelyn Bell Burnell in 2019 after she won the Special Breakthrough Prize in Fundamental Physics for her role in the discovery of pulsars. Bell Burnell donated her entire £2.3m prize to set up the scholarship programme, which is run by the Institute of Physics (IOP), publisher of Physics World.

    The 2023 awardees include Alix Freckelton from the University of Birmingham, Astra Sword (Open University), Clara Cafolla-Ward (Cardiff University), Karolina Szewczyk (University of Leeds), Lauren Muir (University of Glasgow), Raymond Isichei and Xinran Yang (Imperial College London), Rojita Buddhacharya (Liverpool John Moores University), Shideh Davarpanah (University of Portsmouth) and Sinéad Mannion (Queen’s University Belfast).

    “Wherever we look there are problems that need physicists to help solve them and the more diverse we can make the population of physics researchers and innovators the more effective and creative it will be,” says Rachel Youngman, deputy chief executive of the IOP. “Already students who have been supported are working across the UK in academia and business helping us solve some of the most important challenges of our times, in low carbon energy, medical sciences, computing and many, many other areas.”

    Helen Gleeson from the University of Leeds, who is chair of the fund committee, notes that the standard of applications for the fund gets higher each year. “The 10 successful applicants have all done incredibly well,” she adds. “There is no doubt that physics will provide the scientific applications and solutions to so many of the problems we face in our society and economy today and these Bell Burnell award winners will be at the very heart of that work.”

    The post Winners announced for the 2023 Bell Burnell Graduate Scholarship Fund appeared first on Physics World.

    ]]>
    News Ten new awardees revealed, taking the number of physics PhD students supported by the fund to 31 https://physicsworld.com/wp-content/uploads/2023/06/News-Bell-montage-WEB.jpg
    Snakes and Ladders inspired by the future of publishing, ‘plasmonic tongue’ has a taste for maple syrup https://physicsworld.com/a/snakes-and-ladders-inspired-by-the-future-of-publishing-plasmonic-tongue-has-a-taste-for-maple-syrup/ Fri, 16 Jun 2023 11:53:25 +0000 https://physicsworld.com/?p=108459 Excerpts from the Red Folder

    The post Snakes and Ladders inspired by the future of publishing, ‘plasmonic tongue’ has a taste for maple syrup appeared first on Physics World.

    ]]>
    One of the great things about working on Physics World is that I learn something new just about every day. And this Wednesday was no exception, when I learned about the origins of Snakes and Ladders.

    The popular board game came up at an internal conference at IOP Publishing, where one of the guest speakers was from STM – an industry association that represents the interests of academic publishers around the world. Towards the end of last year, STM held a meeting in London that brainstormed scenarios for how scholarly publishing will evolve over the next three to five years.

    The delegates identified opportunities and threats for academic publishing and these were used to create a version of Snakes and Ladders. If you are not familiar with the game, players wind their way upwards on a board by rolling dice. If a player encounters a ladder, they get a boost up the board. But, if they encounter a snake they slide down the board.

    In the STM version of the game, the ladders are labelled with opportunities in the publishing industry and the snakes with threats. The board covers six themes: artificial intelligence; open research; digital identity; collaboration; social responsibility; and research integrity.

    Fake articles

    The most significant threat, which sends a player down four levels, is the spectre of 60 fake articles from a “paper mill” being published in a favourite journal. The tallest ladder will give you a four-level boost when you join the STM Integrity Hub.

    You can download the game to print and play here.

    So what did I learn about the origin of Snakes and Ladders? Apparently the game was first called Moksha Patam and is 2000 years old. It was invented in India to teach children about virtues and vices, destiny and desire, good and bad karma.

    From ancient India to modern Canada, where researchers have developed a “plasmonic tongue” that can predict the quality of maple syrup from the raw sap that it is made from. For those who don’t live in northeastern North America, maple syrup is made from the sap of the sugar maple tree – which is harvested in the early spring. The watery sap is boiled down to create syrup in a process that is very energy intensive, so it would be useful if producers could identify bad sap that would result in inferior maple syrup.

    Gold nanoparticles

    That is where the plasmonic tongue comes in. The device contains gold nanoparticles, which have optical properties that are affected by the presence of certain sap molecules that result in poor maple syrup. By comparing 600 samples of sap and the syrup it produced, the team was able to develop a way to screen out dodgy sap.

    The research is described in ACS Food Science &amp; Technology.

    The post Snakes and Ladders inspired by the future of publishing, ‘plasmonic tongue’ has a taste for maple syrup appeared first on Physics World.

    ]]>
    Blog Excerpts from the Red Folder https://physicsworld.com/wp-content/uploads/2023/06/STM-Trends-2027.jpg
    Quantum woo: how to avoid mystical nonsense when doing physics outreach https://physicsworld.com/a/quantum-woo-how-to-avoid-mystical-nonsense-when-doing-physics-outreach/ Thu, 15 Jun 2023 14:13:53 +0000 https://physicsworld.com/?p=108431 This podcast features the physicist, author and musician Philip Moriarty

    The post Quantum woo: how to avoid mystical nonsense when doing physics outreach appeared first on Physics World.

    ]]>
    Engaging with the public is often part of the job description for academic physicists and many undertake outreach activities such as writing popular science books, podcasting or even making music videos.

    In this episode of the Physics World Weekly podcast I meet a condensed-matter physicist who has done all three and more. Philip Moriarty explains how he gets people excited about quantum mechanics while avoiding “quantum woo” – that heady and irrational mix of science and mysticism.

    Based at the UK’s University of Nottingham, Moriarty chats about how his love of heavy-metal music inspired him to write a book that explains the principles of quantum mechanics using analogies from music. He also talks about a new physics-inspired music video that he has made called “Shut up and calculate”.

    • In Physics World, Moriarty has reviewed the book Quantum Bullsh*t: How to Ruin Your Life with Advice from Quantum Physics by Chris Ferrie; and you can read our review of Moriarty’s book When the Uncertainty Principle Goes to 11: Or How to Explain Quantum Physics with Heavy Metal

    The post Quantum woo: how to avoid mystical nonsense when doing physics outreach appeared first on Physics World.

    ]]>
    Podcast This podcast features the physicist, author and musician Philip Moriarty https://physicsworld.com/wp-content/uploads/2023/06/Schrodinger-equation-list.jpg newsletter
    MRI shows potential for verification of proton beam range https://physicsworld.com/a/mri-shows-potential-for-verification-of-proton-beam-range/ Thu, 15 Jun 2023 11:30:46 +0000 https://physicsworld.com/?p=108428 Online MRI can visualize a proton beam traversing a liquid-filled phantom

    The post MRI shows potential for verification of proton beam range appeared first on Physics World.

    ]]>
    Proton beam visualization using MRI

    Proton therapy is an advanced cancer treatment technique that offers a significant advantage over conventional photon-based radiotherapy: protons have a finite range at which they deposit the majority of their dose, thus sparing healthy tissues located near the tumour. Accurate targeting, however, is essential to fully exploit the dose conformality of proton beams. And currently, there’s no imaging technique available that can provide real-time beam range monitoring for routine clinical use during proton therapy.

    As a result, proton treatments employ safety margins around the tumour to account for uncertainties caused by morphological changes along the beam path and to ensure adequate target coverage. Unfortunately, such margins (commonly 2.5–3.5% of the nominal range plus 1–3 mm) compromise the potentially high dose conformality.
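    As a rough illustration of what such margins mean in practice, the short Python sketch below evaluates this recipe (a percentage of the nominal range plus a fixed offset) for a hypothetical nominal range of 25 cm; the percentages and offsets come from the figures quoted above, but the example range is an assumption for illustration only, not a value from the study discussed below.

```python
# Illustrative only: a typical proton-therapy range margin recipe,
# margin = fraction_of_range * nominal_range + fixed_offset
def range_margin_mm(nominal_range_mm: float,
                    fraction: float = 0.035,   # 3.5% of nominal range (upper quoted value)
                    offset_mm: float = 3.0) -> float:  # plus 3 mm (upper quoted value)
    """Return the safety margin in millimetres."""
    return fraction * nominal_range_mm + offset_mm

# For a hypothetical 250 mm (25 cm) nominal range:
print(f"{range_margin_mm(250.0):.1f} mm")  # ~11.8 mm added beyond the target
```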

    One approach proposed to address this dilemma and improve targeting accuracy in treatments of moving tumours is the use of MRI guidance – as implemented for photons with hybrid MR-linac technology. With this aim, a research team at OncoRay and the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) has successfully integrated a low-field open MR scanner into a proton research beam line.

    Aswin Hoffmann and his team have now used this research system to demonstrate that online MRI can visualize the proton beam and reveal its range during irradiation of liquid-filled phantoms. They report their findings in Proceedings of the National Academy of Sciences.

    Seeing the beam

    The set-up at OncoRay comprises an open 0.22 T MR scanner, radio-frequency-shielded by a Faraday cage and installed in the path of a horizontal proton research beam line. For this latest study, the researchers placed a 10 × 10 × 6.5 cm polyethylene box filled with tap water centrally in the MRI receiver coil. They then irradiated the phantom using proton beam energies of 200, 207 and 215 MeV (at a beam current of 32 nA) and currents of 8, 16, 32 and 64 nA (at 207 MeV).

    During each irradiation, the researchers performed time-of-flight angiography MRI, acquiring images for 5 s, starting 15 s after the start of irradiation. They observed energy- and current-dependent proton beam signatures that resembled the shape of the dose distribution measured using radiochromic film.

    With increasing beam energy, the range (seen as a hypointense signal on the MR image) was increasingly displaced along the beam direction. The measured range shifts relative to the 200 MeV beam – 1.6 and 3.4 cm, for 207 and 215 MeV beam energies, respectively – agreed with calculated range values to within 2 mm. The intensity of the signature decreased with decreasing beam current and faded out below 8 nA.
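    These shifts are consistent with simple range–energy scaling. A minimal sketch, assuming the Bragg–Kleeman rule R ≈ αE^p with approximate textbook parameters for water (α ≈ 0.0022 and p ≈ 1.77, giving R in centimetres for E in MeV – values not taken from the paper), reproduces shifts of roughly the size quoted above:

```python
# Bragg-Kleeman estimate of proton range in water: R ~ alpha * E**p
# alpha and p are approximate textbook values for water, not from this study
ALPHA, P = 0.0022, 1.77          # gives R in cm when E is in MeV

def range_in_water_cm(energy_mev: float) -> float:
    """Approximate continuous-slowing-down range of protons in water (cm)."""
    return ALPHA * energy_mev ** P

r200 = range_in_water_cm(200.0)
for energy in (207.0, 215.0):
    shift = range_in_water_cm(energy) - r200
    print(f"{energy:.0f} MeV: range shift vs 200 MeV ~ {shift:.1f} cm")
# prints shifts of roughly 1.6 cm and 3.5 cm, close to the measured 1.6 and 3.4 cm
```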

    The researchers note that the MRI contrast mechanism underlying these observations is not yet fully understood. However, they suggest that as similar signatures are found in other liquids, but not in flow-restricted water phantoms or gels, the effect may be due to convection arising from radiation-induced local heating and thermal expansion of the water.

    QA and beyond

    The imaging technique could find immediate application for geometric quality assurance in MR-integrated proton therapy systems currently under development. Further ahead, it may be possible to use MRI to provide real-time feedback on the beam range and energy deposition during proton therapy, although the method has so far only proven feasible in liquids and is likely not transferable to patients in its current form.

    “Therefore, our research group is continuing to unravel the underlying MRI contrast mechanisms behind these observations, in order to ideally design a novel MRI sequence that would allow MRI-based in vivo proton range verification,” explains first author Sebastian Gantz. Such a scheme could provide an alternative to proton range monitoring approaches based on ionoacoustics or detection of secondary radiation.

    “While both ionoacoustics and prompt gamma detection are indirect methods capable of detecting the proton beam end-of-range, the appeal of the MRI-based method is that we are able to visualize the proton beam in 2D,” Gantz tells Physics World. “Potentially, in the future, we can use one imaging modality, in-beam MRI, to concurrently visualize both the beam and the patient anatomy.”

    The post MRI shows potential for verification of proton beam range appeared first on Physics World.

    ]]>
    Research update Online MRI can visualize a proton beam traversing a liquid-filled phantom https://physicsworld.com/wp-content/uploads/2023/06/MRI-proton-fig1-featured.jpg
    Tiny levitating spheres could join the hunt for dark matter https://physicsworld.com/a/tiny-levitating-spheres-could-join-the-hunt-for-dark-matter/ Thu, 15 Jun 2023 07:27:57 +0000 https://physicsworld.com/?p=108425 Novel detector would target exceptionally light relatives of the axion particle

    The post Tiny levitating spheres could join the hunt for dark matter appeared first on Physics World.

    ]]>
    Physicists in China want to widen the search for dark matter using a minuscule levitating oscillator. By tuning the resonant frequency of their sensor across three orders of magnitude, they say they should be able to place new lower limits on the interaction strength of putative low-mass dark matter with an equally large range of masses. However, they have still to show that they can screen out all possible sources of noise.

    Scientists inferred the presence of dark matter in the universe decades ago, having observed that stars far from the centre of galaxies rotate more quickly than expected. Despite much effort in the meantime, however, researchers have still to detect any dark matter directly. The leading candidate for many years has been weakly interacting massive particles (WIMPs), which weigh 10¹⁰–10¹² eV/c² and tie in with supersymmetry theory. But multiple searches at the Large Hadron Collider at CERN and at dedicated underground facilities have so far drawn a blank.

    At the same time, researchers have stepped up the hunt for much lighter particles. Most prominent among these is the axion, a spin-0 boson with a mass somewhere between 10⁻⁶ and 10⁻³ eV/c² that was originally proposed to resolve a quandary with the strong nuclear force. But beyond this lies a zoo of hypothetical “axion-like particles” (ALPs), which, unlike axions themselves, can in principle take on any of a vast range of masses and interaction strengths.

    Coherent waves

    Searching for these particles using mechanical oscillators relies on the fact that such low-mass and therefore abundant entities would behave as coherent waves. As the Earth passes through the cloud of dark matter thought to envelop the Milky Way, the wavelike nature of ALPs would lead to a periodic variation in the motion of a suitably sensitive oscillator. The size of the effect would be proportional to the number of neutrons in the detector and a specific but unknown coupling strength, while the oscillation frequency would be proportional to the ALP mass. Any variation at some specific frequency that couldn’t be explained by more humdrum noise sources might therefore indicate the presence of dark matter.

    Researchers have looked for such modulations using France’s MICROSCOPE space mission and experiments on the ground. In one ground-based experiment, Eric Adelberger and colleagues at the University of Washington in the US monitored the movement of a rotating torsion balance. Obtaining a null result after analysing nearly seven years of data, in 2019 they were able to impose strict new limits on interaction strengths at oscillation frequencies of about 10⁻⁹–10⁻⁴ Hz, corresponding to particle masses of 10⁻²³–10⁻¹⁸ eV/c².

    In the latest work, Jiangfeng Du at the University of Science and Technology of China in Hefei and colleagues in Hefei and Nanjing instead turn their attention to a magnetically levitated oscillator. A discovery in this case would come in the form of an enhanced vertical oscillation at some particular frequency beyond that expected from straightforward thermal vibrations.

    Two tiny spheres

    The oscillator – which has yet to be built – would consist of two tiny levitating spheres joined by a thin vertical glass rod. The uppermost sphere, a diamagnet measuring 1 mm across, would be held inside a ring of magnets. The lower one, a paramagnet with a radius of only 11 μm, would be suspended just above a smaller set of magnets (see figure). As the connected spheres move up and down their motion would be monitored by measuring the scattering of a laser beam from the upper sphere.

    Du and colleagues say that the sensitivity of the oscillator could be enhanced by varying its resonant frequency. The idea is to move the lower magnet up and down, so varying the distance between it and the lower sphere. Smaller distances would mean higher magnetic gradients, which in turn would yield lower resonant frequencies. In this way they could probe frequencies of 0.1–100 Hz.

    The researchers explain that experiments would involve monitoring the oscillator’s motion continuously for just over a day at a time, scanning its resonant frequency across the range as they do so. By repeating this process about 100 times, they calculate that they should be able to push down the upper limit on ALP coupling strength for masses of 10⁻¹⁶–10⁻¹³ eV/c² by at least an order of magnitude compared to previous results.
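    The link between the scanned frequency band and the quoted mass window follows from the Compton relation f = mc²/h. A quick sketch using standard constants (no assumptions beyond the relation itself):

```python
# Convert an oscillation frequency into an ALP mass via m*c^2 = h*f
H_EV_S = 4.135667696e-15        # Planck constant in eV s

def alp_mass_ev(frequency_hz: float) -> float:
    """ALP mass (in eV/c^2) whose coherent oscillation frequency is f."""
    return H_EV_S * frequency_hz

for f in (0.1, 100.0):          # the proposed resonant-frequency scan range
    print(f"{f:>6.1f} Hz -> {alp_mass_ev(f):.1e} eV/c^2")
# 0.1 Hz corresponds to ~4e-16 eV/c^2 and 100 Hz to ~4e-13 eV/c^2, matching the
# quoted 1e-16 to 1e-13 eV/c^2 window at the order-of-magnitude level
```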

    Minimizing noise

    Achieving this sensitivity would involve cutting out various noise sources. The experiment would limit unwanted vibrations through a multi-stage suspension system and minimize thermal noise by operating at just 30 mK. It would also reduce measurement noise in the form of detector imperfections and laser-pressure fluctuations. The researchers acknowledge that the suspension system cannot filter out seismic waves, tidal forces and other low-frequency interference, but they aim to deal with these by using active vibration cancellation.

    What is more, they say it should be possible to extend the search for ALPs by employing a sensor array – enhancing the sensitivity by at least the square root of the number of detectors.

    David Moore of Yale University in the US is enthusiastic about the new research, describing it as “a nice proposal” for using mechanical sensors to hunt for dark matter. But he emphasizes just how hard it can be to eliminate noise, having himself had to battle with the tiniest of vibrations when searching for dark matter using a minuscule mass levitated optically. “Some of the background events arose from just having people in the room talking,” he recalls.

    The research is described in Chinese Physics Letters.

    The post Tiny levitating spheres could join the hunt for dark matter appeared first on Physics World.

    ]]>
    Research update Novel detector would target exceptionally light relatives of the axion particle https://physicsworld.com/wp-content/uploads/2023/06/Dark-matter-detector.jpg
    Wireless ultrasound monitor is ready for a workout https://physicsworld.com/a/wireless-ultrasound-monitor-is-ready-for-a-workout/ Wed, 14 Jun 2023 15:00:28 +0000 https://physicsworld.com/?p=108419 New sensor transmits data continuously and can be worn comfortably on the skin without frequent adjustments

    The post Wireless ultrasound monitor is ready for a workout appeared first on Physics World.

    ]]>
    Researchers in the US have designed an ultrasound transducer that transmits information wirelessly and can be worn comfortably on the skin, overcoming two major shortcomings of previous devices. Developed by Muyang Lin, Sheng Xu and colleagues at the University of California San Diego (UCSD), the new transducer could be used to monitor patients with serious cardiovascular conditions, as well as to help athletes keep track of their training.

    Ultrasound transducers work by transmitting high-frequency sound waves into the body, then detecting the waves reflected from tissues that have different densities and acoustic properties. Over the past several decades, improvements to probe and circuit designs, combined with better algorithms for processing ultrasound signals, have produced transducers that can conform to the folds of a person’s skin. This has allowed the devices to measure ultrasound signals continuously, which is especially useful for monitoring the pulsing of veins and arteries.

    Researchers in Xu’s lab had previously developed wearable ultrasound probes that could monitor several physiological parameters of deep tissues, including blood pressure, blood flow and even cardiac imaging. Even so, the technology had some shortcomings. “These wearable probes are all wired to a bulky machine for power and data collection, and will shift in relative position during human motion, making them lose track of targets,” explains Lin, a PhD student in nanoengineering at UCSD and lead author of a paper in Nature Biotechnology on the device.

    Because of these flaws, previous continuous ultrasound sensors could seriously inhibit a wearer’s mobility. They also required frequent readjustments as wearers moved around.

    Ultrasound untethered

    To address these problems, the UCSD team developed a new device based on a miniaturized, flexible control circuit that interfaces with an array of transducers. This device collects the ultrasound signals but does not process them directly. Instead, it relays them wirelessly to a computer or smartphone, which processes them using machine learning.

    “We developed an algorithm to automatically analyse the signal and select the channel that has the best signal on moving target tissue,” Lin explains. “Therefore, the signals from the target tissue are continuous, even during human motion.”
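    The article does not spell out that selection step, so the sketch below is only a schematic stand-in: it scores each transducer channel by the pulsatility of its echo trace and picks the strongest, a deliberately simple heuristic rather than the machine-learning model the team actually trained.

```python
import numpy as np

def select_best_channel(traces: np.ndarray) -> int:
    """Pick the transducer channel with the strongest pulsatile signal.

    traces: array of shape (n_channels, n_samples), one echo trace per
    channel. Illustrative criterion only: the channel whose detrended
    trace has the largest standard deviation (the clearest pulsation).
    """
    detrended = traces - traces.mean(axis=1, keepdims=True)
    return int(np.argmax(detrended.std(axis=1)))

# Example: 16 channels of noise, with a strong pulse added to channel 5
rng = np.random.default_rng(0)
traces = 0.1 * rng.standard_normal((16, 2000))
traces[5] += np.sin(np.linspace(0, 20 * np.pi, 2000))
print(select_best_channel(traces))   # -> 5
```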

    The researchers tested this capability by using the device to track the position of a human subject’s carotid artery while monitoring the pulsation of blood within. This artery supplies blood to the head and neck, so they trained the algorithm to recognize displacements caused by different motions of the subject’s head.

    Although the team only trained the algorithm on a single subject, a further advanced adaptation algorithm allowed new wearers to use the sensor with minimal retraining. Once trained, the device could detect ultrasound signals of the carotid artery’s pulsation as deep as 164 mm beneath the skin, even when the wearer was exercising.

    Multi-use monitor

    Xu and colleagues originally intended to test the sensor’s capabilities as a blood pressure monitor. Through their experiments, however, they discovered it could also monitor other important parameters, including arterial stiffness, the volume of blood pumped out by the heart and the amount of air exhaled by the wearer.

    Ultimately, the researchers predict their design could open up a wide range of possibilities for continuous ultrasound monitoring. “By using wearable ultrasound technology, we can untether the patient from bulky machines and automate the ultrasonic examinations,” Lin says. “Deep tissue physiology can be monitored in motion, which provides unprecedented opportunities for medical ultrasonography and exercise physiology.”

    These capabilities could be life-changing for patients living with cardiovascular conditions, Lin says. “For at-risk populations, abnormal values of blood pressure and cardiac output at rest or during exercise are hallmarks of heart failure,” he explains. But the applications don’t end there. “For a healthy population, our device can measure cardiovascular responses to exercise in real-time. Thus, it can provide insights into the actual workout intensity exerted by each person, which can guide the formulation of personalized training plans.”

    The post Wireless ultrasound monitor is ready for a workout appeared first on Physics World.

    ]]>
    Research update New sensor transmits data continuously and can be worn comfortably on the skin without frequent adjustments https://physicsworld.com/wp-content/uploads/2023/06/wireless-ultrasound-patch-1.jpg newsletter1
    Thrown away: what is the real impact of our waste? https://physicsworld.com/a/thrown-away-what-is-the-real-impact-of-our-waste/ Wed, 14 Jun 2023 13:08:26 +0000 https://physicsworld.com/?p=108226 Tom Tierney reviews Wasteland: the Dirty Truth About What We Throw Away, Where It Goes, and Why It Matters by Oliver Franklin-Wallis

    The post Thrown away: what is the real impact of our waste? appeared first on Physics World.

    ]]>
    Up to 7% of the world’s gold reserves may currently be contained in old electronic devices left in cupboards “just in case” those gadgets may one day be needed again.

    And it’s not just gold stowed away. One tonne of electronic waste can contain 50 times more copper than a tonne of copper ore. There’s also iron, aluminium and several rare earth elements in those devices too. Yet only 17.4% of electronic waste is being recycled – and nobody seems to know what’s happened to the rest.

    I learnt these facts in Wasteland: the Dirty Truth About What We Throw Away, Where It Goes, and Why It Matters by journalist Oliver Franklin-Wallis, an engrossing book that educates the reader on what actually happens to the things we throw away.

    And why does that un-recycled electronic waste in our cupboards matter?

    Wealthy countries have largely outsourced their waste problems to the global south

    If metals are not recycled, then more will be dug up by the mining industry, which produces a staggering 100 billion tonnes of waste a year. Franklin-Wallis follows the story to Brumadinho in Brazil where, on 25 January 2019, a dam containing waste from an iron ore mine broke, releasing nearly 12 million cubic metres of toxic slurry and killing 272 people. In Wasteland, this disaster is presented to us in the context of a world where wealthy countries have largely outsourced their waste problems to the global south, and rarely confront the complexity of dealing with it all.

    Franklin-Wallis highlights that our relationship with waste was reshaped by the emergence of plastic, meaning that one third of what we now throw away is less than a year old. The concept of the disposable society has also spread to the clothing industry. He describes, for example, how a power station in Stockholm has switched from burning coal to burning clothing. The author also mentions the textile industry of Ghana, which has collapsed due to the enormous quantities of discarded clothes shipped there from Europe and America.

    Wasteland does not hide from the difficulty of finding solutions to any of the problems discussed. But it does not offer an entirely negative message either. Despite many failures and the pernicious effects of greenwashing, we learn that recycling can work: 80% of the copper ever mined, for example, is still in circulation. However, Franklin-Wallis argues that the solution to our waste problem is, ultimately, simple: we should just buy less stuff!

    • 2023 Simon & Schuster UK 304pp £9.99 ebook/£20.00 hb

    The post Thrown away: what is the real impact of our waste? appeared first on Physics World.

    ]]>
    Opinion and reviews Tom Tierney reviews Wasteland: the Dirty Truth About What We Throw Away, Where It Goes, and Why It Matters by Oliver Franklin-Wallis https://physicsworld.com/wp-content/uploads/2023/06/2023-06-Tierney-Watseland-Brazil-dam-rupture-1564968175-Shutterstock_Christyam-de-Lima_.jpg newsletter
    Graphene’s ‘cousin’ makes a switchable topological insulator https://physicsworld.com/a/graphenes-cousin-makes-a-switchable-topological-insulator/ Wed, 14 Jun 2023 11:30:46 +0000 https://physicsworld.com/?p=108421 A two-dimensional form of the element germanium could make electronics faster and more energy efficient

    The post Graphene’s ‘cousin’ makes a switchable topological insulator appeared first on Physics World.

    ]]>
    Germanene – a two-dimensional, graphene-like form of the element germanium – can carry electricity along its edges with no resistance. This unusual behaviour is characteristic of materials known as topological insulators, and the researchers who observed it say the phenomenon could be used to make faster and more energy-efficient electronic devices.

    Like graphene, germanene is an atomically thin material with a honeycomb structure, and its electronic band structure likewise contains a point at which the valence and conduction bands meet. At this meeting point, spin-orbit coupling creates a narrow gap between the bands within the material’s bulk, causing it to act as an insulator. Along the material’s edges, however, special topological states arise that bridge this gap and allow electrons to flow unhindered.

    Materials with this property – conducting electricity along their edges, while acting as insulators in their bulk – are called topological insulators. Since the edge-state electric current induces a transverse spin current, they are also known as quantum spin Hall systems by analogy with the better-known quantum Hall effect, in which strong magnetic fields induce electric current to flow along the edge of a semiconductor.

    A new topological insulator emerges

    In graphene, the quantum spin Hall effect is too weak to observe, but researchers led by Pantelis Bampoulis of the University of Twente in the Netherlands have now spotted it in germanene. To do this, they employed a variety of experimental and theoretical techniques, including low-temperature scanning tunnelling microscopy (STM) and scanning tunnelling spectroscopy (STS) as well as density functional theory and tight-binding calculations.

    “With STM and STS, we could directly measure the electronic band structure of germanene and showed that it has a band gap in its interior and conductive states at its edges,” Bampoulis explains. “This means that it doesn’t conduct electricity in the middle, but does along its edges.”

    Switching between states

    The fact that germanene is slightly buckled, rather than completely flat like graphene, introduces a potentially useful property, Bampoulis adds. “In our study, we were also able to apply an electric field to our sample using the STM to change the topological state of germanene,” he tells Physics World. “When a critical field is reached, the topological band gap closes and the material becomes a topological semimetal. Beyond this field, a conventional band gap opens up and the topological edge states disappear – in other words, the germanene becomes a normal insulator.”

    Because germanene transitions so readily between a perfect conductor and an insulator, the researchers say it could be used to make a novel type of field-effect transistor. In the “on” state of such a device, current would flow without energy loss along the topological edge states. The team also suggests that such edge states could be useful in quantum computing, where their robustness could make quantum devices more stable and resistant to errors.

    The University of Twente researchers say they are now busy trying to increase the number of conductive channels and further tune the quantum state of germanene. “These efforts will involve fabricating germanene nanoribbons (thin, elongated strips) and implementing a twist in stacks of two germanene layers,” Bampoulis reveals.

    The present work is detailed in Physical Review Letters.

    The post Graphene’s ‘cousin’ makes a switchable topological insulator appeared first on Physics World.

    ]]>
    Research update A two-dimensional form of the element germanium could make electronics faster and more energy efficient https://physicsworld.com/wp-content/uploads/2023/06/pantelis.jpg newsletter1
    Novel breathalyser rapidly tests for COVID-19 https://physicsworld.com/a/novel-breathalyser-rapidly-tests-for-covid-19/ Wed, 14 Jun 2023 09:00:34 +0000 https://physicsworld.com/?p=108362 Laser spectroscopy technique analyses exhaled breath to diagnose illness

    The post Novel breathalyser rapidly tests for COVID-19 appeared first on Physics World.

    ]]>
    A new medical diagnostic tool based on the use of optical frequency comb technology can rapidly test for COVID-19 in exhaled breath. The technique, developed by researchers at JILA, the National Institute of Standards and Technology (NIST) and the University of Colorado Boulder, could also be used to diagnose other conditions or diseases, particularly those of respiratory, gastrointestinal or metabolic origin.

    Being able to rapidly test for infection by viruses like SARS-CoV-2, the virus responsible for COVID-19, is crucial for fighting future pandemics. Testing exhaled human breath could come into its own here since each breath contains more than 1000 distinct molecules, some of which can indicate underlying medical conditions or infections. These molecules can be detected and identified by measuring their selective absorption of laser light at different optical frequencies.

    In 2008 researchers led by Jun Ye of JILA demonstrated that frequency comb spectroscopy – a technique originally developed for optical atomic clocks and precision metrology – could potentially identify disease biomarkers in exhaled human breath. The technique, which essentially uses laser light to distinguish between different molecules, lacked sensitivity, however, and could not link specific molecules to disease states. They did not, therefore, test it to diagnose illnesses.

    Parts-per-trillion level sensitivity

    In 2021 Ye and colleagues improved the sensitivity of their technique by 1000 times, meaning that it could now detect certain biomolecules at the parts-per-trillion level.  In their new study, detailed in the Journal of Breath Research, they applied supervised machine learning to process the light absorption patterns and make a direct connection to potential disease states without going through the intermediate step of identifying molecules first.

    To test their method, the researchers collected breath samples from 170 individuals, half of whom had SARS-CoV-2 when tested using conventional PCR (polymerase chain reaction). They then piped the samples through a tube into their new breathalyser, which consists of optical frequency combs that generate mid-infrared laser light at tens of thousands – and sometimes hundreds of thousands – of distinct optical frequencies. Using a pair of high-reflectivity mirrors, the frequency-comb light passes through the breath gas samples around 4000 times, so that the molecular absorption signals are significantly enhanced.

    The team then used machine-learning algorithms to analyse the ultrasensitive absorption signals, measured at around 15,000 frequencies, and decide whether each subject was infected. The approach was validated by asking the algorithm to predict the COVID status of every individual and comparing those predictions with the results of their PCR tests.
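    As a toy illustration of this classification step – using synthetic numbers rather than real breath spectra, and a generic cross-validated classifier rather than the authors’ actual pipeline – one could feed the per-frequency absorbances into something like the following:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the real data: 170 subjects, absorbance at ~15,000
# comb frequencies, with a weak fake "infection" signature in a few channels
rng = np.random.default_rng(42)
n_subjects, n_freqs = 170, 15000
X = rng.normal(size=(n_subjects, n_freqs))
y = np.repeat([0, 1], n_subjects // 2)      # 0 = PCR negative, 1 = PCR positive
X[y == 1, :200] += 0.25                     # injected signature (synthetic only)

# Regularized logistic regression scored by five-fold cross-validation
model = make_pipeline(StandardScaler(), LogisticRegression(C=0.01, max_iter=1000))
print(f"cross-validated accuracy: {cross_val_score(model, X, y, cv=5).mean():.2f}")
```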

    An alternative to PCR tests

    The results from the new laser spectroscopy technique matched 85% of those from PCR, which is “excellent” according to medical diagnostic standards.

    “The technique could be an alternative to PCR tests for COVID-19,” says study lead author Qizhong Liang. “The laser-based breath test is much faster at obtaining the result and in future systems we are implementing a real-time detection capability by asking people to breathe directly into our apparatus.”

    The detection is also, of course, non-invasive – in contrast to the nasal swabs that we all have bad memories of from the pandemic. “It might thus encourage more people to get tested,” he says. “Another interesting note is that the breath sample remains intact after the test, allowing time-dependent studies on these samples if there is future interest.”

    The JILA researchers will now be looking into the applicability of the technique to diagnose other conditions or diseases, particularly those of respiratory, gastrointestinal or metabolic origin. Indeed, they are setting up a collaboration with paediatricians to analyse the breath of asthmatic children. They also plan to reduce the dimensions of the instrument, which is currently metres in size.

    “We have extended the spectral coverage of the method to allow for the detection of many more molecules,” Ye tells Physics World. “In this way, we can detect even more chemical information from breath and further improve diagnostic accuracy.”

    The post Novel breathalyser rapidly tests for COVID-19 appeared first on Physics World.

    ]]>
    Research update Laser spectroscopy technique analyses exhaled breath to diagnose illness https://physicsworld.com/wp-content/uploads/2023/06/Low-Res_High_Tech_Breathalyzer_PC128.jpg newsletter1
    ‘More than Moore’ webinar explores the future of neuromorphic and quantum computing https://physicsworld.com/a/more-than-moore-webinar-explores-the-future-of-neuromorphic-and-quantum-computing/ Tue, 13 Jun 2023 16:01:02 +0000 https://physicsworld.com/?p=108417 Four experts explore technologies that could play roles in computers of the future

    The post ‘More than Moore’ webinar explores the future of neuromorphic and quantum computing appeared first on Physics World.

    ]]>

    Last week I had the pleasure of moderating a webinar panel session that looked at the future of computer technology beyond the current era of ever shrinking silicon transistors as defined by Moore’s law.

    Called “More than Moore”, the webinar featured three panellists working in neuromorphic computing, a field that seeks to create information processing systems that mimic the human brain. We were also joined by a physicist who believes that quantum computing will play a role in the information processing of the future.

    The panellists were Steve Furber of the UK’s University of Manchester, who does research on neural systems engineering; Chaoran Huang of the Chinese University of Hong Kong, who works on silicon photonics, photonic integrated circuits, and nonlinear optics; Bhavin Shastri at Canada’s Queen’s University, who designs and builds programmable nanophotonic processors; and Renbao Liu of the Chinese University of Hong Kong, who works on quantum nonlinear spectroscopy.

    Lively and fascinating

    It was a lively and fascinating discussion and I learned a lot about both neuromorphic and quantum computing. You can watch the webinar free of charge, and I hope that you enjoy it as much as I did.

    The webinar is sponsored by IOP Publishing, which also brings you Physics World.

    The post ‘More than Moore’ webinar explores the future of neuromorphic and quantum computing appeared first on Physics World.

    ]]>
    Blog Four experts explore technologies that could play roles in computers of the future https://physicsworld.com/wp-content/uploads/2023/06/computing-web-27691007-iStock_agsandrew.jpg
    Symmetry breaking in ‘galactic tetrahedrons’ linked to parity violation https://physicsworld.com/a/symmetry-breaking-in-galactic-tetrahedrons-linked-to-parity-violation/ Tue, 13 Jun 2023 13:58:39 +0000 https://physicsworld.com/?p=108409 Large-scale structures could be influenced by new physics

    The post Symmetry breaking in ‘galactic tetrahedrons’ linked to parity violation appeared first on Physics World.

    ]]>
    Astronomers in the US have discovered an unexpected asymmetry in the relative positions of galaxies that are hundreds of millions of light-years apart. The phenomenon could be explained by a breaking of the symmetry of the laws of nature that is believed to have occurred shortly after the Big Bang. As a result, the observation could help explain why there appears to be much more matter than antimatter in the observable universe.

    The discovery was made by analysing a database of over one million galaxies observed by the Baryon Oscillation Spectroscopic Survey (BOSS). The research was done by Jiamin Hou and Zachary Slepian at the University of Florida, and Robert Cahn at the Lawrence Berkeley National Laboratory in California, who found the unexpected pattern.

    The observation is related to parity symmetry, which is obeyed by gravity and by the long-range electromagnetic interaction of the Standard Model of particle physics. Parity requires that a physical system behave the same way as its mirror image. Human hands, for example, are mirror images of each other, but the laws of physics apply equally to right and left hands.

    Parity violation

    In the microscopic world, however, parity symmetry can be violated by the weak interaction and possibly by the strong interaction – which both act at very short distances.

    The trio explored parity symmetry on a very large scale by drawing lines between quadruplets of galaxies that are separated by distances between 65 million and 500 million light-years. As they showed in a recent paper in Physical Review Letters, the tetrahedrons created by this exercise could then be analysed for evidence of parity violation.

    Now, they report the result of such a study, which Slepian describes as a “huge surprise”.

    The researchers defined right- and left-handed galactic tetrahedrons based on how each galaxy was connected to its closest and farthest partners. They found significantly more tetrahedrons of one handedness than of the other.
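    The handedness of a single tetrahedron can be captured by the sign of a scalar triple product, as the minimal sketch below illustrates. This is only meant to convey the geometric idea; the team’s actual measurement relies on a statistical, correlation-function-based analysis of the full galaxy catalogue rather than a tetrahedron-by-tetrahedron count like this.

```python
import numpy as np

def handedness(base, g1, g2, g3):
    """Return +1 or -1 for the chirality of a galaxy tetrahedron.

    The three partner galaxies are ordered by distance from the base
    galaxy; the sign of the scalar triple product of the separation
    vectors then gives the handedness. Mirror-reflecting the whole
    configuration flips the sign.
    """
    partners = sorted([g1, g2, g3], key=lambda g: np.linalg.norm(g - base))
    v1, v2, v3 = (g - base for g in partners)
    return int(np.sign(np.dot(np.cross(v1, v2), v3)))

# A tetrahedron and its mirror image (z -> -z) have opposite handedness
pts = [np.array(p, float) for p in ([0, 0, 0], [1, 0, 0], [0, 2, 0], [0, 0, 3])]
mirror = [p * np.array([1.0, 1.0, -1.0]) for p in pts]
print(handedness(*pts), handedness(*mirror))   # prints: 1 -1
```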

    Galactic tetrahedrons

    “For any given galaxy distribution we assume that the clustering is invariant under rotation about any galaxy,” explains Slepian. “So, if I’m sitting in one galaxy, I should see that the pattern of clustering is on average the same wherever I rotate my head and look. Yet instead we see an excess of tetrahedra over their mirror images.”

    Despite the strength of the effect, the reason for this handedness remains a mystery. Gravity is the only known force that can act over the huge distances separating the galaxies, and it should not violate parity. Instead, Slepian says that the asymmetry “must have been imprinted even earlier in the universe’s history when other forces were at play”.

    This takes us all the way back to the period of cosmic inflation, which occurred about 10⁻³³ s after the Big Bang. At this point the universe experienced a brief period of extremely rapid expansion. Physicists believe that quantum fluctuations during inflation have since expanded to become the large-scale structure of the universe. Therefore, any parity violation present during inflation could become imprinted in how galaxies are distributed in the universe 13.7 billion years later.

    The origin of this parity violation remains unknown. “It could have been a new force, or a new particle, acting on a quantum scale at that time,” says Slepian.

    Missing antimatter

    This potential observation of parity violation in how galaxies are distributed is exciting news. As well as suggesting the existence of physics beyond the Standard Model, it could also help solve another of physics’ deepest mysteries: why there is much more matter than antimatter in the universe.

    The Standard Model predicts that equal amounts of matter and antimatter should have been formed in the Big Bang. Had that happened, matter and antimatter would have annihilated each other, leaving the universe with neither. Luckily for us there seems to have been an excess of matter left over – a phenomenon called baryogenesis.

    It is possible that the mechanism responsible for the parity violation behind this latest astronomical observation is also related to baryogenesis.

    “There’s a range of mechanisms that can cause parity violation, all pretty speculative,” says Slepian. He cites hypothetical particles called axions, or one of the fundamental forces behaving differently in the high energies of the Big Bang. “While it’s not guaranteed that whatever mechanism is producing this parity violation in the galaxies could also explain baryogenesis, I think there certainly could be a relationship.”

    While the existence of this galactic asymmetry has not been established beyond any doubt, the findings, if confirmed, would provide strong evidence for inflation and for physics beyond the Standard Model. It remains possible that a systematic error in the data is responsible for the observation. “I’ll feel a lot better once the same signal is seen in a different dataset taken by a different instrument with different software and different people,” says Slepian.

    Slepian, Hou and Cahn are all members of the science team of the Dark Energy Spectroscopic Instrument (DESI) at Kitt Peak National Observatory. It will observe over 35 million galaxies, and the trio intend to use DESI to make further observations to confirm their findings.

    The results are described in Monthly Notices of the Royal Astronomical Society.

    The post Symmetry breaking in ‘galactic tetrahedrons’ linked to parity violation appeared first on Physics World.

    ]]>
    Research update Large-scale structures could be influenced by new physics https://physicsworld.com/wp-content/uploads/2023/06/JWST-galaxies.jpg newsletter1
    Defect engineering for quantum memory chips https://physicsworld.com/a/defect-engineering-for-quantum-memory-chips/ Tue, 13 Jun 2023 12:31:00 +0000 https://physicsworld.com/?p=108299 Available to watch now, Physics World explores the developments in the defect engineering of materials to create chip devices for quantum communications and computing

    The post Defect engineering for quantum memory chips appeared first on Physics World.

    ]]>

    Atomic-scale defects in crystals can make excellent quantum memories that can be written and read out using lasers, and could form the basis of future quantum communications and computing systems. Creating these defects “on demand” and engineering memory chips with arrays of defects is very challenging.

    In this webinar, Jason Smith will talk about the methods used, the current state of the art, and what we are learning along the way.

    Jason Smith is professor of photonic materials and devices at the University of Oxford, and founding editor-in-chief of the new IOP Publishing journal Materials for Quantum Technology. His research focuses on engineering materials and devices in which photons and electrons communicate in controlled ways, as a means to develop new technologies in sensing, communications and computing.

    The post Defect engineering for quantum memory chips appeared first on Physics World.

    ]]>
    Webinar Available to watch now, Physics World explores the developments in the defect engineering of materials to create chip devices for quantum communications and computing https://physicsworld.com/wp-content/uploads/2023/06/2023-06-29-webinar-image.jpg
    Boron arsenide single crystals with ultrahigh thermal conductivity and carrier mobility https://physicsworld.com/a/boron-arsenide-single-crystals-with-ultrahigh-thermal-conductivity-and-carrier-mobility/ Tue, 13 Jun 2023 10:33:58 +0000 https://physicsworld.com/?p=108056 Available to watch now, Physics World explores the potential of this “wonder material” for semiconductor devices of the future

    The post Boron arsenide single crystals with ultrahigh thermal conductivity and carrier mobility appeared first on Physics World.

    ]]>
    Semiconductors are the most important part of modern electronics. A good semiconductor should have the right band gap, high carrier mobility for both electrons and holes, and high thermal conductivity, but the semiconductors currently available do not meet all of these requirements at once. Boron arsenide (BAs) seems to be the ideal semiconductor: it has a band gap of ~2.1 eV, carrier mobilities above 1400 cm² V⁻¹ s⁻¹ for both electrons and holes, and an isotropic thermal conductivity higher than 1300 W m⁻¹ K⁻¹ at room temperature.

    In this webinar, the speaker, Zhifeng Ren, will present on what has been done and what is expected for this special material.

    Zhifeng Ren is a M D Anderson Chair Professor in the Department of Physics at the University of Houston, and director of the Texas Center for Superconductivity at the University of Houston (TcSUH). He obtained his PhD from the Institute of Physics of the Chinese Academy of Sciences in China in 1990. He was a postdoctoral fellow and research faculty member at SUNY Buffalo (1990–1999) before joining Boston College as an associate professor in 1999. He specializes in fields such as nanostructured thermoelectric materials, non-noble-metal catalysts for water electrolysis, novel semiconductor boron arsenide single crystals with ultrahigh thermal conductivity and carrier mobility, sodium nanofluid for enhanced oil recovery and cleaning, superconductor levitated super system for energy transport and storage and people/goods transport.

    The post Boron arsenide single crystals with ultrahigh thermal conductivity and carrier mobility appeared first on Physics World.

    ]]>
    Webinar Available to watch now, Physics World explores the potential of this “wonder material” for semiconductor devices of the future https://physicsworld.com/wp-content/uploads/2023/05/2023-06-28-webinar-image-1.jpg
    The physics of languages https://physicsworld.com/a/the-physics-of-languages/ Tue, 13 Jun 2023 10:00:11 +0000 https://physicsworld.com/?p=107888 Marco Patriarca, Els Heinsalu and David Sánchez explain how physics has spread into the field of linguistics

    The post The physics of languages appeared first on Physics World.

    ]]>
    If the world of research were an ecosystem, with scientists of different disciplines representing different species, then physicists would be classified as invaders. After all, they’ve spread their methods and tools to many other fields over the years, infiltrating not only other natural sciences but the social sciences too.

    Born from this invasion, interdisciplinary fields such as sociophysics and econophysics have been developed, in which mathematical models from physics are applied to social contexts, including traffic, crowds and financial markets. These new areas – and the models involved – are part of what’s known as complex systems theory. It concerns systems composed of many elements that interact with each other, producing a collective behaviour that could not be understood if the properties of the individual components were considered in isolation.

    But while sociophysics and econophysics are now recognized disciplines, the application of physics to linguistics – known as language dynamics – is less familiar. In fact, we have come across referee reports that have questioned the seriousness of research papers by physicists on this topic. So if you’re one of these strange physicists studying problems that traditionally are part of linguistics, how should you react? Well, swearing is one option.

    Spreading the word

    Swear words are in fact a great example of how physics can be applied to linguistics. In 2011 a group of physicists from the Niels Bohr Institute in Denmark and Kyushu University in Japan studied how Japanese swear words spread across the country (Phys. Rev. E 83 066116). This is a relatively simple scenario because it involves no linguistic complexity – there’s no grammar, syntax or phonetics to complicate the picture. It’s just a question of how words disperse, which is very similar to standard diffusion. Furthermore, the thin and long shape of Japan’s main islands gives it a quasi-one-dimensional geography, greatly simplifying the diffusion process.

    The study analysed linguistic maps (built from past surveys) for 21 swear words, revealing that the majority had spread out from the city of Kyoto, which is often regarded as the cultural capital of Japan. In fact, most of the words studied were found north and south of Kyoto along wave fronts located at comparable distances (figure 1). Using a simple cultural diffusion model – in which a linguistic innovation expands randomly in all directions until it meets and possibly replaces (with a given probability) older innovations – the team was able to reproduce not only the spatial distributions of the 21 swear words but also the variable distances between consecutive wave fronts.
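
    A toy version of such a model needs only a few lines of code. The sketch below in Python is an illustration written for this piece, not the code used in the published study: the word labels, lattice size and probabilities are all invented. Innovations appear at a central origin at different times on a one-dimensional lattice and spread outward site by site, with a newer word displacing an older one, where they meet, with a fixed probability; the result is the same pattern of concentric bands seen in figure 1.

```python
import numpy as np

# Toy 1D lattice: site 0 is the cultural centre (think Kyoto), higher sites lie
# progressively further away along the quasi-one-dimensional geography.
rng = np.random.default_rng(0)

n_sites = 1200                         # lattice sites
words = ["word A (oldest)", "word B", "word C", "word D (newest)"]
birth_times = [0, 250, 500, 750]       # step at which each innovation appears at site 0
replace_prob = 0.8                     # chance a newer word displaces an older one on contact
steps = 1000

state = np.full(n_sites, -1)           # -1 means "no word here yet"

for t in range(steps):
    # new innovations appear at the origin
    for w, t_birth in enumerate(birth_times):
        if t == t_birth:
            state[0] = w
    # every occupied site tries to pass its word one site further out
    for i in range(n_sites - 2, -1, -1):
        w = state[i]
        if w == -1:
            continue
        target = state[i + 1]
        if target == -1 or (target < w and rng.random() < replace_prob):
            state[i + 1] = w

# the final state shows concentric bands: the newest word sits near the origin,
# progressively older words survive further away
for w, name in enumerate(words):
    sites = np.where(state == w)[0]
    if sites.size:
        print(f"{name}: sites {sites.min()} to {sites.max()}")
```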

    For example, figure 1 shows that the swear word “aho” (dumb) is predominantly used in Kyoto, while “baka” (stupid person) is more prevalent in the capital, Tokyo. This discrepancy is a consequence of the propagation of linguistic innovations that appeared in the Kyoto area at different times, rather than a cultural competition between the two big cities. In other words, “baka” used to exist in Kyoto but at some point was overrun by the new “aho”, which has not yet reached Tokyo.

    1 Rings of swear words

    Map of Japan showing spreading rings from Kyoto

    A map showing the spread of Japanese swear words (see key). Each word is represented by wave fronts (circles) that centre on the city of Kyoto (marked by the small ring). It is noticeable how each word spreads at equal rates in all directions and that the newer the word, the smaller the spread from Kyoto. The area of word usage, meanwhile, grows with increasing distance from Kyoto. The study also revealed how new words replaced old ones rather than existing alongside them. “Baka”, for example, used to exist in Kyoto but had been overrun by the newer “aho”, which had not yet reached Tokyo. The words “aho” and “baka” are shown with dashed lines.

    Scenarios that resemble the typical reaction-diffusion wave fronts seen in figure 1 are also found in ecology, representing, for example, a species colonizing a new territory or invading an area already occupied by another species. In fact, there are many mathematical similarities between language dynamics and ecology.

    Dispersal and evolution

    Besides words and idiomatic expressions, language features – such as phonetic traits and different syntactic structures – can spread too. This process can take place across a population, when a linguistic innovation appears within the language community, or through contact between communities with different languages, so it is not necessarily connected to human migration. The spreading of a language as a whole, by contrast, usually goes hand in hand with the migration of a language community, which is when speakers of one language move to a location where a different language is spoken. As a result, the region can end up with a bilingual community, or the original language can even become extinct.

    Furthermore, language dispersal typically occurs in parallel with language evolution, possibly causing it to split into dialects. This is similar to what happens in biology when a group within a species separates from the other members, developing its own characteristics and becoming a new species.

    Take, for example, the Mazatec languages – a group of closely related indigenous languages spoken by about 240,000 people in areas of northern Oaxaca in south-east Mexico. This is an important case study because the Mazatec languages exhibit on a small scale a level of diversity and complexity that is comparable to larger groups, such as the Romance languages in Europe (which include French, Spanish and Italian).

    When the Mazatec people migrated from their homeland, they first spread through the lowlands before moving across the mountains of the Sierra Mazateca region. What is peculiar is that some languages spoken in the lowlands are more similar to other languages in the mountains than they are to other lowland languages that are geographically closer.

    To work out why this is the case, a 2019 study involving two of the present authors (Marco Patriarca and Els Heinsalu) used a simple spreading-evolution model (Complexity Applications in Language and Communication Sciences 10.1007/978-3-030-04598-2_9). The work showed that the two-dimensional geography of the lowlands makes languages diffuse more slowly, so a larger diversity is observed because there is more time for mutations to appear and spread. In contrast, the quasi-one-dimensional nature of the valleys connecting the lowlands to the mountains forces a faster dispersal, leaving little time for mutations to take hold.

    Another interesting example of language dispersal and evolution involves the Numic languages – a group of seven languages spoken by Native Americans in the western US. In this case, it is believed that the proto-Numic language evolved into multiple varieties within the Numic homeland (the southern part of the Sierra Nevada mountain range and Death Valley). Then, as speakers spread rapidly across the Great Basin, a fan-like distribution of the different languages emerged (figure 2a).

    2 Fanning out

    Map of south-west US with language areas marked

    Black and white diagram showing a fan-shaped dispersal

    The seven Numic languages, spoken by Native Americans in the western US, spread out from the Numic homeland and across the Great Basin in a fan-like distribution, as demonstrated in (a), a map built from field data. A team at the Universidade Federal de Pernambuco in Brazil reproduced this structure using a minimal model that simultaneously describes the diffusion and evolution of languages (b).

    In 2006 researchers from the Universidade Federal de Pernambuco in Brazil were able to reproduce this pattern using computer simulations (Physica A 361 361). Using a minimal model of a population reproducing and expanding across a territory, they showed that its language changes and splits into different ones, leading to a spatial distribution of languages that is similar to the one observed for the Numic case study (figure 2b).
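
    The flavour of such a spreading-evolution model can be captured in a short sketch (a toy version, not the model of the Physica A paper; the grid size, mutation rate and bit-string representation of a language are invented for illustration): a population expands outward from a homeland cell, each newly colonized cell inherits its parent's language, and occasional mutations accumulate along the way, so branches that diverge geographically also diverge linguistically.

```python
import random

random.seed(1)

SIZE = 41                  # SIZE x SIZE grid of habitable cells
HOME = (SIZE // 2, 0)      # homeland on one edge; expansion fans out from here
MUT_PROB = 0.05            # chance that a colonization step introduces one innovation
BITS = 16                  # a language is represented as a string of BITS features

occupied = {HOME: [0] * BITS}          # cell -> language feature vector
frontier = [HOME]

def neighbours(cell):
    x, y = cell
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < SIZE and 0 <= ny < SIZE:
            yield (nx, ny)

# expansion: a randomly chosen frontier cell colonizes an empty neighbour,
# passing on a (possibly mutated) copy of its language
while frontier:
    cell = random.choice(frontier)
    empty = [n for n in neighbours(cell) if n not in occupied]
    if not empty:
        frontier.remove(cell)
        continue
    child = random.choice(empty)
    lang = occupied[cell][:]
    if random.random() < MUT_PROB:
        lang[random.randrange(BITS)] ^= 1     # a linguistic innovation
    occupied[child] = lang
    frontier.append(child)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# linguistic distance from the homeland language grows, on average, with the
# length of the colonization path, producing a fan of related but distinct varieties
print("distance next to the homeland:", hamming(occupied[HOME], occupied[(SIZE // 2, 1)]))
print("distance at the far corner:   ", hamming(occupied[HOME], occupied[(SIZE - 1, SIZE - 1)]))
```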

    Evolution versus competition

    Language evolution is an important and complex side of language dynamics. The corresponding mathematical description is mostly inspired by ecological and genetic evolution models, but also those used in social sciences, such as game theory. Some language-evolution models are abstract and focus on the statistics of the evolution process, whereas others take into account the rules known from linguistics, such as the phonetic laws describing the evolution of sounds.

    On short timescales, however, we can neglect evolution, and consider languages as fixed species competing for speakers. In language competition, which was one of the first topics discussed in language dynamics, we do not have just two competing species – i.e. monolinguals speaking language A or B. Instead, there are also bilinguals, who speak both languages. So, could a language be likened to a parasite that can coexist with another within a host?

    Another interpretation could be that speakers are nodes of a temporal network, and the language used by speakers to communicate is a link between two nodes. Now, if we have a multilingual community, speakers have to decide which language to use where, with whom and for what. In order to understand how a minority language survives or goes extinct depending on the speaker’s choices, one can employ models of language use, possibly also incorporating information from real situations (PLOS ONE 16 e0252453).
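
    To give a concrete flavour of such models, the sketch below integrates the best-known two-state case, the Abrams–Strogatz model of language competition, in which monolingual speakers of A and B switch according to the languages' prestige. Bilinguals, social networks and real usage data, as in the studies cited above, are deliberately left out, and the prestige value and starting fraction are illustrative rather than fitted.

```python
def abrams_strogatz_rate(x, s, a=1.31, c=1.0):
    """Rate of change of the fraction x of A-speakers in the Abrams-Strogatz model.

    s is the relative prestige of language A (0 < s < 1) and a is the
    'volatility' exponent, empirically estimated at about 1.31.
    """
    gain = (1 - x) * c * x**a * s           # B-speakers switching to A
    loss = x * c * (1 - x)**a * (1 - s)     # A-speakers switching to B
    return gain - loss

# simple Euler integration of the population-level equation
dt, steps = 0.01, 100_000
x = 0.4          # initial fraction of A-speakers
s = 0.55         # A is slightly more prestigious (an illustrative value)
for _ in range(steps):
    x += dt * abrams_strogatz_rate(x, s)

print(f"final fraction speaking A: {x:.3f}")   # tends to 1: language B dies out
```

    In this bare two-state version one language almost always drives the other to extinction; the appeal of the richer models mentioned above, with bilinguals, networks and real usage data, is precisely that they can change this outcome.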

    Language complexity and levels of description

    In the case of languages and linguistic analysis, there are two different types of complexity. One is related to structural features of the language itself, such as phonemes, morphemes and lexical stems, while the other is linked to social interactions, such as conversations in person or communication through social networks. These different aspects of complexity can be described in terms of interacting complex networks. Unifying the two dimensions of language complexity – the structural and the social – is a major challenge for mathematical modelling of languages.

    Furthermore, the modelling can be done at different levels of detail. In terms of structural complexity, one can study units of sounds (phonology), words (morphology), sentences (syntax) and global meaning (semantics). As for social complexity, one can consider it at a macroscopic level in terms of the population sizes of the language communities; at a mesoscopic level, in which noise and disorder are added to the mathematical description; or at the detailed microscopic level of individuals, in which single speakers are simulated, taking into account their diversity and random fluctuations.

    Collecting information about languages

    Traditionally, information about natural languages – those that have developed simply through humans speaking them – is collected through field work, as was the case for the studies of the Mazatec and Numic languages. It involves interviewing speakers to document the language, focusing on spoken language rather than written texts. The data can then be analysed statistically using different mathematical methods and tools to estimate the linguistic distances between languages and to reveal language groups.

    However, written texts offer another view of language, and their analysis is another part of linguistics that physicists can get involved in. By applying statistical physics, one can reveal regularities and statistical laws, such as Zipf’s law of brevity, which states that more frequently used words tend to be shorter. Today this type of study can be carried out using numerical tools and fast computers, allowing easy analysis of large digital databases. In this way, the brevity law has been shown to hold for around 1000 languages from 80 different linguistic families.
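
    Testing a law like this on a digitized text takes only a few lines of code. The sketch below (the corpus file name is a placeholder; any plain-text file will do) counts how often each distinct word occurs and computes the rank correlation between a word's frequency and its length; under the brevity law the correlation comes out negative.

```python
import re
from collections import Counter

from scipy.stats import spearmanr

# any plain-text corpus will do; "corpus.txt" is just a placeholder file name
with open("corpus.txt", encoding="utf-8") as f:
    text = f.read().lower()

words = re.findall(r"[a-zà-öø-ÿ']+", text)     # crude tokenizer, good enough for a sketch
freq = Counter(words)

items = list(freq.items())
frequencies = [count for _, count in items]
lengths = [len(word) for word, _ in items]

rho, p_value = spearmanr(frequencies, lengths)
print(f"{len(items)} distinct words")
print(f"rank correlation between frequency and length: {rho:.3f} (p = {p_value:.2g})")
# a clearly negative correlation is the signature of Zipf's law of brevity:
# the more often a word is used, the shorter it tends to be
```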

    The rise of social media has also opened up new ways of collecting linguistic data. People on Twitter, for example, communicate in real time, providing lots of data in the form of millions of geotagged posts from across the world and in a plethora of languages. Such data may be somewhat biased – users are predominantly young (12–34) and male – but they do contain a lot of interesting and intriguing information.

    In one recent study, physicists from the Institute for Cross-Disciplinary Physics and Complex Systems (IFISC) in Spain (including author David Sánchez) built a high-resolution linguistic picture of various multilingual regions to capture the diversity of these societies and understand what drives language extinction (Phys. Rev. Research 3 043146). To do this, they used a dataset of 100 million Tweets, which were collected between 2015 and 2019 in 16 countries and regions. Language and location attribution was done automatically – with Twitter providing the location and the researchers using automatic tools to determine language – allowing the team to calculate the proportion of speakers in a given language and geographical location.
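
    The core of that calculation is simple to sketch. The toy code below (the sample records and field layout are invented; the real study used 100 million tweets and far finer bookkeeping) bins geotagged posts into grid cells and works out, for each cell, the share of posts written in each language.

```python
from collections import Counter, defaultdict

# each record: (latitude, longitude, language code) -- invented sample data
posts = [
    (50.85, 4.35, "fr"), (50.88, 4.33, "nl"), (51.05, 3.72, "nl"),
    (50.46, 3.95, "fr"), (41.39, 2.17, "ca"), (41.40, 2.15, "es"),
]

CELL = 0.1   # cell size in degrees (the published maps used cells of roughly 100 km2)

def cell_of(lat, lon):
    return (int(lat // CELL), int(lon // CELL))

counts = defaultdict(Counter)
for lat, lon, lang in posts:
    counts[cell_of(lat, lon)][lang] += 1

# share of posts in each language, cell by cell
for cell, langs in sorted(counts.items()):
    total = sum(langs.values())
    shares = {lang: round(n / total, 2) for lang, n in langs.items()}
    print(cell, shares)
```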

    3 Mixing monolinguals

    Four maps – two of Belgium and two of Catalonia – coloured in shades of yellow and purple

    Belgium (top) and Catalonia (bottom) are both multilingual territories that contain monolingual communities. By analysing Tweets collected between 2015 and 2019, researchers were able to calculate the proportion of speakers of a given language in 100 km2 areas (the squares) for both countries. These maps show the proportions of (a) French, (b) Dutch, (c) Catalan and (d) Spanish speakers, where dark purple means there are no speakers of that language, and yellow indicates areas where only that language is spoken. Black means there were insufficient Tweets from that area to constitute a data set.

    In Belgium, there is an obvious language divide between the regions of Flanders in the north, where Dutch is dominant, and Wallonia in the south, where French is mostly spoken. Most of the language mixing happens at the border (indicated by a black line). Brussels is also marked on the map and shows a concentration of French speakers. In contrast, the Catalonia map shows much more widespread mixing, with a slight difference between the central countryside and the large coastal cities in the east.

    Countries such as Belgium and Switzerland were found to have monolingual communities separated by clear-cut boundaries, whereas in regions such as Catalonia speakers of the various languages appeared to be mixed (figure 3). The reasons behind these different distributions are mainly historical, but are also related to language similarity and prestige.

    Tweets can also provide valuable information about geographical lexical variation – when different words and phrases are used in different areas to refer to the same thing. One approach is to create a list of word variations for the same definition and study their spatial occurrences using clustering algorithms. By looking for similarities among elements with given lexical features and seeing how they form clusters, it is possible to find the dialect areas. Alternatively, one can take into account all words in the dataset of Tweets, not only those that show alternate forms.

    In a recent study, the group at IFISC used this second method to map cultural regions in the US (Humanities and Social Sciences Communications 10 133). Based on the idea that cultural affiliation can be inferred from the topics that people discuss, the researchers looked at the frequency distributions of words in geotagged Tweets to find regional hotspots for them. From there, they were able to derive the main clusters of topical variation (figure 4).
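
    The clustering step can be sketched as follows (a toy example with made-up regions and word counts, using an off-the-shelf k-means routine rather than the specific method of the published study): each region is represented by its word-usage profile, and regions with similar profiles end up in the same cluster.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

# rows: regions (e.g. counties); columns: counts of a few selected words -- made-up numbers,
# with the columns standing for words such as "y'all", "hella", "wicked", "pop", "soda"
regions = ["county A", "county B", "county C", "county D"]
counts = np.array([
    [120,   2,   1,  30,  10],
    [110,   1,   3,  25,  12],
    [  2,  90,   1,   5,  80],
    [  1,  85,   2,   4,  75],
], dtype=float)

# normalize each row so clusters reflect word-usage profiles, not how much a region tweets
profiles = normalize(counts, norm="l1")

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)
for region, label in zip(regions, kmeans.labels_):
    print(region, "-> cluster", label)
```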

    4 Word culture

    Map of US and key list of words

    By looking at the contents of nine billion geotagged Tweets posted in the US from 2015 to 2021, researchers were able to build frequency distributions of words to find regional hotspots corresponding to their usage. From these words, and therefore the topics people discussed, the team was able to map out cultural regions (a). Each segment is a county, and those coloured white did not have enough data. The topics most frequently discussed in the different regions are: cuisine, fashion, music (blue); sports, school (yellow); nature, weather, outdoor activities (green); urban life, immigration, violence (red); self-reference, Hispanic culture (cyan). The most specific words for each region are displayed in (b).

    Although such studies of social-media data can provide linguistic pictures over short time periods, the future challenge will be to track how languages, and the competition between them, evolve in time. In doing so, however, one has to remember that the way languages are used in real life and online can be different. Most online information is in English, but offline it is only the third most spoken native language, with Chinese far more common (even though it accounts for less than 2% of online language). Furthermore, many existing languages are not represented on online communication platforms at all.

    Linguistic diversity

    When we give talks to non-physicists about language dynamics, we have noticed that the link between physics and linguistics always triggers lively discussions. Language, of course, concerns us all. Sadly, however, many languages are dying. It has been estimated that 50–90% of the approximately 6000 languages spoken currently in the world will disappear by the end of this century. In other words, on average about every two weeks a language dies. Interestingly, areas with high biodiversity also have high linguistic diversity, and the loss of both occurs in parallel.

    Returning to the notion of the research world as an ecosystem, we hope that physics spreading into linguistics does not have a negative effect, as is usually the case when one species invades another. Instead, we foresee a fruitful symbiosis, in which the tools and models of complex systems theory let us understand the mechanisms that drive how languages evolve and spread. This knowledge is not only interesting scientifically but also has a clear social impact that may help shape and support societies that are linguistically more inclusive.

    The post The physics of languages appeared first on Physics World.

    ]]>
    Feature Marco Patriarca, Els Heinsalu and David Sánchez explain how physics has spread into the field of linguistics https://physicsworld.com/wp-content/uploads/2023/06/2023-06-Patriarca-world-map-speech-bubbles-498555620-iStock_Qvasimodo.jpg newsletter
    Novel AI tool predicts colorectal cancer survival from pathology images https://physicsworld.com/a/novel-ai-tool-predicts-colorectal-cancer-survival-from-pathology-images/ Tue, 13 Jun 2023 08:30:20 +0000 https://physicsworld.com/?p=108395 AI tool uses histopathology images to predict prognosis, assess tumour aggression and suggest optimal treatments for patients with colorectal cancer

    The post Novel AI tool predicts colorectal cancer survival from pathology images appeared first on Physics World.

    ]]>
    Colorectal cancer (CRC) is the second most common cause of cancer death in the United States, leading to about 53,000 deaths each year. CRC is a complex and multifaceted disease that poses significant challenges in diagnosis, prognosis and treatment selection. In a study reported in Nature Communications, Kun-Hsing Yu and his team at Harvard Medical School developed the Multi-omics Multi-cohort Assessment (MOMA) system, a machine learning framework for the prediction of prognosis in colorectal cancer patients.

    The researchers showed that histopathology images can be used as reliable predictors of multi-omic aberrations: genomic, epigenomic, transcriptomic and proteomic abnormalities. “These findings hold great promise for improving patient outcomes and changing the way we approach colon cancer treatment,” says Yu.

    Current methods for assessing CRC prognosis combine histopathological, genomic and clinical data analysis, with histopathological examination remaining the gold standard for diagnosis. Treatment selection for CRC is based on histology subtypes and genetic variations. However, inter-rater variation in histopathological diagnosis has been reported, and genomic analysis can take days or even weeks to complete and is not available in all clinics, particularly in developing countries. Such limitations may prevent CRC patients from receiving timely and appropriate treatments.

    Yu and his research team recognized that human analysis of histopathology images cannot unlock all of the valuable information they contain about the disease. MOMA uses advanced image analysis techniques and machine learning (ML) algorithms to bridge the gap between the visual inspection of tumours and the molecular aberrations found in tumour cells. The ML framework successfully predicts the prognoses of early-stage colorectal cancer patients and identifies the genomic and proteomic status of cancer samples from histopathology images.

    The researchers developed a deep-learning model that can extract features and patterns from digitized histopathology images, enabling the identification of important cell features associated with certain conditions. They trained their model using information obtained from 1888 colorectal cancer patients from diverse populations.
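
    Stripped of the deep-learning machinery, the overall recipe can be illustrated with a small sketch (the data are entirely synthetic, and a plain logistic regression stands in for MOMA's far more sophisticated models): tile-level image features are pooled into one vector per patient, and a classifier is trained to predict a molecular label such as the presence of a particular genomic alteration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# hypothetical data: 200 patients, 50 image tiles each, 32 features per tile
n_patients, n_tiles, n_features = 200, 50, 32
tile_features = rng.normal(size=(n_patients, n_tiles, n_features))
labels = rng.integers(0, 2, size=n_patients)       # e.g. a genomic alteration, present or not
tile_features[labels == 1, :, 0] += 0.3            # plant a weak signal so there is something to learn

# aggregate tile-level features into one patient-level vector (simple mean pooling)
patient_features = tile_features.mean(axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    patient_features, labels, test_size=0.3, random_state=0, stratify=labels
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]
print("toy AUC for predicting the marker from image features:",
      round(roc_auc_score(y_test, scores), 3))
```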

    In tests on unseen images, the histopathology-based model showed remarkable accuracy in predicting a wide variety of molecular aberrations, including genomic alterations and transcriptomic variations. In addition, the AI model predicted patients’ survival outcomes, pointing to the potential of histopathology image features as prognostic indices.

    The team suggests that the ML model could also facilitate the development of personalized medicine, with the integration of histopathology images and quantitative omics data opening exciting avenues for personalized cancer treatments. This approach holds great promise for optimizing treatment options, minimizing side effects and improving overall patient outcomes.

    With the development of sophisticated algorithms and image processing techniques, AI tools for histopathology image analyses can be integrated into routine clinical practice. In the future, oncologists, pathologists and other healthcare professionals could use this valuable tool to improve diagnostic accuracy, predict response to treatment and tailor treatments for colorectal cancer patients.

    The post Novel AI tool predicts colorectal cancer survival from pathology images appeared first on Physics World.

    ]]>
    Research update AI tool uses histopathology images to predict prognosis, assess tumour aggression and suggest optimal treatments for patients with colorectal cancer https://physicsworld.com/wp-content/uploads/2023/06/MOMA-Yu-et-al.jpg newsletter1
    Electrocatalysis for the sustainable production of fuels and chemicals https://physicsworld.com/a/electrocatalysis-for-the-sustainable-production-of-fuels-and-chemicals/ Mon, 12 Jun 2023 14:14:04 +0000 https://physicsworld.com/?p=107458 Available to watch now, The Electrochemical Society, in partnership with Element Six, explores efforts to enable a future paradigm involving sustainable chemical processes for producing fuels and chemicals based on renewable resources

    The post Electrocatalysis for the sustainable production of fuels and chemicals appeared first on Physics World.

    ]]>

     

    Modern society relies on large-scale chemical processes for producing fuels and chemicals that drive many key sectors, including transportation, agriculture, and manufacturing, among others. To date, fossil resources have served as the primary feedstock and provided the vast majority of the energy demanded by the global economy. There are many challenges to the current paradigm, as modern processes are generally not sustainable, and while they provide for billions, there are billions of others who have minimal access to the modern energy system.

    This talk describes efforts to enable a future paradigm involving sustainable chemical processes for producing fuels and chemicals based on renewable resources. Examples include hydrogen (H2) production from water, CO2 conversion to carbon-based fuels and chemicals, and ammonia production from N2 and/or nitrate.

    A key focus of this webinar is the fundamental design and development of catalyst systems that can execute desired chemical transformations with high activity, selectivity and durability, plus the integration of such catalysts into devices that can achieve high performance, paving the way for new, sustainable technologies.

    An interactive Q&A session follows the presentation.

    Thomas Francisco Jaramillo is an associate professor of chemical engineering and of energy science and engineering at Stanford University, with a faculty appointment in photon science at SLAC National Accelerator Laboratory. He serves as director of the SUNCAT Center for Interface Science and Catalysis, a joint partnership between Stanford and SLAC. Prof. Jaramillo’s research efforts are aimed at developing catalyst materials and new processes to improve sustainability in the energy and chemical sectors. A key emphasis is engineering catalyst materials at the nano- and atomic scale to induce desired properties, and then designing and developing new technologies that employ them. Examples include electrified processes to convert water, N2 and CO2 into valuable molecular products such as hydrogen (H2), ammonia-based fertilizers and carbon-based products (e.g. fuels, plastics) for use in transportation, agriculture, energy storage and the chemical industry, among others. The overarching theme is the development of cost-effective, clean energy technologies that can benefit society and provide for economic growth in a sustainable manner.

    Thomas has authored more than 200 publications in peer-reviewed literature in these areas. His efforts have earned him a number of honours and awards including the 2021 Paul H Emmett Award in Fundamental Catalysis from the North American Catalysis Society; the 2014 Resonate Award from the Resnick Institute; 2011 Presidential Early Career Award for Scientists & Engineers; 2011 U.S. Department of Energy Hydrogen and Fuel Cell Program Research & Development Award; 2011 National Science Foundation (NSF) CAREER Award; and 2009 Mohr-Davidow Ventures Innovator Award. He is on the annual list of Highly Cited Researchers by Clarivate Analytics, ranking in the top 1% by citations (2018–present).

    Thomas is from Carolina, Puerto Rico, earning a BS in chemical engineering at Stanford University and MS and PhD degrees in chemical engineering at the University of California, Santa Barbara (UCSB). He then pursued post-doctoral research as the Hans Christian Ørsted Postdoctoral Fellow at the Technical University of Denmark, Department of Physics, prior to joining the Stanford faculty.

    
    

    The post Electrocatalysis for the sustainable production of fuels and chemicals appeared first on Physics World.

    ]]>
    Webinar Available to watch now, The Electrochemical Society, in partnership with Element Six, explores efforts to enable a future paradigm involving sustainable chemical processes for producing fuels and chemicals based on renewable resources https://physicsworld.com/wp-content/uploads/2023/05/2023-06-28-webinar-image.jpg
    Electron-hole symmetry in quantum dots shows promise for quantum computing https://physicsworld.com/a/electron-hole-symmetry-in-quantum-dots-shows-promise-for-quantum-computing/ Sun, 11 Jun 2023 14:31:48 +0000 https://physicsworld.com/?p=108372 Bilayer graphene devices could find widespread use as qubits and sensors

    The post Electron-hole symmetry in quantum dots shows promise for quantum computing appeared first on Physics World.

    ]]>
    Several unique phenomena that could benefit quantum computing have been observed in quantum dots made from bilayer graphene. The research was done by Christoph Stampfer at RWTH Aachen University and colleagues in Germany and Japan, who showed how the structure can host an electron in one layer and a hole in the other. What is more, the quantum spin states of these two entities are near perfect mirrors of each other.

    A quantum dot is a tiny piece of semiconductor with electronic properties that are more like an atom than a bulk material. For example, an electron in a quantum dot is excited into a series of quantized energy levels – much like in an atom. This is unlike a conventional solid, in which electrons are excited into a conduction band. This atom-like behaviour can be fine-tuned by adjusting the size and shape of the quantum dot.
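
    To get a feel for the numbers behind this size-tuning, the back-of-the-envelope sketch below evaluates the textbook particle-in-a-box energy levels for an electron confined to dots of different sizes. It is a generic estimate using the free-electron mass, not a calculation for graphene, whose band structure is more subtle, but it shows the key point: the smaller the dot, the larger the spacing between levels.

```python
import numpy as np

hbar = 1.054571817e-34       # reduced Planck constant, J s
m_e = 9.1093837015e-31       # free-electron mass, kg (an illustrative choice)
eV = 1.602176634e-19         # J per eV

def box_levels(L, n_max=3, m=m_e):
    """Particle-in-a-box energies E_n = n^2 pi^2 hbar^2 / (2 m L^2), in eV."""
    n = np.arange(1, n_max + 1)
    return n**2 * np.pi**2 * hbar**2 / (2 * m * L**2) / eV

for L_nm in (100, 50, 10):
    E = box_levels(L_nm * 1e-9)
    print(f"L = {L_nm:3d} nm: E1 = {E[0]:.2e} eV, spacing E2 - E1 = {E[1] - E[0]:.2e} eV")

# energies scale as 1/L^2, so halving the dot size quadruples the level spacing --
# this is the sense in which size tunes the 'artificial atom'
```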

    A quantum dot can be made using tiny pieces of graphene, which is a sheet of carbon just one atom thick. Such quantum dots can be made of just one sheet of graphene, two sheets (bilayer graphene) or more.

    Interesting spin qubits

    One promising application of graphene quantum dots is to create quantum bits (qubits) that store quantum information in the spin states of electrons. As Stampfer explains, the development of graphene quantum dots has important implications for the development of quantum computers. “Graphene quantum dots, first recognized in 2007, emerged as interesting hosts for spin qubits, which can employ both electron and hole quantum dots to facilitate long-range coupling,” he says. Holes are particle-like entities that are created in a semiconductor when an electron is excited. “This breakthrough has laid the foundation for a promising quantum computing platform based on solid-state spin qubits,” he adds.

    Now, Stampfer and colleagues have pushed the idea further by fabricating quantum dots from bilayer graphene. Here, each graphene layer functions as an individual quantum dot, but closely interacts with its counterpart in the other layer.

    Bilayer graphene can trap electrons and holes when an external voltage is applied across the layers – creating a unique gate structure. Following recent efforts to reduce disorder in bilayer graphene’s crystal structure, Stampfer’s team has now reached a new milestone in this line of research.

    Gate tunability

    “In 2018, this approach first made it possible to fully utilize the unique electric-field-induced band gap in bilayer graphene to confine single charge carriers,” Stampfer explains. “By further improving the gate tunability, it is now possible to make quantum dot devices that go beyond what can be done with other quantum dot materials, including silicon, germanium or gallium arsenide.”

    A key advantage of bilayer structures is the properties of the spin states of the quantum dot’s electrons and holes. Through their experiments, the team discovered that the states of the individual electrons and holes in one of the graphene layers are almost perfectly mirrored in the pair found in the other layer.

    “We show that bilayer graphene electron-hole double quantum dots have an almost perfect particle-hole symmetry,” Stampfer continues. “This allows for transport through the creation and annihilation of single electron-hole pairs with opposing quantum numbers.”

    These results could have important implications for quantum computing systems that use electron-spin qubits. This is because it should be possible to couple such qubits together over longer distances, while reading out their symmetric spin states more reliably. This could ultimately enable quantum computers to become far more scalable, sophisticated and resistant to errors than existing designs.

    Stampfer’s team also envisages many possible applications beyond quantum computing, predicting that bilayer graphene quantum dots could provide a basis for nanoscale detectors of terahertz waves, and could even be coupled to superconductors to create efficient sources of entangled pairs of particles.

    In their future research, the team will aim to delve deeper into the capabilities of bilayer graphene quantum dots, potentially bringing their widespread application in quantum technologies a step closer.

    The research is described in Nature.

    The post Electron-hole symmetry in quantum dots shows promise for quantum computing appeared first on Physics World.

    ]]>
    Research update Bilayer graphene devices could find widespread use as qubits and sensors https://physicsworld.com/wp-content/uploads/2023/06/Graphene-bilayer-quantum-dot.jpg
    Creative licence: how to stimulate innovation and generate new ideas https://physicsworld.com/a/creative-licence-how-to-stimulate-innovation-and-generate-new-ideas/ Sat, 10 Jun 2023 10:00:37 +0000 https://physicsworld.com/?p=108172 Dennis Sherwood on the creative process and how to grow a good idea

    The post Creative licence: how to stimulate innovation and generate new ideas appeared first on Physics World.

    ]]>
    Illustration of someone exchanging a light bulb for a bag of money

    Most people would agree that creativity plays an important role in a successful scientific career. But what exactly do we mean by creativity? And how can scientists become more innovative? Physicist, consultant and author Dennis Sherwood has made it his life’s work to delve into the creative process, and help others tap into their own creativity. He runs Silver Bullet Machine – a consultancy that helps companies solve problems, generate and implement new ideas, and grasp new opportunities. Sherwood is also the author of Creativity for Scientists and Engineers: a Practical Guide (published by IOP Publishing, which also produces Physics World), which includes a number of strategies to increase scientific creativity. The work was named best “Specialist Business Book” at the 2023 Business Book Awards.

    How do you define creativity in science and how does one go about developing it? We know it when we see it, but it’s a struggle to define it

    The question, “What is creativity?” has kept philosophers busy for centuries. In many ways it is a difficult question, but to me creativity is just having an idea. It’s as simple as that. An idea, of course, is something imaginary. It happens within one’s own head. It’s a vision of a future that doesn’t exist yet. So whenever you do have an idea, you are being creative. And, of course, that’s enormously valuable and pervades everything that a scientist does.

    Ideas are often easy to come by, but it can be harder to make them work. Do you separate out the “Eureka” moment from what’s involved in getting something to happen?

    Yes, there’s a real distinction between creativity, which is having the idea in the first place, and innovation, which is shaping that idea into something real. So I might have a fantastic idea for a better mousetrap or a light bulb. I’ll get excited about that and I’ll drive my wife mad talking about it. But until I can build that better mousetrap or make the light bulb work, it’s just an idea in my head. So creativity is step one in a four-step process. The second step is evaluation. Does that idea have any kind of legs? Is it worth spending time, emotional energy, money and other resources to take things further? Stage three is development – solving all of the problems to make it work. The fourth step is implementation – for example, getting a paper published, a piece of music played at a concert, or a product brought to market.

    Creativity is embedded in each of those steps. And one of the great examples, of course, is the light bulb itself. I think the first observation that the passage of electricity causes a light effect goes way back to the 1700s. In 1802 Humphry Davy was the first person to create the incandescent light bulb by using a platinum wire connected to a powerful battery. But it wasn’t until many years later that Thomas Edison patented his working light bulb in 1880. So in those nearly 80 years scientists were busy addressing all of the problems that arose from the fundamental idea, and much creativity was needed, for example to design a vacuum pump so that you could extract the air from the envelope of the bulb, so the filament wouldn’t burn up.

    In your book, you look at some specific examples of creativity in physics. Could you tell us about those?

    Physicists can benefit from creativity all over the place. If you are a researcher, you need to come up with the “big idea” for your research project, get a grant or persuade a company to finance it. You’re then into the actual work itself, where you will need to solve all the different problems that crop up. If you are building a team, then you need to be inventive in ensuring everyone works well together. If you are a physics teacher, creativity is enormously valuable in thinking of more exciting ways to put complex concepts across to students.

    And creativity is also something you need on a personal level too. For example, when I attend conferences I’m usually happy by myself. Although one of the reasons for going to a conference is to network and to meet people, for many years I was too shy to go up to someone and introduce myself – I just couldn’t do it. So I needed to have the idea in my head that it was something I should try at the very least. When I recognized that, and started talking to people at conferences, I found that most were pretty civil and spoke to me nicely. All those fears that I had of rejection went away.

    Dennis Sherwood receiving an award for his book

    Of course creativity plays a vital role in physics itself, right from the days of Archimedes, who famously had to determine the volume of an irregular object in the form of a gold crown that perhaps had some silver substituted in it. While Archimedes understood the concept of density, and could measure the crown’s total weight, the problem was figuring out how to measure the volume of such an irregularly shaped object. Inspiration is said to have struck him in the bath, when he noticed the displacement of the water as he got in, and the original Eureka moment was born.

    Another great example comes from the creativity displayed by Johannes Kepler, as he looked at Tycho Brahe‘s data and attempted to determine the orbits of planets. To do this, he had to throw all his preconceptions away, for his original intent was to prove that the orbits were circles. When he realised that the data wouldn’t fit, rather than saying “the data are wrong”, he changed his mind, so discovering that the orbits are elliptical. That scientific discovery was hugely creative. But to me, changing his mind, despite deeply-held beliefs, is even more creative.

    Is it possible to have ideas deliberately, now?

    Yes. Sometimes, of course, you get lucky, and you can rejoice when it happens. But you can’t rely on that when you’ve got to get that research proposal in, or write your PhD thesis. That’s when you need to know how to make creativity deliberate, and something that you can tap into at will. There’s a widespread belief that it is more about intuition or that “light bulb” moment. You might choose to go for a walk by a river bank, because that worked for Albert Einstein, and you may have a flash of inspiration, or you may not. But there is a way to make idea generation deliberate and systematic.

    Over the years I’ve read many books about creativity and I was particularly fascinated by the Hungarian author Arthur Koestler. His 1964 book The Act of Creation is a fascinating study on the processes of discovery, invention, imagination and creativity across the arts and sciences. Koestler felt that the “act of creation” is not that of the Old Testament God – it does not create something out of nothing. Rather, it synthesizes, recombines and shuffles together already existing facts, faculties, skills and knowledge to form a new pattern.

    The main goal is knowing how to make creativity deliberate, and something that you can tap into at will

    I found that to be a really powerful statement, and it highlighted that in most cases creativity isn’t a Eureka moment out of the blue. It may look like that at the end, but what is actually happening is you take already existing fragments of knowledge and mix them together in different ways – a bit like playing with Lego bricks – you can put them together in all sorts of different ways.

    Indeed, every physicist takes things that already exist and recombines them into new patterns. And when some new knowledge comes along, you can bring that into the mix too and go further. When doing this, you may sometimes have to deconstruct an existing pattern to reveal new truths. So the more knowledge you have (or have access to), the more likely you are to be creative and ready to re-shape that knowledge – perhaps throwing some things away, perhaps exploring different patterns.

    Isaac Newton famously said that he stood on “the shoulders of giants”, which is acknowledging the component parts that he brought together. Deconstructing what we know and seeking new patterns is absolutely key.

    What are some of the barriers to creativity that scientists face, and do you have any tips for overcoming them?

    Whether it’s in academia or industry, the initial barrier is not understanding the fundamentals of the creative process in the first place. If you haven’t come across what I call “Koestler’s Law” – that statement about recombining existing elements – then you may not know how to go about it.

    Very often, creativity requires you to deconstruct existing knowledge first. So another big barrier is when someone is unwilling to do that themselves or to allow someone else to challenge their knowledge – especially if they are senior. The history of science is full of people who came up with novel concepts that were in opposition to the established wisdom at the time.

    If you are running a team or a lab, and want creativity to flourish, focus on building an environment with the right conditions for that to happen. There are a couple of chapters in my book that specifically look at how to address this issue in the particular context of academic communities.

    What are some of the factors that influence the creative process, and what can research institutions do to boost novel ideas?

    There are all sorts of pressures on people that influence their creativity, such as the way in which academics have to apply for grants and get funding. If a postdoc is attempting to secure a faculty position and knows that the main metric they will be judged on is a large body of published papers, then that will be the main motivation. To achieve that, the postdoc is, understandably, likely not to want to take too many risks. But since creativity is necessarily uncertain, that increases the pressure to play safe, and this will inevitably limit creativity.

    Those pressures within academia – from getting grants to getting promoted – often squeeze creativity out. In fact, about a decade or so ago the Engineering and Physical Sciences Research Council (EPSRC), which provides government funding in the UK, felt that researchers were playing it too safe in their grant applications, and formed a committee to address the issue. One recommendation was that the EPSRC should create a grant where people like me could work with academic teams on a programme now called “Creativity@home”. Its main aim is to “generate and nurture creative thinking that might lead to potentially transformative research”. My consultancy is a preferred supplier and over the last 10 years I’ve done lots of great assignments. This was a deliberate attempt by the EPSRC to encourage scientists to be a bit bolder and more creative.

    The more knowledge you have, the more creative you can be. And that is why the creativity of teams is much more effective than that of an individual. There is a greater shared repertoire, which opens the door for more new thoughts and ideas.

    From computers to the Internet, and with the recent growth of AI systems, we all have new tools at our disposal. Do you think that technology is making people more creative, or is creativity a conserved quantity in humanity?

    All the things you mention should enrich creativity because there is more raw material to be creative with. Certainly some aspects of creativity are being taken over by AI – there are, for example, already many programs that can create music. But it’s all very well to use AI to discover a credible new pattern of notes of music, or words in an essay, or component parts for a new product. The richest creativity, however, comes from having the power to change my mind, and no AI agent is ever going to replace that. That will always be a purely human endeavour.

    The post Creative licence: how to stimulate innovation and generate new ideas appeared first on Physics World.

    ]]>
    Careers Dennis Sherwood on the creative process and how to grow a good idea https://physicsworld.com/wp-content/uploads/2023/05/2023-06-careers-Sherwood_feature.jpg
    Belle II particle detector is latest LEGO model, ‘Shut up and calculate’: the heavy metal version https://physicsworld.com/a/belle-ii-particle-detector-is-latest-lego-model-shut-up-and-calculate-the-heavy-metal-version/ Fri, 09 Jun 2023 15:27:05 +0000 https://physicsworld.com/?p=108383 Excerpts from the Red Folder

    The post Belle II particle detector is latest LEGO model, ‘Shut up and calculate’: the heavy metal version appeared first on Physics World.

    ]]>
    Only last month we had a design for a LEGO quantum computer and now comes a micro model of the Belle II experiment at the KEK particle-physics lab in Japan. The miniature brick version of the experiment, created in Germany by a team led by Torben Ferber at the Karlsruhe Institute of Technology (KIT), is made from 75 pieces and apparently takes less than 10 minutes to build. Despite being small, the design still includes details of Belle II’s particle identification system as well as the detector’s iconic blue-and-yellow octagon shape. Torben and colleagues have published a parts list and building instructions in case you get the urge to create your own model.

    It seems like the University of Nottingham physicist Phil Moriarty is everywhere these days. In April, he reviewed a book about “quantum woo” for Physics World; and he also appears in this month’s Physics World Stories podcast pondering the question “Will AI chatbots replace physicists?”.

    Moriarty is also a heavy-metal guitarist and he has joined forces with some fellow musicians to release a song and video about the antithesis of quantum woo: “Shut up and calculate”.

    The phrase is attributed to the American physicist David Mermin and arose out of the vagueness that surrounds the definition of the Copenhagen interpretation of quantum mechanics. This interpretation of the quantum world has dominated physics since it was first developed in the 1920s by Werner Heisenberg and Niels Bohr, working in the Danish capital.

    While quantum mechanics is an incredibly successful theory, physicists continue to struggle with the bizarre and non-intuitive nature of the quantum world. “Shut up and calculate” became an almost pejorative response to attempts to find deeper philosophical meaning within quantum mechanics – an endeavour that did in some cases lead to quantum woo.

    Today, however, some of the more mysterious aspects of quantum mechanics – including entanglement and superposition – form the basis of practical quantum technologies. Indeed, today physicists are more likely to say “shut up and contemplate” the wonders of the quantum world. Perhaps that could be Moriarty’s next song.

     

    The post Belle II particle detector is latest LEGO model, ‘Shut up and calculate’: the heavy metal version appeared first on Physics World.

    ]]>
    Blog Excerpts from the Red Folder https://physicsworld.com/wp-content/uploads/2023/06/LEGO-Belle-II.jpg
    Will AI chatbots replace physicists? https://physicsworld.com/a/will-ai-chatbots-replace-physicists/ Fri, 09 Jun 2023 14:56:03 +0000 https://physicsworld.com/?p=108379 Large language models are changing the way physics is taught and practised

    The post Will AI chatbots replace physicists? appeared first on Physics World.

    ]]>
    When discussing the capabilities of the latest AI chatbots, a physicist may argue: “Okay, they’re impressive at regurgitating texts that sound increasingly human. But we physicists don’t have much to worry about. It will be ages before the bots learn to grapple with physical concepts and the creativity required to do real physics!”

    Such a view is almost certainly misguided. In a recent paper uploaded to arXiv, Colin West from the University of Colorado Boulder reported that the latest version of ChatGPT (built on GPT-4) scored 28 out of 30 on a test designed to assess students’ grasp of basic Newtonian mechanics. The previous version (GPT-3.5) managed just 15 correct answers, and neither version had any explicit programming regarding the laws of physics. Can you imagine the improvement 20 years from now?

    In the latest episode of the Physics World Stories podcast, Andrew Glester considers how the exponential improvement in GPT (and other large language models) will change the way we teach and practise physics. Should we be excited or scared? Should physics courses ban or embrace the use of AI chatbots? What are the skills that future physics will need? Will physics cease to exist as a discipline in the way we understand it now? These are just some of the existential questions tackled by two guests from the University of Nottingham: Philip Moriarty, a nanotechnology specialist; and Karel Green, an astronomy PhD student and Physics World contributor.

    The post Will AI chatbots replace physicists? appeared first on Physics World.

    ]]>
    Large language models are changing the way physics is taught and practised Large language models are changing the way physics is taught and practised Physics World Will AI chatbots replace physicists? 54:01 Podcast Large language models are changing the way physics is taught and practised https://physicsworld.com/wp-content/uploads/2023/06/AI-digital-head-1183316406-iStock_Maksim-Tkachenko_home.jpg newsletter
    Ask me anything: Moiya McTier – ‘There is no greater thrill than standing on a stage in front of a crowd full of curious people’ https://physicsworld.com/a/ask-me-anything-moiya-mctier-there-is-no-greater-thrill-than-standing-on-a-stage-in-front-of-a-crowd-full-of-curious-people/ Fri, 09 Jun 2023 10:00:06 +0000 https://physicsworld.com/?p=108186 Moiya McTier is an astrophysicist, science communicator and author in New York City, US

    The post Ask me anything: Moiya McTier – ‘There is no greater thrill than standing on a stage in front of a crowd full of curious people’ appeared first on Physics World.

    ]]>
    Moiya McTier

    What skills do you use every day in your job?

    As a freelance science communicator, giving talks, hosting podcasts and writing books, the skill I use most often is deciding where to start and end a story to most effectively explain a concept. It’s a fine line to walk, because I don’t want to insult the audience by beginning too early and basic, or lose them by starting too advanced.

    I have to figure out what my audience already knows and guess at what they’re interested in to teach them something new. If I’m giving a talk, I’ll do this by asking if they’ve heard of a topic, and then I will adapt my speech based on their responses and questions. This skill comes with practice and adjusting to countless confused audience stares. My advice is to check in with your audience early and often so you know if you’ve lost them.

    What do you like best and least about your job?

    To me, there is no greater thrill than standing on a stage in front of a crowd full of curious people. I love the spotlight almost as much as I love sharing my knowledge with others. The attention is fun, but the most rewarding part is the moment when I can see the light of understanding flick on in someone’s eyes as I’m explaining a concept.

    The thing I like least about my job is describing it to other people. Most people don’t know what a science communicator is, so I compare myself to Bill Nye or Carl Sagan, which works about 70% of the time. But when I keep talking, their confusion turns into “what do myths have to do with science?” and “what do you mean you don’t teach at a university?”

    Lately, I’ve just been calling myself an author!

    What do you know today, that you wish you knew when you were starting out in your career?

    I wish I had known how long everything would take. I never expected my dreams to come true overnight, but I definitely underestimated how long it takes to grow a podcast audience, write a book, or get a TV deal (I’ve been working on that last one for more than three years).

    Now I understand that building something from scratch will always take longer and be a less direct path than you want it to be, so I’ve got to hone my skills and grow my platform while I wait. I’ve learned to be patient and think of the rejections as “not for now”s, but I would have saved myself a lot of frustration if I had known from the start that the timescale was years instead of months.

    The post Ask me anything: Moiya McTier – ‘There is no greater thrill than standing on a stage in front of a crowd full of curious people’ appeared first on Physics World.

    ]]>
    Careers Moiya McTier is an astrophysicist, science communicator and author in New York City, US https://physicsworld.com/wp-content/uploads/2023/05/2023-06-AMA-MoiyaMcTier_feature.jpg
    Ultrasound implant helps deliver powerful chemotherapy to brain tumours https://physicsworld.com/a/ultrasound-implant-helps-deliver-powerful-chemotherapy-to-brain-tumours/ Fri, 09 Jun 2023 08:50:36 +0000 https://physicsworld.com/?p=108358 First in-human trial shows that low-intensity pulsed ultrasound combined with microbubbles opens the blood–brain barrier for drug delivery to brain tumours

    The post Ultrasound implant helps deliver powerful chemotherapy to brain tumours appeared first on Physics World.

    ]]>
    An ultrasound device opens the blood–brain barrier

    Low-intensity pulsed ultrasound with simultaneous administration of intravenous microbubbles (LIPU-MB) may effectively enable delivery of drugs across the blood–brain barrier (BBB) into the human brain. That’s the conclusion of a study from Northwestern University Feinberg School of Medicine in Chicago. The researchers report that the chemotherapy drug albumin-bound paclitaxel was safely delivered into the brains of 17 patients with recurrent glioblastoma, in a phase 1 dose-escalation clinical trial.

    The study, reported in Lancet Oncology, provides the first direct evidence that LIPU-MB substantially increases the brain concentration of a systemically administered drug in humans. The researchers conclude that large-volume BBB opening is safe, reproducible, and can be repeated over multiple cycles of chemotherapy.

    The BBB limits the penetration of many chemotherapy drugs, making treatment of malignant brain tumours challenging. The drug paclitaxel, for example, is approximately 1400 times more potent than the standard chemotherapy agents used for gliomas — but it cannot cross the BBB. In high-grade gliomas, however, tumour cells infiltrate into the parenchyma where they are protected from exposure to drugs. As a result, 80–90% of glioblastomas recur within the 2 cm margin of peri-tumoural brain around the tumour resection cavity.

    Principal investigator Adam Sonabend, of the Northwestern Medicine Malnati Brain Tumor Institute, and colleagues conducted their study to evaluate the safety and maximal tolerated dose of albumin-bound paclitaxel after LIPU-MB-based opening of the BBB. They also aimed to assess the effect of LIPU-MB-based BBB opening on paclitaxel concentrations in peri-tumoural brain tissue.

    The study included 17 patients whose recurrent glioblastoma was unresponsive to one or more previous treatments. Several of these patients were also participants in a separate clinical trial investigating LIPU-MB with carboplatin chemotherapy.

    Following standard tumour resection, all patients had a SonoCloud-9 device (from CarThera) implanted into a window in their skulls, attached to the bone with surgical screws. The device, consisting of nine 1 MHz ultrasound emitters, is connected to a pulse generator via a single-use transdermal needle and cable. To open the BBB, the pulse generator activated the device for 4 min 30 s, with simultaneous intravenous injection of microbubbles for 30 s. Patients were awake during the sonication procedure. Immediately afterwards, the researchers intravenously administered chemotherapy over 30 min.

    The researchers found that most BBB integrity was restored within 60 min after LIPU-MB. As such, they advise that patients need to be infused within this time frame to optimize penetration of the administered chemotherapy drug.

    The first sonication treatment for each patient began one to three weeks after their surgery, followed by up to six subsequent cycles at three-week intervals. To assess safety and maximal tolerated dose, the researchers evaluated albumin-bound paclitaxel dose levels of 40, 80, 135, 175, 215 and 260 mg/m². In total, they performed 68 cycles of LIPU-MB-based BBB opening across all patients.

    The primary endpoint of the study was dose-limiting toxicity during the first cycle of sonication and chemotherapy. Following BBB opening, some patients experienced immediate yet transient grade 1–2 headaches and other grade 1–2 neurological deficits. No dose-limiting toxicity was observed for doses of up to 215 mg/m². At 260 mg/m², one patient developed grade 3 encephalopathy during the first cycle (considered a dose-limiting toxicity) and another had grade 2 encephalopathy during the second cycle. In both cases, the toxicity resolved when doses were reduced and treatment could continue. An additional patient developed grade 2 peripheral neuropathy during the third cycle at a dose of 260 mg/m².

    The researchers also acquired biopsy samples of sonicated and non-sonicated brain tissue from a subset of patients who needed to undergo additional neurosurgery. “Measurements of absolute drug concentrations in the human brain is especially important in gliomas because the peri-tumoural brain, where the BBB is intact, is infiltrated by glioma cells,” they explain.

    Pharmacokinetic studies showed that LIPU-MB increased the brain-to-plasma ratio of paclitaxel by 3.6 times compared with non-sonicated brain samples, while for carboplatin the ratio was 5.8 times higher than in non-sonicated samples. The researchers also determined that LIPU-MB combined with albumin-bound paclitaxel infusion leads to paclitaxel concentrations that are cytotoxic for half of the human glioma cell lines.

    The team is currently undertaking a phase 2 clinical trial to investigate the delivery of albumin-bound paclitaxel plus carboplatin following surgical resection. “This emerging technology and approach has the potential for repurposing many existing drugs that are not considered for treatment of brain disease as they currently do not cross the blood–brain barrier,” comments Sonabend.

    Research update First in-human trial shows that low-intensity pulsed ultrasound combined with microbubbles opens the blood–brain barrier for drug delivery to brain tumours https://physicsworld.com/wp-content/uploads/2023/06/SonoCloud9_implanted_carthera.jpg
    X-ray detectors in space: the challenges and rewards of observing the ‘hot and energetic universe’ https://physicsworld.com/a/x-ray-detectors-in-space-the-challenges-and-rewards-of-observing-the-hot-and-energetic-universe/ Thu, 08 Jun 2023 15:51:46 +0000 https://physicsworld.com/?p=108366 This podcast features an instrument scientist working on ESA’s Athena mission

    In this episode of the Physics World Weekly podcast the instrument scientist Roland den Hartog talks about the challenges of deploying superconductor-based detectors on satellites to do X-ray astronomy. Based at the Netherlands Institute for Space Research (SRON) in Leiden, he also explains how astronomers use X-rays to observe the “hot and energetic universe”. This involves studying a range of objects from huge galaxy clusters to compact objects such as black holes and neutron stars.

    Den Hartog is currently developing X-ray detectors for the European Space Agency’s Athena mission, which will launch in 2035. He explains that a primary goal of Athena is to gain a better understanding of the astrophysical origins of the elements by detecting the distinctive X-rays that they emit.

    Podcast This podcast features an instrument scientist working on ESA’s Athena mission https://physicsworld.com/wp-content/uploads/2023/06/Roland-den-Hartog-list.jpg
    Metalens-based spectrometer fits on a chip https://physicsworld.com/a/metalens-based-spectrometer-fits-on-a-chip/ Thu, 08 Jun 2023 14:00:56 +0000 https://physicsworld.com/?p=108349 Compact new device could have applications in information security and processing

    A diagram of the new super-compact optical spectrometer, showing the nanopillars on its surface and different colours of light being directed by the surface

    A new optical spectrometer is super-compact thanks to a metalens that focuses light at multiple wavelengths. The new device can detect light spectra with a resolution of 1 nm, and unlike its bulkier predecessors, it could be integrated onto a chip, with potential applications in security and information processing.

    Imaging spectrometers that operate in the visible, near and short-wave infrared (VNIR/SWIR) regions of the electromagnetic spectrum are routinely deployed in fields such as atmospheric science, ecology, geology, agriculture and forestry. They work by recording a series of monochromatic images, then analysing their spectrum over a given imaging area.

    One of the advantages spectrometers have over traditional cameras is that they are very good at controlling chromatic aberrations – the blurring or distortion that arises because different wavelengths of light are spread over a region of space (dispersion) rather than focused to a single point. They can do this because they contain additional optical elements, such as a diffraction grating or a prism, rather than just a lens and a detector.

    The snag, however, is that these extra elements, together with the large volume over which the light must propagate, make spectrometers relatively bulky. That means they cannot be carried on small satellites or drones.

    A compact spectrometer

    Researchers led by Xianzhong Chen from the Institute of Photonics and Quantum Sciences at Heriot-Watt University, UK, have now developed a compact spectrometer that can detect light spectra with a resolution of 1 nm over a broad range of wavelengths. It accomplishes this feat thanks to a planar nanostructure called an optical metasurface, which can manipulate the amplitude, phase and polarization of incident light on a subwavelength scale.

    The metasurface in the Heriot-Watt device consists of gold nanorods patterned atop an indium tin oxide-coated silicon dioxide substrate using standard electron-beam lithography and lift-off processes. An individual device measures just 300 µm × 300 µm, and Chen explains that it is based on a novel lens design that accurately maps the wavelengths of an incident light beam to different positions on the flat focal plane of the lens. Using this design, it is possible to split and focus light with high-resolution control over its dispersion.
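    The wavelength-to-position mapping at the heart of such a design can be illustrated with a simple calibration step. The sketch below is purely illustrative and not the Heriot-Watt team’s actual calibration: it assumes a hypothetical set of reference wavelengths whose focal-spot positions on the detector have already been measured, and interpolates to assign a wavelength to any new spot position.

```python
import numpy as np

# Hypothetical calibration data: focal-spot positions (micrometres on the
# focal plane) recorded for a set of known reference wavelengths (nm).
calib_positions_um = np.array([10.0, 55.0, 98.0, 140.0, 181.0])
calib_wavelengths_nm = np.array([450.0, 500.0, 550.0, 600.0, 650.0])

def position_to_wavelength(spot_position_um: float) -> float:
    """Estimate the wavelength of light focused at a given detector position
    by linear interpolation between the calibration points."""
    return float(np.interp(spot_position_um, calib_positions_um, calib_wavelengths_nm))

# Example: a focal spot detected at 120 um maps to roughly 576 nm.
print(f"{position_to_wavelength(120.0):.1f} nm")
```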

    According to the researchers, who describe the new metalens spectrometer in Light: Science & Applications, the compact and ultrathin nature of their device means that it could be used in on-chip integrated photonics, where spectral analysis and information processing take place on a single compact platform.

    “Optical spectroscopy plays a very important role in various fields of science and technology with a wide range of practical applications,” Chen says. “The approach we propose is flexible and robust and provides a new scheme to control dispersion under illumination of both mono- and polychromatic incident light beams,” he tells Physics World.

    The researchers are now working on improving the resolution and operating bandwidth of their metasurface spectrometer by increasing the sample size to millimetre scales. “This will increase the radius of the multi-foci ring and include more wavelength-dependent focal points,” Chen says.

    Research update Compact new device could have applications in information security and processing https://physicsworld.com/wp-content/uploads/2023/06/Cropped-spectrometer-1.png
    CERN physicists meet in London to plot future collider plans https://physicsworld.com/a/cern-physicists-meet-in-london-to-plot-future-collider-plans/ Thu, 08 Jun 2023 10:24:50 +0000 https://physicsworld.com/?p=108324 Leading experts from academia and industry came together to review recent progress on the Future Circular Collider

    Hundreds of physicists met in London this week for the ninth Future Circular Collider (FCC) Conference. Held at the Millennium Gloucester Hotel London in South Kensington, the five days of talks focused on the latest developments on the FCC – a huge proposed particle collider that would succeed CERN’s Large Hadron Collider (LHC). If built, the collider would cost around £11bn and involve the construction of a 100 km underground tunnel at the current CERN site to house an electron–positron collider (FCC-ee).

    Supporters hope that construction would begin in the early 2030s and last just over a decade, with the FCC-ee complete in 2045. The FCC-ee would focus on creating a million Higgs particles in total, allowing physicists to study the boson’s properties with an accuracy an order of magnitude better than is possible today with the LHC.

    Once the physics programme for the FCC-ee is complete, estimated to be in 2063, the same tunnel could then be used to house a proton–proton collider, dubbed FCC-hh. The FCC-hh, which would begin operation in the 2070s, would use the LHC and its pre-injector accelerators to feed the new collider, which could reach a top energy of 100 TeV – seven times greater than the LHC.

    CERN originally released a four-volume conceptual design report for the FCC in early 2019, and the following year the CERN Council approved a feasibility study and a more detailed costing of the FCC. It also gave a green light to continued research and development into the magnet technology that will be required for such a machine at higher energies.

    Make it so

    This week’s meeting saw experts from academia and industry review the latest progress on the FCC and set the goals for the coming years. In a plenary address on Monday, CERN director-general Fabiola Gianotti confirmed that the current schedule is on track. “I believe FCC is the best project for CERN’s future,” noted Gianotti. “We need to work together to make it happen.”

    If given the green light, construction could begin in 2033 and occur in parallel with the operation of a major £1.1bn “high luminosity” upgrade of the LHC – dubbed HL-LHC – that will increase the collider’s luminosity by a factor of 10 over the original LHC. The HL-LHC, which will begin operation in 2029, is set to finish its work in the early 2040s.

    Michael Benedikt and Frank Zimmermann from CERN gave an update on the feasibility study, which involves hundreds of physicists based in 150 institutes spread over 34 countries.

    Physicists and engineers have been working on the optimization of the ring placement and layout, choosing between about 100 different variations based on geology, land availability and access to roads, as well as infrastructure needs such as water, electricity and transport.

    They found the “lowest risk” option was a 90.7 km ring, allowing the FCC to have two or four collision points. Access to the tunnel is being reduced from 12 points in the conceptual design report to eight in the feasibility study. In February, engineers began environmental studies and the preparation of geological investigations based on this blueprint.

    A mid-term review of the feasibility study is expected to be complete later this year, and in February 2024 the CERN Council will hold a special meeting to analyse the study’s progress as well as an updated costing for the facility.

    The feasibility study for the FCC is expected to be complete in 2025 with possible CERN approval for the project three years later. A final decision would, though, depend on the outcome of an update to the European Strategy for Particle Physics in 2026.

    News Leading experts from academia and industry came together to review recent progress on the Future Circular Collider https://physicsworld.com/wp-content/uploads/2023/06/fcc.jpg
    The sound of silence: career opportunities in acoustic science and engineering https://physicsworld.com/a/the-sound-of-silence-career-opportunities-in-acoustic-science-and-engineering/ Wed, 07 Jun 2023 15:29:04 +0000 https://physicsworld.com/?p=108276 Acoustic specialists at QinetiQ minimize the noise made by ships, submarines and other maritime platforms

    The science of sound has a special importance in the marine world. Acoustic waves travel faster and further in water than in air, which has made sonar a vital technique for navigation as well as for locating subsea objects and structures. But the efficiency of sound propagation in water presents a problem for ships and submarines that want to avoid detection, which means that the key goal for many of the acoustic specialists at QinetiQ – the global defence and security company headquartered in the UK –  is to make these maritime platforms as quiet as possible.

    “It’s a hiding game,” says Dave Steele, who is technical lead for a team that designs and develops acoustic materials that stifle the sound reflected back from ships, submarines and other water-based structures. “Sound bounces off any object in the water, and so for anything that wants to remain hidden we need to control that energy.”

    Acoustic specialists at QinetiQ are investigating various different methods to suppress the noise generated by maritime platforms. In Toby Hill’s team, for example, the aim is to design propulsion systems that enable ships and submarines to move through the water as silently as possible. “The huge power plants and propellers in these propulsion systems generate lots of energy that is dumped into the sea,” Hill explains. “Our aim is to minimize the amount of noise they produce by developing shapes that interact more efficiently with the water flow.”

    Research and innovation

    Across most of these activities the key customer for QinetiQ is the UK’s Ministry of Defence (MoD), which in April 2023 renewed a long-term contract, called the Maritime Strategic Capability Agreement (MSCA), that commits almost £260m to research and innovation at QinetiQ over the next 10 years. While the renewal of the MSCA will sustain key skills and facilities across several of QinetiQ’s sites in the UK – with the ultimate aim of maintaining the MoD’s ability to design, build and operate the Royal Navy’s surface and subsurface fleet – it also provides its technical teams with the springboard they need to plan for the future.

    “We now have a commercial baseline that allows us to start exploring longer term solutions,” comments Steele. “It’s an exciting time, because we will be able to strengthen our core capabilities while also leveraging our skills and expertise to explore new opportunities with a range of different customers.”

    That long-term view is driving a need for a new influx of acoustic scientists and engineers. “We need more people because we have lots of work, and we need to strengthen our capability,” says Hill. “The MSCA is a long-running contract, but customers from the MoD or the commercial sector also come to us with specific problems that need to be solved within a shorter timeframe.”

    Whether developing noise-cancelling materials or reducing the sound made by propulsion systems, acoustic physicists like Hill and Steele typically exploit a combination of numerical, analytical and experimental techniques to take their solutions from initial idea to full integration and test. “For each project we might identify some candidate materials based on our prior knowledge and experience, and then we use analytical techniques and finite-element modelling to optimize the performance of the material for the specific application,” explains Steele. “We then create small-scale samples to assess their behaviour against our numerical predictions, and if that checks out we progressively scale up our testing regime towards the full-scale solution.”


    The scientists and engineers at QinetiQ have access to some unique experimental facilities to gather the real-world data they need to be confident in their designs. For Hill and his team, that means measuring the noise, vibration and turbulence created by both model structures and full-scale solutions in large water tunnels and other hydromechanical testing facilities. “It takes at least five years and billions of pounds to build these maritime platforms,” he says. “Our customers need to make multimillion-pound design decisions, and it makes sense for them to invest in the experimental facilities needed to prove that the design will perform as expected.”

    Field trials are also vital to verify the performance of design solutions in the harsh underwater environment, with submarines in particular subject to large variations in temperature and pressure. The technical teams work closely with their customers to translate lab results and numerical data into realistic exercises that address the specific challenges of each application. “We have the capabilities to test the performance of the materials when they are applied to the end platform,” explains Steele. “We need to prove the design in a real-life scenario to de-risk the technology solution for the customer.”

    Peppered in with these long-term strategic projects are more urgent operational requests that demand expert acoustics knowledge. Team members are also often involved in collaborative projects, both within QinetiQ and with external companies that may be developing and deploying integrated platforms. “The unusual combination of skills and experience within the team puts us in a unique position to solve a diverse range of problems,” says Steele. “Customers often come to us because they recognize that we have this amazing core knowledge across a broad range of materials.”

    Training provided

    However, both Steele and Hill recognize that new recruits are unlikely to offer such a specialized combination of knowledge and skills from the outset – particularly when the classified nature of the work generally restricts the roles to UK nationals. “We don’t expect anyone to have experience of what we do,” says Hill. “We provide the training needed for new members of the team to develop the specific skills needed to work in maritime acoustics.”

    A small number of people, like Hill, join QinetiQ with relevant experience from industry or academia, but in most cases new recruits are younger scientists and engineers who are just starting out on their careers. “Generally we’re looking for graduates, PhDs or post-docs who are interested in an area that’s relevant for us and might have developed some transferable skills,” says Hill. “People from the physics community are likely to have the numerical or experimental skills we are looking for, and might be interested in solving the sort of problems we work on.”

    New graduates have the option of joining QinetiQ’s two-year training scheme, in which they are based in a “home” business while taking a series of six-month placements in other areas, or building their skills and expertise within a specific technical area. Steele’s team has also hosted undergraduate students who spend a year in industry as part of their degree, which in some cases has led to a permanent position. “Placement students and new graduates get involved in project work right from the start,” he says. “We give them context, training and support, but we also throw them into the work and provide them with the opportunity to do some real science.”

    Strong focus on collaboration

    Once in the organization, QinetiQ’s scientists and engineers have plenty of opportunities to develop their skills and experience, whether they decide to stay in the same team or move to another part of the business. Either way, the diverse range of projects and strong focus on collaboration provides sufficient variety and scientific challenge to keep things interesting. “Everyone works on several projects at the same time, which means that no two days are the same,” says Steele. “It’s not just a conveyor belt of designing new materials – we get to identify a problem, investigate it, come up with a solution, and then think about how the solution might be used in another setting.”

    The acoustic scientists and engineers within QinetiQ are also well aware that their specialist knowledge and skills are providing a tangible benefit, whether for an external customer or the UK’s national security. “We act as a consultancy, using our expertise to work in partnership with our customers to devise and optimize a solution,” says Steele. “People who join us from academia appreciate the context and purpose for their work, knowing that it yields an outcome that is genuinely useful for our customers.”

    Combined with the diverse opportunities within QinetiQ, Steele says that technical specialists within the organization generally enjoy a long and varied career. “QinetiQ is blessed with an intelligent, engaged and motivated workforce, which makes it a brilliant place to work,” says Steele. “People tend to stay for the variety and challenge of the science, and for the other people they get to work with.”

    Careers Acoustic specialists at QinetiQ minimize the noise made by ships, submarines and other maritime platforms https://physicsworld.com/wp-content/uploads/2023/06/Submarine-stock-image.jpg
    Magnetic trap keeps a superconducting microsphere levitated and stable https://physicsworld.com/a/magnetic-trap-keeps-a-superconducting-microsphere-levitated-and-stable/ Wed, 07 Jun 2023 11:00:02 +0000 https://physicsworld.com/?p=108294 Similar set-up could form the basis for precise quantum sensors and even dark-matter detectors

    It might not look like much, but this tiny levitating particle could be the key to a new generation of quantum sensors. Using a carefully designed magnetic trap, physicists in Sweden and Austria succeeded in levitating a 48-μm-diameter sphere of superconducting material and keeping it stable enough to characterize its motion – an achievement they describe as a “critical first step” towards using the sphere’s position to create quantum states. Such position-based quantum states could have applications in several areas, including metrology and searches for the mysterious dark matter thought to make up 85% of the universe’s mass.

    To levitate their microsphere, the researchers needed to overcome both gravity and the attractive van der Waals force that would otherwise keep the microsphere glued to the surface. They did this by constructing a chip-based magnetic trap from wires made of niobium, which becomes a superconductor at low temperatures. This trap creates the magnetic field “landscape” needed to levitate the superconducting microsphere via the mechanism known as Meissner-state field expulsion, in which currents that arise in the superconductor completely oppose the external magnetic field.

    Stable levitation

    “Key to our success was achieving a magnetic field strength high enough to initiate levitation and to keep it stable,” explains team leader Witlef Wieczorek of Chalmers University of Technology in Sweden. “For that, we had to carry 0.5 A of current at millikelvin temperature through the set-up without heating up the experiment.”

    The levitation remained stable over a period of days. During this time, researchers from Chalmers and the Institute for Quantum Optics and Quantum Information (IQOQI) of the Austrian Academy of Sciences measured the particle’s centre-of-mass motion using an integrated DC superconducting quantum interference device (SQUID) magnetometer. They did this while continuously tuning the frequency of the magnetic trapping potential between 30 and 160 Hz, which enabled them to characterize the amplitude of the particle’s motion as a function of these frequency shifts.
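    As a rough guide to what such a measurement probes (a textbook estimate on our part, not a result from the paper): if the sphere’s centre-of-mass motion in a harmonic trap of frequency $\omega = 2\pi f$ were purely thermally driven at temperature $T$, equipartition would give

$$\tfrac{1}{2} m \omega^{2} \langle x^{2} \rangle = \tfrac{1}{2} k_{\mathrm{B}} T \quad\Rightarrow\quad x_{\mathrm{rms}} = \sqrt{\frac{k_{\mathrm{B}} T}{m\,\omega^{2}}} \propto \frac{1}{f},$$

so the rms amplitude should fall as the trap frequency is tuned upwards; any excess over this estimate would point to technical vibrations or heating of the kind that the planned isolation and feedback cooling are intended to remove.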

    More sensitive force and acceleration sensors

    Wieczorek and colleagues say their experiment could make it possible to develop better sensors for force and acceleration. “Our work is a critical first step to creating quantum states in the position of the micron-sized particle,” Wieczorek tells Physics World. “It paves the way to coupling the motion of the particle to superconducting quantum circuits, which would facilitate quantum state generation of the particles’ motion.”

    In the long term, Wieczorek says that the team’s platform could be developed into a precise force and acceleration sensor with applications in dark-matter searches. The instruments used in such searches must be highly sensitive to have any hope of detecting shifts due to dark matter, which is believed to interact with normal matter only weakly, via the force of gravity.

    Wieczorek and colleagues, who report their new technique in Physical Review Applied, say they will now try to reduce the motional amplitude of their microspheres by improving several technical aspects of their experiments. This might include installing passive cryogenic isolation and using feedback-based cooling techniques routinely employed in the field of cavity optomechanics.

    Research update Similar set-up could form the basis for precise quantum sensors and even dark-matter detectors https://physicsworld.com/wp-content/uploads/2023/06/wieczorek-3b-imikropartikel.jpg
    How scientific models both help and deceive us in decision making https://physicsworld.com/a/how-scientific-models-both-help-and-deceive-us-in-decision-making/ Wed, 07 Jun 2023 10:00:55 +0000 https://physicsworld.com/?p=107958 Michela Massimi reviews Escape from Model Land by Erica Thompson

    We live in a society where scientific models surround us. They are used for everything from creating weather bulletins and making climate projections to providing economic forecasts and informing policies for public health. But despite being such useful tools, all scientific models have limitations. Because as any modeller knows, the output of a model is only as good as the data you put in.

    What’s more, uncertainties creep in at every corner of the modelling exercise. The results of a model depend on, for example, the values of the parameters, the boundary conditions, and the basic assumptions of the model itself. So how can we ensure scientific models are used responsibly when deciding matters of public policy? That’s the question tackled in Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It by Erica Thompson, who trained as a physicist and is now in the Department of Science, Technology, Engineering and Public Policy at University College London.

    Erica Thompson’s book is a tour de force in explaining the practical challenges of scientific modelling

    Thompson’s book is a tour de force in explaining the practical challenges of scientific modelling. What happens, Thompson asks, if the data we compare our model against are scant or hard to harvest? How can we assess the reliability of long-term model projections? And how can we work out if a model is a good representation of the real world? These are important questions because if we want to escape “model land”, we have to see where the limits of modelling lie.

    Think, for example, how politicians used epidemiological models during the COVID-19 pandemic. By seeing what might happen if nothing were done to stop the spread of the virus, governments used these worst-case-scenario forecasts to justify lockdowns and policies on social distancing. Or think about how we decide on climate policy by looking at long-term projections of what might happen with different levels of greenhouse gases in the atmosphere.

    But what exactly lies beyond model land? Thompson’s bold vision is that we should empower humans more wisely, deploying our expert judgement to use models reliably when making decisions. We can, the author argues, make models more trustworthy by being transparent about our value judgements, declaring where our conflicts of interest lie and involving a greater variety of experts.

    “If we are serious about addressing lack of confidence in science,” Thompson writes, “it is necessary for those who currently make their living from and have built their reputation on their models to stop trying to push their version of reality on others.” In particular, the author believes we should encourage modelling efforts from under-represented groups and those with different political views. “[We should] acknowledge that decision-making requires value judgments as well as predicted outcomes. And yes, that’s a big ask.”

    As such, the book builds on a well-established tradition in the contemporary philosophy of science, which examines how our own human values enter science when interpreting and selecting data, when choosing which approach to adopt to a problem, and when interpreting the outcomes of models. Even the latest 2022 report from the Intergovernmental Panel on Climate Change (IPCC) contains references to the philosophical literature.

    Thompson believes that the difference between the outcome of a model and the actions we take based on it – what the author dubs the “accountability gap” – can be bridged by offering “an expert bird’s-eye perspective from outside Model Land”. So rather than just reporting the results of models, we should “offer additional expert judgements, arrived at by consensus, about the degree to which model results are judged to be reliable”.

    The author believes that we need a rich and diverse variety of expert voices when extrapolating from models for decision-making

    The author essentially believes that we need a rich and diverse variety of expert voices when extrapolating from models for decision-making. Scientific models, in other words, aren’t just devices that take snapshots of some well-defined piece of reality. As the University of Edinburgh sociologist Donald MacKenzie says, we should see them as “engines” that take an active part in the decision-making process.

    In my own recent book Perspectival Realism, I discuss how scientific models deliver knowledge of what is possible by acting as what I call “inferential blueprints”. Models allow different communities to come together and make relevant and appropriate inferences about a target system. The Coupled Model Intercomparison Project, for example, doesn’t just involve modellers but also includes dendroclimatologists and scientists studying isotopes in corals, who provide data to help us reconstruct how the Earth’s temperature varied in the past.

    Escape from Model Land draws on research carried out by David Tuckett from University College London, who has studied how people make decisions under conditions of “radical uncertainty” (i.e. when the uncertainty cannot be quantified). Thompson explains how models can help us to assess risks and make appropriate decisions even though our emotional attachment often makes us unwilling to alter our assumptions or take into account conflicting information or external views. That’s the reason behind Thompson’s call for diversity in modelling: it’s so we can improve our decision making and get better policy outcomes.

    Overall, the author does a brilliant job at presenting technical information in an accessible and easy-to-read way. I found the book’s analysis of scientific modelling clear and well informed by the latest developments in philosophy. If we are truly to escape model land, as Thompson hopes, then we as humans – with all our biases and various levels of expertise – will have to be centre stage. Diversity, equality and inclusion will be crucial if models are to become more reliable and more trustworthy and, ultimately, allow us to make better and more informed decisions.

    • 2022 Basic Books 247pp £20/$30hb

    Opinion and reviews Michela Massimi reviews Escape from Model Land by Erica Thompson https://physicsworld.com/wp-content/uploads/2023/06/2023-06-Massimi-digital-people-shapes-1222970008-iStock_FotoMaximum.jpg
    DNA microcapsules deliver retrievable data storage https://physicsworld.com/a/dna-microcapsules-deliver-retrievable-data-storage/ Wed, 07 Jun 2023 08:30:26 +0000 https://physicsworld.com/?p=108255 New DNA-based data storage technique enables repeated random access to archived files

    Humans are generating increasing amounts of data, yet the ability to store all of this information is lagging behind. Since traditional long-term storage media such as hard discs or magnetic tape are limited in terms of their storage density, researchers are looking into small organic molecules and, more recently, DNA as molecular data carriers.

    A new technique dubbed “thermoconfined PCR” could be used to store data in synthetic DNA, say researchers at TU Eindhoven in the Netherlands. The technique, which involves localizing functionalized oligonucleotides inside thermoresponsive, semipermeable microcapsules, outperforms current DNA storage methods and provides a new approach for repeated random access to archived DNA files.

    The advantages of DNA

    DNA has many advantages when it comes to storing data. For one, the same amount of information may be stored in a much smaller physical volume than is possible with conventional technologies. DNA is also very stable and is thus suitable for long-term archiving. Using DNA to store data is also intuitive, since its main function in nature is to store the genetic information for all living organisms.

    DNA strands are polynucleotides that combine four different nucleobases – adenine (A), cytosine (C), guanine (G) and thymine (T) – and it is the sequence of these bases that determines the information stored. Rather than being stored as zeros and ones, data are encoded in the sequence of A, T, C and G bases that make up the DNA strand. The best current method can achieve a storage density of 17 exabytes per gram, a value six orders of magnitude higher than is achievable with non-DNA storage devices.
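    The basic encoding idea – two bits per base – can be sketched in a few lines. The mapping below is a simplified illustration only, not the scheme used in the Eindhoven study or any other; real DNA storage schemes add error-correction codes and avoid problematic sequences such as long runs of the same base.

```python
# Illustrative 2-bits-per-base mapping (hypothetical, for demonstration only).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Convert a byte string into a DNA sequence, two bits per nucleotide."""
    bitstring = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bitstring[i:i + 2]] for i in range(0, len(bitstring), 2))

def decode(sequence: str) -> bytes:
    """Recover the original bytes from a DNA sequence produced by encode()."""
    bitstring = "".join(BASE_TO_BITS[base] for base in sequence)
    return bytes(int(bitstring[i:i + 8], 2) for i in range(0, len(bitstring), 8))

assert decode(encode(b"physics")) == b"physics"
print(encode(b"physics"))  # CTAACGGACTGCCTATCGGCCGATCTAT
```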

    In recent years, researchers have succeeded in synthesizing DNA on a large scale, meaning that using DNA for data storage is now viable. What is more, sequencing technologies – using light or nanopores, for example – have advanced to the point where high-throughput readout of DNA sequences is now possible.

    Stably encapsulating DNA files

    To selectively retrieve data encoded in the DNA, the polymerase chain reaction (PCR) is used to create millions of copies of the required piece of DNA. In the new study, a team of researchers led by Tom de Greef has used microreactors, the membranes of which have temperature-dependent permeabilities, to encapsulate the DNA and enhance the PCR process.

    “Our method is based on stably encapsulating DNA files functionalized with biotin in individual populations of the thermoresponsive microcapsules,” explains de Greef.

    The researchers anchor one DNA file per capsule. Above 50°C, the capsules seal themselves thanks to their reduced permeability. This allows the PCR process to take place separately in each capsule. Next, they lower the temperature to room temperature, which increases the capsule membrane’s permeability again and makes the file copies detach from the capsule. Importantly, since the original file remains anchored to a capsule, its quality does not deteriorate, in contrast to that observed in previous PCR-based DNA data storage techniques. Indeed, de Greef says that losses currently stand at 0.3% after three reads compared with 35% for existing methods.

    To make the data library easier to search, de Greef and colleagues labelled each of the files with a fluorescent molecule, giving each capsule its own colour. “A device can then recognize the colours and separate them from one another,” says de Greef. “A robotic arm could then neatly select the desired file from the pool of capsules.”

    The technique is detailed in Nature Nanotechnology.

    Research update New DNA-based data storage technique enables repeated random access to archived files https://physicsworld.com/wp-content/uploads/2023/06/Low-Res_BvOF-2023_0426_AOE-Tom-de-Greef.jpg
    Quantum entanglement doubles microscope resolution https://physicsworld.com/a/quantum-entanglement-doubles-microscope-resolution/ Tue, 06 Jun 2023 16:00:59 +0000 https://physicsworld.com/?p=108306 New technique shows promise for non-destructive bioimaging

    Since the inception of quantum mechanics, physicists have sought to understand its repercussions for our universe. One of the theory’s stranger consequences is entanglement: the phenomenon whereby a pair or group of particles becomes connected in such a manner that the state of any one particle cannot be described independently. Instead, its state is intrinsically correlated with the state of the other(s), even if the particles are separated by large distances. As a result, a measurement performed on a particle in an isolated location can affect the state of its entangled twin far away.

    Researchers at the California Institute of Technology (Caltech) in the US have now discovered a way to use this quantum property to double the resolution of optical microscopes. The new technique, dubbed quantum microscopy by coincidence (QMC), illustrates the advantage of quantum microscopes over classical ones, and could have applications in non-destructive imaging of biological systems such as cancer cells.

    Quantum microscopy by coincidence

    An optical (light) microscope can resolve structures down to about half the wavelength of the light used; anything smaller cannot be distinguished. A possible route to improved resolution is therefore to use more intense light at shorter wavelengths.

    But there is a caveat. Shorter wavelengths of light have higher energies, and this highly energetic light can damage the object being imaged. Living cells and other organic materials are particularly fragile.

    In the latest work, which appears in Nature Communications, a team led by Lihong Wang used pairs of entangled photons, known as biphotons, to circumvent this roadblock. The photons that make up a biphoton have no individual identity and necessarily behave as a composite system. Crucially, this composite propagates with an effective wavelength that is half that of its constituent photons. A biphoton can therefore achieve double the resolution of unentangled light without each photon carrying any more energy.
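    As a rough, back-of-the-envelope illustration (our sketch of the standard argument for entangled photon pairs, not a derivation taken from the paper): each constituent photon has energy $E = hc/\lambda$ and momentum $p = h/\lambda$, but the entangled pair propagates as a single quantum carrying the combined momentum, so

$$\lambda_{\mathrm{eff}} = \frac{h}{2p} = \frac{\lambda}{2}, \qquad d_{\min} \approx \frac{\lambda_{\mathrm{eff}}}{2\,\mathrm{NA}} = \frac{\lambda}{4\,\mathrm{NA}},$$

halving the Abbe diffraction limit $d_{\min}$ for a microscope of numerical aperture NA, while each photon striking the sample still carries only the energy $hc/\lambda$.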

    Diagram of the optical setup, showing a beam leaving a laser, passing through various optics, being split into two paths, passing through an imaging and reference plane, and then recombining onto a detector

    To demonstrate this, Wang and colleagues used a crystal to split an incoming photon into an entangled biphoton pair made up of a signal photon and an idler photon. These biphotons travel along symmetric paths designed using a network of mirrors, lenses and prisms. The signal photon traverses the path containing the object being imaged, whereas the idler photon travels unobstructed. Eventually, both photons reach a detector plate, which records the information carried by the signal photon. This information is then correlated with the detection of the idler photon’s state and used to create an image.

    Advantages over classical microscopy

    The concept of using entangled photons to enhance imaging is not new, but it has previously been limited to imaging larger objects. The Caltech team is the first to demonstrate a viable setup that can resolve details down to the cellular scale. Using the spatial and temporal correlations between the signal and idler photon measurements (which do not exist for classical photons), Wang and colleagues also showed that the QMC method has advantages over classical microscopy in terms of noise resistance and image contrast.

    A figure showing two images of a cancer cell. The image taken with a classical microscope is blurry, the image taken with the quantum microscope shows better-resolved sub-cellular structures

    So far, the team has demonstrated the advantages of QMC by bioimaging cancer cells (see image above). According to Wang, other applications could include non-destructive imaging of photosensitive materials such as organic molecules and memory devices. Additionally, since QMC produces a twofold improvement in the resolution of the microscope, any future advances in classical microscopy could be further enhanced by leveraging this property of quantum microscopy.

    But while QMC shows much promise, a major challenge compared with state-of-the-art classical microscopes is speed. Current methods for creating entangled photons are inefficient, resulting in a low output of biphoton pairs. Since any advantage of QMC relies on being able to generate an abundance of biphotons, developing methods that can accomplish this will be crucial. “The development of strong and/or parallel quantum sources for quantum imaging is expected to speed up data acquisition,” Wang tells Physics World. Once that happens, quantum imaging techniques will truly come to the forefront of microscopy.

    Research update New technique shows promise for non-destructive bioimaging https://physicsworld.com/wp-content/uploads/2023/06/Wang_Lihong-Lab-Quantum_Microscopy.jpg
    Astronomers downsize proposed Arecibo observatory replacement https://physicsworld.com/a/astronomers-downsize-proposed-arecibo-observatory-replacement/ Tue, 06 Jun 2023 14:00:40 +0000 https://physicsworld.com/?p=108279 The Next Generation Arecibo Telescope would involve replacing the 305 m collapsed telescope with an array of small parabolic antennas

    Astronomers at the iconic Arecibo Observatory in Puerto Rico have revised their plans for a telescope to replace the original facility, which dramatically collapsed in 2020. The so-called Next Generation Arecibo Telescope (NGAT) would, if funded, involve building a phased array of small parabolic antennas to carry out pioneering research to maintain the island’s position at the forefront of astronomy.

    The Arecibo Observatory, which first opened in 1963, is located in a natural bowl and was used for research into radio astronomy, planetary and space studies as well as atmospheric science.  But on 1 December 2020 the radio telescope’s suspended platform – with its Gregorian dome focus and a plethora of instrumentation – fell after multiple suspension cables failed. The 900-tonne platform crashed into the 305 m dish, which lies almost 140 m below, destroying parts of it.

    Despite the damage, the National Science Foundation (NSF), which funds the observatory, decided it would not close the site. Early this year, it extended an agreement to maintain and operate the collapsed telescope from March until the end of September “to ensure a smooth transition to the next phase”. “NSF [will] work with a small business to handle the day-to-day operations and maintenance of the Arecibo site, ensuring maximal flexibility for the use of the site into the future,” says an NSF spokesperson.

    The NSF is also considering proposals to convert the facility into an Arecibo Center for STEM Education and Research. Meanwhile, the site continues to support research. As well as analysing historic data and transferring them from the site to the Texas Advanced Computing Center, scientists are still working with Arecibo’s ancillary equipment, which includes a lidar facility, an optical laboratory and a 12 m radio antenna.

    But in 2021 Anish Roshi, the observatory’s head of radio astronomy, unveiled a proposal to replace the telescope with a phased array of 1112 parabolic dishes, each 9 m in diameter, placed on a tiltable, plate-like structure. This new facility, with an estimated cost of $454m, would provide the same collecting area as a 300 m parabolic dish. “It would have a much wider sky coverage and would offer capabilities for radio astronomy, planetary, and space and atmospheric sciences,” Roshi says. “It would be a unique instrument for doing science that competitive projects couldn’t do.”

    Downsize me

    A lack of support from the NSF, however, has forced researchers to go back to the drawing board to make the array “more cost-effective both for construction and operation” as Roshi puts it. In the revised proposal, submitted to arXiv late last month, his team now envisions a downsized version of the original concept. Dubbed NGAT-130, it would consist of 102 dishes, each 13 m in diameter, that would in combination have a collecting area equivalent to a single 130 m dish.
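    The collecting-area equivalence quoted for both designs is easy to check with a quick back-of-the-envelope calculation (ours, not the proposal’s), treating every dish as an ideal filled circular aperture:

```python
import math

def dish_area(diameter_m: float) -> float:
    """Geometric collecting area of a filled circular dish, in square metres."""
    return math.pi * (diameter_m / 2) ** 2

# Original 2021 concept: 1112 dishes of 9 m versus a single 300 m aperture.
print(f"{1112 * dish_area(9.0):,.0f} m^2 vs {dish_area(300.0):,.0f} m^2")
# Downsized NGAT-130: 102 dishes of 13 m versus a single 130 m aperture.
print(f"{102 * dish_area(13.0):,.0f} m^2 vs {dish_area(130.0):,.0f} m^2")
```

    Both pairs agree to within a couple of per cent, which is presumably why the arrays are described as equivalent to 300 m and 130 m single dishes.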

    “You can make a very competent telescope even with the reduced collecting area that could study solar coronal emissions, space weather and [hydrogen] intensity mapping, for example,” Roshi told Physics World. “We tried to do something more cost-effective, but appealing internationally.”

    Roshi concedes that his team has no cost estimates for its revised design. “We need to make a robust cost model for the mechanical structure and transmitters,” he says. “Both these require modelling and prototyping.”

    The NSF will not comment on the new proposal as it “does not speculate on awards that have yet to be reviewed”, according to a spokesperson. To reach that stage, Roshi says they aim to create a “structure” by August, in which engineers and scientists begin work on modelling, designing and prototyping NGAT-130.

    News The Next Generation Arecibo Telescope would involve replacing the 305 m collapsed telescope with an array of small parabolic antennas https://physicsworld.com/wp-content/uploads/2020/12/4-DSC03610-15.jpg
    Quantum repeater transmits entanglement over 50 kilometres https://physicsworld.com/a/quantum-repeater-transmits-entanglement-over-50-kilometres/ Tue, 06 Jun 2023 11:00:44 +0000 https://physicsworld.com/?p=108296 Experiment incorporates all the essential building blocks of a long-distance quantum network

    Physicists at the Universities of Innsbruck in Austria and Paris-Saclay in France have combined all the key functionalities of a long-distance quantum network into a single system for the first time. In a proof-of-principle experiment, they used this system to transfer quantum information via a so-called repeater node over a distance of 50 kilometres – far enough to indicate that the building blocks of practical, large-scale quantum networks may soon be within reach.

    Quantum networks have two fundamental components: the quantum systems themselves, known as nodes, and one or more reliable connections between them. Such a network could work by connecting the quantum bits (or qubits) of  multiple quantum computers to “share the load” of complex quantum calculations. It could also be used for super-secure quantum communications.

    But building a quantum network is no easy task. Such networks often work by transmitting single photons that are entangled – that is, each photon’s quantum state is closely linked to the state of another quantum particle. Unfortunately, the signal from a single photon is easily lost over long distances. Carriers of quantum information can also lose their quantum nature in a process known as decoherence. Boosting these signals is therefore essential.

    Repetition without hesitation or deviation

    Quantum repeaters can provide this boost, but not in a straightforward way. Because the rules of quantum mechanics restrict the copying of entangled states, repeaters cannot simply copy the signal they receive and pass it on to the next node. Instead, they must store information in a so-called quantum memory and then transfer it using a process known as a Bell state measurement (BSM).

    A fully capable quantum repeater also needs to comply with certain practical requirements. Firstly, the quantum signals need to be at wavelengths used in telecommunications, so they can be transmitted through optical fibres without too much loss. Secondly, the storage time of the quantum memory should exceed the time needed to generate entanglement. Finally, each step in the process should be deterministic, meaning that signals need to be produced after each successful step.

    All in one

    The latest work, which is described in Physical Review Letters, combines all three practical requirements in a single experiment. In the initial sequence of events, two trapped calcium ions each emit a photon, forming two entangled photon-ion pairs. Here the trapped ions work as qubits, and their quantum states are distributed over the quantum network. The photons in these pairs are then converted to the telecoms wavelength of 1550 nm and sent to two different nodes via separate 25-km-long optical fibres. The total distance between nodes is thus 50 km.

    Whenever one of the photons reaches its designated node, the state of its entangled ion gets stored in the ion’s protected memory states. The system then makes repeated attempts to send a second photon (entangled with the second ion) to the other node. Once photons are detected at both nodes, the experimenters perform a BSM to transfer the ion states to their respective entangled photons.

    To test this protocol, the researchers repeated it 44 720 times over a period of 33 minutes, registering 2053 successes in 2 229 883 attempts to entangle photons between the remote nodes. That might not sound like a high success rate, but the presence of the ion memories made establishing entanglement 128 times more likely. This indicates that further improvements may be possible, up to the limit where both detectors successfully detect both photons and are restricted only by decoherence in the ions’ memory states.
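    Those raw counts translate into the headline figures with a little arithmetic. The short sketch below simply reproduces the rates implied by the numbers quoted above; the 128-fold enhancement itself comes from the paper’s comparison with a memory-less scheme and is not recomputed here.

```python
# Figures quoted in the text.
successes = 2053
attempts = 2_229_883
protocol_runs = 44_720
duration_s = 33 * 60  # 33 minutes

success_prob = successes / attempts          # probability per photon-pair attempt
rate_hz = successes / duration_s             # entangled pairs established per second
attempts_per_run = attempts / protocol_runs  # average attempts per protocol run

print(f"per-attempt success probability ~ {success_prob:.2e}")  # ~9.2e-04
print(f"entanglement rate ~ {rate_hz:.2f} per second")          # ~1.04 Hz
print(f"~ {attempts_per_run:.0f} attempts per protocol run")    # ~50
```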

    Extending the network further

    The researchers also modelled how far their method could be pushed. Taking into account all the different factors, they showed that a network of 17 ion-based repeater nodes could establish entanglement between ions 800 kilometres apart. According to lead researcher Ben Lanyon of the University of Innsbruck, members of the team now plan to uncoil the fibres that are currently in their labs and send entanglement off campus, into the existing commercial optical fibre network. “Our vision is to get out of the lab and start building quantum networks of matter and light between cities and countries,” he tells Physics World.

    Ronald Hanson, a physicist at QuTech in the Netherlands who was not involved in the work, describes the result as important because it combines several elements required for a quantum repeater. In particular, he notes that the experiment demonstrates multi-qubit operation inside the node, efficient qubit-photon interfaces and telecom compatibility, with each element working at high fidelity to give a good overall performance – the most relevant metric. While a fully functioning quantum Internet is still some way off, Hanson believes this demonstration is a step towards functional quantum repeaters based on trapped ions.

    Research update Experiment incorporates all the essential building blocks of a long-distance quantum network https://physicsworld.com/wp-content/uploads/2023/06/quantum-repeater_web.jpg
    Laser speckle imaging assesses donor hearts https://physicsworld.com/a/laser-speckle-imaging-assesses-donor-hearts/ Tue, 06 Jun 2023 08:45:50 +0000 https://physicsworld.com/?p=108160 Improved imaging technique could provide valuable information regarding the quality of an organ to be transplanted

    An imaging technique originally developed to detect how light scatters off red blood cells has been improved by researchers in France so that it can now safely image coronary blood circulation in donor hearts during ex situ heart perfusion (ESHP), a procedure used for heart preservation and screening. The new technique, known as laser speckle orthogonal contrast imaging (LSOCI), enables noninvasive high-resolution imaging of all the peripheral blood vessels of the heart in real time, and could provide valuable information to doctors on the quality of an organ to be transplanted.

    “Such dynamic speckle technology has been around for a long time,” explains team leader Elise Colin from Paris-Saclay University and the start-up ITAE Medical Research, “but it is normally applied to stationary objects. We had no idea if we would be able to obtain images of blood activity at all when we applied it to an object with significant movement, like a beating heart.”

    Graft failure following heart transplantation surgery can come about because of abnormalities in the donor organ, such as coronary artery disease. The risk of these abnormalities increases with age or in patients with pre-existing heart conditions. Careful screening for such conditions is thus vital to determine whether an organ is eligible for transplantation.

    In recent years, ESHP has enabled assessment of the heart outside the body. Here, doctors monitor the performance of a donor heart after oxygenated nutrients have been supplied to it via its blood vessels. The problem is that conducting coronary angiography during ESHP (to screen for coronary artery disease) can damage the heart. Alternative imaging techniques to identify abnormal blood flow in donor organs are thus needed.

    Analysing speckle images

    The LSOCI technique used in this study analyses speckle images, which result from the many constructive and destructive interferences that occur when the surface or volume of an object is illuminated with coherent light such as that from a laser. In these images, researchers look at the speckle contrast parameter, which Colin describes as a type of “blur function”. “This is all the more important when the scatterers producing the signal are in motion, as is the case of red blood cells, for which this technique was developed,” she explains.
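
    A minimal sketch of the standard speckle-contrast calculation may help here (this is the textbook definition K = σ/⟨I⟩ computed over a sliding window, not the team’s own code): regions where the scatterers move during the exposure are blurred and show low contrast, while static regions retain contrast close to one.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_speckle_contrast(image, window=7):
            """Local speckle contrast K = sigma / mean over a sliding window.

            Moving scatterers (such as red blood cells) blur the speckle during
            the exposure and lower K; static, fully developed speckle gives K ~ 1.
            """
            img = image.astype(float)
            mean = uniform_filter(img, size=window)
            mean_sq = uniform_filter(img ** 2, size=window)
            variance = np.clip(mean_sq - mean ** 2, 0.0, None)
            return np.sqrt(variance) / (mean + 1e-12)

        # synthetic fully developed speckle: exponentially distributed intensity
        frame = np.random.exponential(scale=1.0, size=(256, 256))
        print(local_speckle_contrast(frame).mean())   # close to 1 for a static pattern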

    Colin and colleagues have now improved LSOCI to observe small blood vessels in the heart. The new method, which they detail in the Journal of Biomedical Optics, is able to analyse blood flow in the organ using a specific polarimetric filter that favours light waves that have undergone multiple scattering. These interactions generally occur at depth in the blood vessels, meaning that surface light scattering is suppressed. The resulting speckle patterns therefore arise mainly from multiple scattering off moving red blood cells inside the vessels.

    In the case of an organ that moves periodically, like the heart, researchers need to be able to calculate the blur function without it being affected by the overall motion of the organ. To do this, Colin and colleagues developed an algorithm that selects the image pairs with the least movement between them, sampled across different heartbeat periods.
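
    The paper’s selection procedure is not reproduced here, but the idea can be sketched as follows – rank frame pairs by a simple inter-frame motion score and keep the quietest ones; the metric and the number of pairs kept are assumptions made purely for illustration.

        import numpy as np

        def select_low_motion_pairs(frames, n_pairs=10):
            """Return indices i of consecutive frame pairs (i, i+1) with the
            smallest bulk motion, estimated as the mean absolute difference.

            `frames` is a sequence of 2D images acquired across several heartbeat
            periods; in the quietest pairs the residual speckle blur is dominated
            by red-blood-cell motion rather than by the beating of the heart.
            """
            scores = []
            for i in range(len(frames) - 1):
                diff = np.abs(frames[i + 1].astype(float) - frames[i].astype(float))
                scores.append((diff.mean(), i))
            scores.sort()                              # smallest motion first
            return [i for _, i in scores[:n_pairs]]

        # usage (illustrative): quiet = select_low_motion_pairs(frame_stack, n_pairs=20)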

    “It is important to understand that the resulting images do not contain the same information as a radiometric image, for example,” she tells Physics World. “The images produced are motion images of red blood cells, and when the heart is made to stop beating, no vessels are visible in the image.”

    Valuable information for doctors

    The images obtained represent the vasculature of the heart at different time points. By analysing a sequence of these images, the technique can visualize vessels as small as 100 µm across in a matter of seconds. It could thus be used to identify myocardial perfusion abnormalities indicative of underlying heart conditions, say the researchers.

    “This information is valuable for doctors so that they can assess the quality of an organ to be transplanted,” says Colin. “Such information is important since it allows us to consider using grafts with less stringent age limits, for we now have a post-evaluation method to assess the health condition of these donor organs. An indirect consequence of this is that it increases the number of transplantation possibilities.”

    Colin and colleagues are now in the process of filing a patent for a method of temporal calibration based on their technique, but say that they still need to validate the concept specifically for their image enhancement method. “Once this has been done, we will be able to ensure that doctors have access to an image with a quantified medical index, meaning that the values are comparable over time from one system to another,” says Colin. “We would also like to continue our research on polarization optimization. This would allow us to achieve the best contrast and move towards obtaining three-dimensional information.”

    The post Laser speckle imaging assesses donor hearts appeared first on Physics World.

    ]]>
    Research update Improved imaging technique could provide valuable information regarding the quality of an organ to be transplanted https://physicsworld.com/wp-content/uploads/2023/05/Low-Res_Hearts-920.jpg
    Sony announces venture into quantum computing via UK firm Quantum Motion https://physicsworld.com/a/sony-announces-venture-into-quantum-computing-via-uk-firm-quantum-motion/ Mon, 05 Jun 2023 16:27:36 +0000 https://physicsworld.com/?p=108266 Move represents Japanese electronics giant's first push into quantum computing

    The post Sony announces venture into quantum computing via UK firm Quantum Motion appeared first on Physics World.

    ]]>
    The Japanese electronics giant Sony has announced its first steps into quantum computing by joining other investors in a £42m funding round for the UK quantum computing firm Quantum Motion. The move by the investment arm of Sony aims to boost the company’s expertise in silicon quantum chip development as well as to assist in a potential roll-out of quantum computers onto the Japanese market.

    Quantum Motion was founded in 2017 by scientists from University College London and the University of Oxford. It had already raised a total of £20m via “seed investment” in 2017 and a “series A” investment in 2020. Quantum Motion uses qubits based on standard silicon chip technology and can therefore exploit the same manufacturing processes that mass-produce chips such as those found in smartphones.

    A full-scale quantum computer, when built, is likely to require a million logical qubits to perform quantum-based calculations, with each logical qubit needing thousands of physical qubits to allow for robust error checking. Such demands will, however, require a huge amount of associated hardware if they are to be achieved. Quantum Motion claims that its technology could tackle this problem because it develops scalable arrays of qubits based on CMOS silicon technology to achieve high-density qubits.

    The company will use money from Sony Innovation Fund as well as other investors such as Bosch Ventures, Porsche SE and Oxford Science Enterprises to build on the firm’s recent work. In 2020, for example, Quantum Motion managed to isolate a single electron and measure its quantum state for a record-breaking nine seconds, while last year it showed how it could quickly characterize thousands of multiplexed quantum dots that had been fabricated in a chip factory.

    Although Sony is new to quantum computing, the investment will give it access to expertise in quantum chip design and manufacturing. It is also an entry point into the Japanese market, which is expected to become one of the biggest for quantum computing. Quantum Motion chief executive James Palles-Dimmock, who is a physicist by training, says the company is delighted to have Sony Innovation Fund as an investor as it will help the firm to scale the development of silicon-based quantum computers.

    • IBM wants to build a 100,000 qubit quantum computer by 2033. It will reach its goal by working with the University of Tokyo to develop and scale quantum algorithms and by starting to build a viable supply chain. IBM will also work with the University of Chicago to bridge quantum communication and computation via classical and quantum parallelization as well as by adding quantum networks.

    The post Sony announces venture into quantum computing via UK firm Quantum Motion appeared first on Physics World.

    ]]>
    News Move represents Japanese electronics giant's first push into quantum computing https://physicsworld.com/wp-content/uploads/2023/06/Quantum-Motion-silicon-chip-4-low-res-788x525-1.jpg
    Leaky-wave metasurfaces connect waveguides to free-space optics https://physicsworld.com/a/leaky-wave-metasurfaces-connect-waveguides-to-free-space-optics/ Mon, 05 Jun 2023 13:35:51 +0000 https://physicsworld.com/?p=108271 Research could lead to portable quantum optics

    The post Leaky-wave metasurfaces connect waveguides to free-space optics appeared first on Physics World.

    ]]>
    Leaky-wave images

    Researchers in the US have shown how light travelling through optical waveguides can be converted into freely propagating light waves with arbitrarily shaped wavefronts – an achievement that the team claims as a first. Nanfang Yu and colleagues at Columbia University and the City University of New York (CUNY) achieved the feat using “leaky-wave metasurfaces”.

    Although there are many different optical systems for controlling light, they tend to fall into two types. One involves controlling the properties of light waves travelling through free space, and includes systems ranging from simple lenses to advanced telescopes and holograms. The other type involves the use of photonic circuits, which manipulate light propagating along optical waveguides with a cross-sectional dimension of typically hundreds of nanometres. These circuits are ideal platforms for optical information processing, making them key elements of modern devices including sensors and optical communications chips.

    With advances in optical technologies ranging from augmented reality to probes for controlling and manipulating neurons, there is growing motivation to integrate these two categories of optical control systems. Yet as Yu explains, the two have so far remained largely incompatible with each other.

    Interfacing challenges

    “There has always been a challenge in ‘interfacing’ these two categories,” he says. “It is fundamentally hard to transform a tiny and simple waveguide mode into a broad and complex free-space optical wave, or vice versa. However, demands for ‘hybrid’ systems consisting partly of photonic integrated circuits and partly of free-space optics are becoming real.”

    For Yu and colleagues, the solution lies with metasurfaces, which are thin sheets made from arrays of sub-wavelength sized structures. These metasurfaces can alter the properties of light waves passing through them. In their previous research, they showed how metasurfaces can be used to manipulate light travelling in free space.

    To extend these capabilities to guided light waves, the researchers started with a photonic crystal (PhC) comprising a square array of square holes in a polymer film. This PhC allows flat sheets of light to propagate back and forth as standing waves.

    Symmetry-breaking perturbation

    “In the next step, we introduced a symmetry-breaking perturbation to the PhC slab by deforming square holes of the PhC into rectangular ones,” Yu explains. “The perturbation lowers the degree of symmetry of the PhC so that the photonic modes are no longer confined to the slab and can leak into free space, with a leakage rate proportional to the magnitude of the perturbation.”

    The team found that by varying the perturbation across the slab – orienting its rectangular holes along different directions – they could fine-tune the shape of the wavefront of the leaking light. Using their leaky-wave metasurfaces, Yu’s team developed a new technique for converting the light propagating through a waveguide into a wave travelling in free space.

    “Here, an input waveguide mode is first expanded into a slab waveguide mode, which enters a leaky-wave metasurface and produces the desired surface emission,” CUNY’s Adam Overvig explains. “In this way, the initial simple waveguide mode confined within a waveguide with a cross-section on the order of one wavelength is eventually converted into a freely propagating light wave with a complex wavefront, over an area about 300 times the wavelength.”

    The team demonstrated how their devices could produce diverse emission patterns. These included 2D arrays of focal spots, corkscrew wavefronts, holographic images, and laser beams with spatially varying polarizations. If the technology is scaled up, these could one day be applied across many different types of advanced optical systems. Applications include optical displays such as holograms and augmented-reality goggles, and high-capacity optical communication channels between computer chips.

    In quantum optics, optical lattices are used for trapping ultracold atoms and molecules. “Compared to traditional methods where optical lattices are produced by interference of multiple beams via free-space optics, our devices could be directly integrated into the vacuum chamber to simplify the optical system, making portable quantum optics applications such as atomic clocks a possibility,” Columbia’s Heqing Huang explains.

    The research is described in Nature Nanotechnology.

    The post Leaky-wave metasurfaces connect waveguides to free-space optics appeared first on Physics World.

    ]]>
    Research update Research could lead to portable quantum optics https://physicsworld.com/wp-content/uploads/2023/06/Leaky-wave-images-list.jpg
    Single-photon LIDAR system images 3D objects underwater https://physicsworld.com/a/single-photon-lidar-system-images-3d-objects-underwater/ Sat, 03 Jun 2023 11:00:15 +0000 https://physicsworld.com/?p=108207 New sensor works in real time and could have applications in off-shore engineering, marine archaeology and defence

    The post Single-photon LIDAR system images 3D objects underwater appeared first on Physics World.

    ]]>
    the single-photon system submerged in a tank

    A new LIDAR system can image objects in three dimensions underwater using a single-photon detector array. Developed by researchers at Heriot-Watt University in the UK, the technology could come in handy for applications such as inspecting, monitoring and surveying underwater objects, off-shore engineering, and even archaeology.

    “To the best of our knowledge, this is the first prototype of a fully-submerged imaging system based on quantum detection technologies,” says team leader Aurora Maccarone. While the team had previously demonstrated imaging using single-photon detection techniques that could penetrate turbid or highly attenuating underwater environments, the latest work goes a step further, proving that the system can indeed function while fully submerged in a large test tank. The researchers also improved the hardware and software used to reconstruct the 3D images, enabling them to perform the imaging in real time.

    3D imaging in highly turbid waters

    The operational concept of the sensor is quite simple, Maccarone explains. First, a green pulsed laser source illuminates the scene of interest. Objects in the scene reflect this pulsed illumination, and an ultra-sensitive array of single-photon detectors picks up the reflected light. “By measuring the return time of the reflected light, the distance to the target can be accurately measured, which allows us to build the 3D profile of the target,” says Maccarone. “Typically, the timing measurement is performed with picosecond timing resolution, which means we can resolve millimetre-scale details of the targets in the scene.”
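
    To get a feel for those numbers, here is a back-of-the-envelope sketch (not code from the Heriot-Watt system; the speed of light in water is computed from an assumed refractive index of 1.33):

        C_VACUUM = 2.998e8                  # speed of light in vacuum, m/s
        N_WATER = 1.33                      # assumed refractive index of water
        c_water = C_VACUUM / N_WATER

        def range_from_return_time(t_round_trip_s):
            """Target distance from the photon round-trip time in water."""
            return c_water * t_round_trip_s / 2.0

        print(range_from_return_time(10e-9))              # 10 ns round trip -> ~1.1 m
        print(range_from_return_time(1e-12) * 1e3, "mm")  # 1 ps resolution -> ~0.1 mm in depth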

    Crucially, the technique allows the researchers to distinguish between photons reflected by the target and those reflected by particles in the water. “This makes it particularly suitable for 3D imaging in highly turbid waters in which optical scattering can ruin image contrast and resolution,” Maccarone adds.

    The researchers tested their system in a water tank measuring 4 m x 3 m x 2 m. By adding varying amounts of scattering agent to the water, they were able to mimic the different light-scattering levels present in natural underwater environments. Because the optical array produces many hundreds of detection events per second, the researchers used algorithms specially developed for imaging in highly-light-scattering conditions to analyse the data.

    The range of applications for underwater LIDAR is extremely broad, Maccarone says. One possible use might be for inspecting underwater cables or the submerged portion of turbines. Other options include monitoring and surveying archaeological sites and applications in the security and defence sector.

    The main challenge now, Maccarone adds, is to shrink each component in the system and thus get its overall dimensions down to something that could fit in an underwater vehicle. “We are collaborating with industry to find a suitable solution to make this possible without compromising on the performance of the system,” she says.

    The researchers report their work in Optics Express.

    The post Single-photon LIDAR system images 3D objects underwater appeared first on Physics World.

    ]]>
    Research update New sensor works in real time and could have applications in off-shore engineering, marine archaeology and defence https://physicsworld.com/wp-content/uploads/2023/06/deep-sea-oil-well-iStock_MsLightBox.jpeg
    Poet laureate composes ode to NASA mission, celebrating body-based units of measurement https://physicsworld.com/a/poet-laureate-composes-ode-to-nasa-mission-celebrating-body-based-units-of-measurement/ Fri, 02 Jun 2023 15:44:43 +0000 https://physicsworld.com/?p=108251 Excerpts from the Red Folder

    The post Poet laureate composes ode to NASA mission, celebrating body-based units of measurement appeared first on Physics World.

    ]]>

    On 10 October 2024, NASA plans to launch the Europa Clipper mission, which will study Jupiter’s moon Europa in a series of flybys. In anticipation of the launch, the US poet laureate Ada Limón has written an ode to the mission.

    Called “In praise of mystery: a poem for Europa”, the 21-line poem was published and read aloud this week by Limón. According to a report from Reuters, it will also be engraved in the poet’s handwriting on the exterior of the spacecraft.

    You can listen to Limón recite her poem in the above video.

    The hand and the foot are two examples of body-based units of measurement. While these two units are mostly used in the English-speaking world, many other body-based measurement systems have emerged throughout the world.

    Still in use

    Now, a trio of Finnish researchers has carried out a comprehensive survey of such units across more than 180 cultures worldwide. Writing in the journal Science, Roope Kaaronen, Mikael Manninen and Jussi Eronen of the University of Helsinki point out that many body-based units are still in use despite being replaced by standardized measurement systems.

    “We argue that body-based units have had, and may still have, advantages over standardized systems, such as in the design of ergonomic technologies,” they write, adding, “This helps explain the persistence of body-based measurement centuries after the first standardized measurement systems emerged”.

    Like me, you are probably wondering whether there are any units specific to Finland. Not surprisingly, there is one associated with Nordic skiing. The researchers write about a standardization in how 16th century Saami skis were made that is related to the height of the skier and the length of their feet.

    The post Poet laureate composes ode to NASA mission, celebrating body-based units of measurement appeared first on Physics World.

    ]]>
    Blog Excerpts from the Red Folder https://physicsworld.com/wp-content/uploads/2023/06/Europa-Clipper.jpg
    Dedicated vs multi-purpose SRS delivery platforms: is good enough, good enough? https://physicsworld.com/a/dedicated-vs-multi-purpose-srs-delivery-platforms-is-good-enough-good-enough/ Fri, 02 Jun 2023 12:59:49 +0000 https://physicsworld.com/?p=107464 Join the audience for a live webinar on 5 September 2023 sponsored by ZAP Surgical Systems, Inc.

    The post Dedicated vs multi-purpose SRS delivery platforms: is good enough, good enough? appeared first on Physics World.

    ]]>

    There have been an increasing number of platform comparison studies in recent years, with most comparisons limited to single-centre studies. Study bias is common in the literature, due to various study flaws, including a comparison of non-contemporary equipment, unequal expertise and comparison of plans created in a clinical versus study environment.

    NHS England sponsored a national benchmarking programme in order to evaluate current practices across clinical centres treating benign brain tumours and metastases. This NHS study, arguably the “study to end all studies”, was updated in 2022 to cover a range of state-of-the-art contemporary devices. This will be reviewed in detail during the webinar in addition to work published on lifetime risks from the body dose received during intracranial SRS.

    Ian Paddick began his career as a medical physicist in 1989, working at the Hammersmith Hospital, London. Since 1998 he has worked almost exclusively with the Gamma Knife, being responsible for more than 5000 patient treatments. He is considered a world authority on treatment plan metrics, having formulated the Paddick Conformity Index and the Gradient Index. In 2003, Ian formed Medical Physics Limited, a group of physicists providing expert radiosurgery physics services to SRS centres in the UK, Europe and the US. He was the first physicist to be elected as president of the International Stereotactic Radiosurgery Society (ISRS). He currently serves as co-chair of the ISRS Certification Committee.

    The post Dedicated vs multi-purpose SRS delivery platforms: is good enough, good enough? appeared first on Physics World.

    ]]>
    Webinar Join the audience for a live webinar on 5 September 2023 sponsored by ZAP Surgical Systems, Inc. https://physicsworld.com/wp-content/uploads/2023/05/2023-09-05-ZAP-image.jpg
    Great gaffe in the sky: the erroneous physics behind The Dark Side of the Moon https://physicsworld.com/a/great-gaffe-in-the-sky-the-erroneous-physics-behind-the-dark-side-of-the-moon/ Fri, 02 Jun 2023 11:20:41 +0000 https://physicsworld.com/?p=108161 Uncovering the physics inconsistencies of the ground-breaking Pink Floyd album cover

    The post Great gaffe in the sky: the erroneous physics behind <em>The Dark Side of the Moon</em> appeared first on Physics World.

    ]]>
    This year marks 50 years since British rock band Pink Floyd released their seminal album The Dark Side of the Moon. From my experience as a physics teacher, I can tell you that most teenagers today would struggle to name a single track on the album. But a majority of them still do recognize the iconic album cover, which depicts light refracting through a triangular prism. Indeed, I am convinced that students will be able to name both the album and the band if shown the artwork (even though neither appears on the front cover), making it a useful tool in the physics classroom even today.

    In terms of actual physics behind the art, I’ll skip right past the fact that the Moon does not have a true “dark side” – simply a “far side” that we cannot see from the Earth, as the Moon is tidally locked.  Amusingly enough, this is even referenced on the album itself where a background voice says “there is no dark side to the Moon, really” before adding “as a matter of fact, it’s all dark…” This is, perhaps, a nod to the fact that the Moon does not produce its own light?

    Setting these astronomical facts aside, there are two interesting aspects to the design that are directly relevant to physics students. One is how – if the original gatefold design is fully opened up – you see an image with light going through two prisms. In it, light that has already been split into its constituent colours enters one prism, where it is recombined into white light, before passing through a second prism and being split up again.

    Apparently, this was done to allow interesting displays in record shops. Nevertheless, it illustrates one of Isaac Newton’s earliest contributions to optical physics, as it shows how white light is dispersed into its constituent colours by a prism, and how it can be recombined through another prism. A previous Physics World article – “Web of confusion” (May 2022) – has already highlighted a lively classroom discussion on some of the errors therein.

    But there is another aspect of the album’s artwork that is equally worthy of attention in physics classrooms. Can we use it to determine the refractive index (RI) of the prism illustrated in the original image, and to find out if it corresponds to any available material? The RI of a material is essentially a measure of the extent to which light refracts as it enters or leaves the material. It is easily calculated if one can measure the angles between the path of the light and a line drawn at right angles to the surface, known as the “normal”.
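
    For anyone repeating the classroom exercise, the arithmetic is just Snell’s law at each face: n = sin(i)/sin(r), where i and r are the angles of incidence and refraction measured from the normal and the refractive index of air is taken as 1. A minimal sketch follows – the angles are placeholders, not the ones we measured from the artwork.

        import math

        def refractive_index(theta_incidence_deg, theta_refraction_deg):
            """Snell's law at an air-prism surface: n = sin(i) / sin(r),
            with both angles measured from the normal and n_air taken as 1."""
            i = math.radians(theta_incidence_deg)
            r = math.radians(theta_refraction_deg)
            return math.sin(i) / math.sin(r)

        # placeholder angles, not those measured from the album cover
        print(refractive_index(45.0, 27.0))   # ~1.56, plausible for ordinary glass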

    I printed a few copies of the artwork and enlisted the help of some students to determine the RI of the material on the cover. We added in the normal where the light strikes the prism and where it emerges, and carefully measured the various angles of incidence and refraction, which allowed us to calculate values for the refractive index. Or should I say refractive indices – because what we discovered was somewhat disturbing.

    Having more than one RI isn’t a problem in itself. After all, at the two extremes of the spectrum, the RI for the violet light has to be greater than that of the red light. That’s why the light separates into its different colours: violet light slows down far more than red light when it enters a dense material, and that is why it bends through a greater angle. In fact, I had checked out typical values in advance and knew that the RI for red light passing through glass is usually about 1.51, while for violet light it’s about 1.53. But the Dark Side of the Moon image doesn’t produce values anything close to either.

    On the way into the prism on the album cover, the angles of incidence and refraction for the violet light yield an RI of 2.42, which is far too high to be ordinary glass. After digging around, we did find that it closely matches the RI of zincite – a transparent mineral that mainly contains zinc oxide. But zincite is usually tinted either yellow or red, so it hardly matches the image in the photo.

    That doesn’t really matter, though, as the material simply cannot be zincite or anything else for that matter. Because if it were zincite, we’d expect a similar, though slightly smaller, value for the red light. In fact, we get a value of 1.15 for red, which doesn’t correspond to any common material that I can track down.

    It gets worse. When the light emerges to the right of the image, the angles measured there give us two more, entirely inconsistent, values: 1.08 for the violet light, and 1.85 for the red.

    I wondered briefly if variations in the density of the prism could account for the inconsistencies, but that doesn’t work either. Simply put, if the density (and the RIs) of the glass were varying, we’d expect to see the light follow a curved path through the glass, which does not make any sense. It’s almost as though Storm Thorgerson, who designed the album cover, decided to completely ignore Snell and the laws of refraction.

    It wouldn’t even have been that difficult for Thorgerson to create a more accurate and realistic version of the path the light should take. Just look at the image above, which was taken in 2017 by Mason Maxwell – an amateur photographer – using a glass prism. It shows what the Pink Floyd cover should really have been.

    Perhaps we can attribute the errors to artistic licence and a lack of general optics expertise. Maybe the request from Pink Floyd keyboardist Richard Wright for a “simple and bold” design – symbolizing the album’s deep themes surrounding riches, greed and conflict – ultimately “eclipsed” scientific accuracy. Either way, 50 years on, this iconic image is here to stay.

    The post Great gaffe in the sky: the erroneous physics behind <em>The Dark Side of the Moon</em> appeared first on Physics World.

    ]]>
    Blog Uncovering the physics inconsistencies of the ground-breaking Pink Floyd album cover https://physicsworld.com/wp-content/uploads/2023/05/LT-2023-06-02-Dark-Side-MMaxwell.jpg
    Photons from nuclear clock transition are seen at long last https://physicsworld.com/a/photons-from-nuclear-clock-transition-are-seen-at-long-last/ Fri, 02 Jun 2023 10:57:59 +0000 https://physicsworld.com/?p=108216 Breakthrough could lead to new experiments in fundamental physics

    The post Photons from nuclear clock transition are seen at long last appeared first on Physics World.

    ]]>
    The first direct measurement has been made of a thorium-229 nuclear transition that could potentially form the basis for a “nuclear clock”. Done at CERN, the research follows a 2016 experiment that confirmed the transition’s existence but did not detect the resulting emitted photon. Much work remains before a working clock can be produced, but if such a device proves possible, it could prove an important tool for research in fundamental physics.

    The most accurate clocks today are based on optically trapped ensembles of atoms such as strontium or ytterbium. Highly stable lasers are locked into resonance with the frequencies of specific atomic transitions, and the laser oscillations effectively behave like pendulum swings – albeit with much higher frequencies and therefore greater precision. These clocks can be stable to within 1 part in 10²⁰, which means that they will be out by just 10 ms after 13.7 billion years of operation – the age of the universe.
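
    A quick back-of-the-envelope check of that figure, using only the numbers quoted above (a fractional stability of one part in 10²⁰ and the 13.7-billion-year age of the universe):

        SECONDS_PER_YEAR = 3.156e7                      # ~365.25 days
        age_of_universe_s = 13.7e9 * SECONDS_PER_YEAR   # ~4.3e17 seconds

        fractional_stability = 1e-20                    # one part in 10^20, as quoted
        accumulated_error_s = age_of_universe_s * fractional_stability

        print(accumulated_error_s * 1e3, "ms")          # ~4 ms, i.e. within the quoted 10 ms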

    Atomic clocks are not just great timekeepers: physicists have used them to study a range of fundamental phenomena such as how Einstein’s general theory of relativity applies to atoms confined in optical traps. In search of ever greater precision and deeper insights, in 2003 Ekkehard Peik and Christian Tamm of the Physikalisch-Technische Bundesanstalt in Braunschweig, Germany, proposed that a clock could be produced by interrogating not the electronic energy levels of atoms but nuclear energy levels.

    Much smaller antenna

    Such a nuclear clock would be extremely well isolated from external noise. “An atom is something like 10-10 m [across]; a nucleus is something like 10-14 or 10-15 m,” explains Sandro Kraemer of KU Leuven in Belgium, who was involved in this latest research. “The nucleus is a much smaller antenna for the environment and is thus much less prone to shifts.”

    A nuclear clock might therefore be an excellent probe of hypothetical, very tiny temporal variations in the values of fundamental constants such as the fine structure constant, which quantifies the strength of the electromagnetic interaction. Any such changes would point to physics beyond the Standard Model. Moreover, nuclear binding is stronger than its atomic counterpart, so the shifts between energy levels are higher in energy and would be resonant with higher-frequency lasers, making a smaller change detectable.

    This is a double-edged sword, however, as most nuclear transitions occur at much higher frequencies than can be produced by today’s lasers. Thorium-229, by contrast, has a metastable excited state around 8 eV above the ground state – a transition that lies in the vacuum ultraviolet.

    Suitable for excitation

    Kraemer explains that building a laser to excite this state should just about be possible: “Out of 3000 or so radionuclei we know today, thorium is the only one we know that has a state suitable for laser excitation”.

    First, however, researchers need to know the exact frequency of the transition. Indeed, the decay had long been predicted by theory, but attempts to detect the photon emitted had proved unsuccessful. In 2016, however, researchers at Ludwig Maximilian University of Munich indirectly confirmed its existence by measuring the emission of electrons in a process called internal conversion, in which the energy of the nuclear decay ionizes the atom.

    Now, Kraemer and colleagues have made the first direct detection of the emitted vacuum ultraviolet photons by studying excited thorium-229 ions. The underlying idea is not new, Kraemer says, but previously researchers have tried to do this by implanting uranium-233 into crystals, which can decay to the excited thorium-229. The problem, says Kraemer, is that this releases over 4 MeV of energy into the crystal, which “is good for killing cancer, but really bad for us” as it damages the crystal, interfering with its optical properties.

    In the new work therefore, the researchers used CERN’s ISOLDE facility to implant actinium-229 ions into magnesium fluoride and calcium fluoride crystals. These can decay to the metastable excited thorium-229 nucleus by β-decay, which releases four orders of magnitude less energy into the crystal. The researchers could therefore detect the photons and measure the transition energy. The final precision is still well short of the uncertainty needed to build a clock, and the researchers are now working with laser physicists to refine this.

    Kyle Beloy of the US National Institute of Standards and Technology is impressed by the measurement. “There is very significant potential for this thorium-229 system as a nuclear clock and even more so to do tests of fundamental physics eventually,” he says. “In this [work], they observe a photon as it is emitted from the excited state down to the ground state, and ultimately the goal of the community here is to do the reverse. The narrow band of frequencies that the nucleus will absorb is on the order of millihertz, whereas how well we know that is on the order of 10¹² Hz, so it’s like a needle in a haystack, and essentially what they’ve done is to reduce the size of the haystack by a factor of seven. That’s a big step forward for anyone searching to excite the transition.”

    The research is described in Nature.

    The post Photons from nuclear clock transition are seen at long last appeared first on Physics World.

    ]]>
    Research update Breakthrough could lead to new experiments in fundamental physics https://physicsworld.com/wp-content/uploads/2023/06/ISOLDE-at-CERN.jpg newsletter1
    Perovskite solar cells reach new milestones for stability and efficiency https://physicsworld.com/a/perovskite-solar-cells-reach-new-milestones-for-stability-and-efficiency/ Thu, 01 Jun 2023 16:00:00 +0000 https://physicsworld.com/?p=108205 Three recent results highlight how the technology just keeps getting better

    The post Perovskite solar cells reach new milestones for stability and efficiency appeared first on Physics World.

    ]]>
    It’s been a good couple of months for perovskite solar cells, with a trio of new results that could make it easier to commercialize these next-generation devices.

    The first result concerns perovskite-only solar photovoltaic (PV) cells. The initial promise of perovskite solar cells has long been impaired by the unstable nature of these crystalline materials, which are prone to surface defects that impede the flow of charge carriers (electrons and holes). Annoyingly, heat and moisture – both unavoidable in any practical solar-energy device – make this instability worse. Consequently, perovskite solar cells can lose around a third of their efficiency after just a few hundred hours’ exposure to sunlight.

    Last year, Stefaan de Wolf and colleagues at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia took an important step towards solving this problem by constructing a cell that incorporates both three-dimensional and two-dimensional perovskite crystals. This multidimensional cell retained 95% of its initial efficiency after 1000 hours of exposure to sunlight at a temperature of 85°C and a relative humidity of 85%.

    In the latest study, published in Joule, Kai Liu and colleagues at Fudan University in China and the University of Victoria in Canada went a little further. Their cell retained 98.6% of its initial efficiency after 1000 hours of operational tests, thanks to a chemical coating that forms covalent bonds with the organic components in perovskites. According to the University of Victoria spin-out firm behind the coating, XLYNX, these bonds make the perovskite more stable, thereby limiting losses of efficiency, stability and performance.

    Efficiency records tumble for tandem cells

    The second promising result is a new efficiency record for so-called “tandem” solar cells, which combine perovskites with standard silicon material. In mid-April, researchers at KAUST, also led by de Wolf, announced that they had produced an experimental tandem cell with a power conversion efficiency of 33.2%. This value surpasses the previous world record of 32.5%, which was set in late 2022 by Steven Albrecht and colleagues at Helmholtz-Zentrum Berlin.

    Though the latest KAUST result has not been published yet, the team say the record has been certified by the European Solar Test Installation (ESTI). The KAUST cell is also currently at the top of the US National Renewable Energy Laboratory’s (NREL) Best Research-cell Efficiency Chart, though it may not stay there for long, given the recent pattern of competing research groups leapfrogging each other’s achievements.

    The final new result is yet another efficiency record, this time in a commercial product rather than an experimental device. On 24 May, the UK-based firm Oxford PV reported that a tandem cell manufactured at its production line near Berlin, Germany, converted 28.6% of incident solar energy into electricity. This figure, which has been certified by experts at Fraunhofer ISE in Freiburg, Germany, is significantly higher than the 22-24% typical of commercial silicon cells, and 1.5% above Oxford PV’s own record for a production-line device. Onwards and upwards!

    The post Perovskite solar cells reach new milestones for stability and efficiency appeared first on Physics World.

    ]]>
    Blog Three recent results highlight how the technology just keeps getting better https://physicsworld.com/wp-content/uploads/2023/06/KAUST-tandem-solar-cell.jpeg
    Machine learning meets nanotechnology, award-winning implant regulates blood pressure https://physicsworld.com/a/machine-learning-meets-nanotechnology-award-winning-implant-regulates-blood-pressure/ Thu, 01 Jun 2023 14:45:25 +0000 https://physicsworld.com/?p=108212 This podcast features a computational scientist and a medical researcher

    The post Machine learning meets nanotechnology, award-winning implant regulates blood pressure appeared first on Physics World.

    ]]>
    This episode of the Physics World Weekly podcast features an interview with Amanda Barnard, who began her career as a theoretical physicist and now leads a multidisciplinary research group that applies computational science across a wide range of fields including nanotechnology, materials science, chemistry, and medicine.

    Barnard is also deputy director and computational science lead at the School of Computing at the Australian National University in Canberra. She talks about her interest in applying machine learning to a wide range of problems, and about the challenges and rewards of doing university administration. Barnard is editor-in-chief of the journal Nano Futures, and she talks about how this role enhances her understanding of the field.

    Also in this episode, medical researcher Jordan Squair talks about a new medical implant that could help regulate blood pressure in people with spinal-cord injuries. Squair, who is based at EPFL in Switzerland, tells Physics World’s Tami Freeman about how the device was created and how it was successfully tested on a human subject.

    Freeman also congratulates Squair on winning the BioInnovation Institute & Science Prize for Innovation for his development of the implant.

    This podcast is sponsored by iseg.

    The post Machine learning meets nanotechnology, award-winning implant regulates blood pressure appeared first on Physics World.

    ]]>
    Podcast This podcast features a computational scientist and a medical researcher https://physicsworld.com/wp-content/uploads/2023/06/Amanda-and-Jordan.jpg
    Palladium oxides could make better superconductors https://physicsworld.com/a/palladium-oxides-could-make-better-superconductors/ Thu, 01 Jun 2023 11:00:18 +0000 https://physicsworld.com/?p=108189 New calculations reveal that palladates remain superconducting at higher temperatures than cuprates or nickelates

    The post Palladium oxides could make better superconductors appeared first on Physics World.

    ]]>
    Palladates – oxide materials based on the element palladium – could be used to make superconductors that work at higher temperatures than cuprates (copper oxides) or nickelates (nickel oxides), according to calculations by researchers at the University of Hyogo, Japan, TU Wien and colleagues. The new study further identifies two such palladates as being “virtually optimal” in terms of two properties important for high-temperature superconductors: the correlation strength and the spatial fluctuations of the electrons in the material.

    Superconductors are materials that conduct electricity without resistance when cooled to below a certain transition temperature, Tc. The first superconductor to be discovered was solid mercury in 1911, but its transition temperature is only a few degrees above absolute zero, meaning that expensive liquid helium coolant is required to keep it in the superconducting phase. Several other “conventional” superconductors, as they are known, were discovered shortly afterwards, but all have similarly low values of Tc.

    Beginning in the late 1980s, however, a new class of “high-temperature” superconductors with Tc above the boiling point of liquid nitrogen (77 K) emerged. These “unconventional” superconductors are not metals but insulators containing copper oxides (cuprates), and their existence suggests that superconductivity may persist at even higher temperatures. Recently, researchers have identified materials based on nickel oxides as being good high-temperature superconductors in the same vein as their cuprate cousins.

    A major goal of this research is to find materials that remain superconducting even at room temperature. Such materials would greatly improve the efficiency of electrical generators and transmission lines, while also making common applications of superconductivity (including superconducting magnets in particle accelerators and medical devices like MRI scanners) simpler and cheaper.

    A fundamental unsolved problem

    The classical theory of superconductivity (known as the BCS theory after the initials of its discoverers, Bardeen, Cooper and Schrieffer) explains why mercury and many other metallic elements superconduct below their Tc: their fermionic electrons pair up to create bosons called Cooper pairs. These bosons form a phase-coherent condensate that can flow through the material as a supercurrent that does not experience scattering, and superconductivity appears as a result. The theory falls short, however, when it comes to explaining the mechanisms behind high-temperature superconductors. Indeed, unconventional superconductivity is a fundamental unsolved problem in condensed-matter physics.

    To better understand these materials, researchers need to know how the electrons of these 3d-transition metals are correlated and how strongly they interact with each other. Spatial fluctuation effects (which are enhanced by the fact that these oxides are typically made as two-dimensional or thin-film materials) are also important. While techniques such as Feynman diagrammatic perturbation theory can be used to describe such fluctuations, they fall short when it comes to capturing correlation effects like the metal-insulator (Mott) transition, which is one of the cornerstones of high-temperature superconductivity.

    This is where a model known as dynamical mean-field theory (DMFT) comes into its own. In the new work, researchers led by TU Wien solid-state physicist Karsten Held used so-called diagrammatic extensions to DMFT to study the superconducting behaviour of several palladate compounds.

    The calculations, which are detailed in Physical Review Letters, reveal that the interaction between electrons must be strong, but not too strong, to achieve high transition temperatures. Neither cuprates nor nickelates are close to this optimum, medium-type interaction, but palladates are. “Palladium is directly one line below nickel in the periodic table,” Held observes. “The properties are similar, but the electrons there are on average somewhat further away from the atomic nucleus and each other, so the electronic interaction is weaker.”

    The researchers found that while some palladates, notably RbSr2PdO3 and A′2PdO2Cl2 (A′=Ba0.5La0.5), are “virtually optimal”, others, such as NdPdO2, are too weakly correlated. “Our theoretical description of superconductivity has reached a new level,” Motoharu Kitatani of the University of Hyogo tells Physics World. “We are positive that our experimental colleagues will now try to synthesize these materials.”

    The post Palladium oxides could make better superconductors appeared first on Physics World.

    ]]>
    Research update New calculations reveal that palladates remain superconducting at higher temperatures than cuprates or nickelates https://physicsworld.com/wp-content/uploads/2023/05/e54_1.jpg newsletter1
    Brain–spine interface enables natural walking after spinal cord injury https://physicsworld.com/a/brain-spine-interface-enables-natural-walking-after-spinal-cord-injury/ Thu, 01 Jun 2023 08:30:09 +0000 https://physicsworld.com/?p=108192 A “digital bridge” between the brain and spinal cord helped an individual with paralysis to stand and walk naturally

    The post Brain–spine interface enables natural walking after spinal cord injury appeared first on Physics World.

    ]]>
    To initiate walking, the brain sends commands to neurons located in the lumbosacral spinal cord – the region of the spine that controls leg movement. If an injury interrupts this communication between brain and spinal cord, it can cause permanent paralysis.

    Researchers have now developed a brain–spine interface (BSI) that can restore this communication. They demonstrated that the device, described in Nature, could help an individual with paralysis of the arms and legs to stand and walk naturally.

    “What we have been able to do is re-establish the communication between the brain and the region of the spinal cord controlling movement, using a digital bridge,” the study’s co-lead author Grégoire Courtine, from Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, explained in a press briefing. “We captured the thoughts of [the participant] and translated these into stimulation of the spinal cord to induce leg movement.”

    The BSI comprises two fully implantable systems that record cortical activity and stimulate the lumbosacral spinal cord in real time. To monitor electrocorticographic (ECoG) signals from the brain, the team used a 64-channel electrode grid embedded in a 50 mm diameter titanium case with the same thickness as the skull.

    A processing unit uses ECoG signals recorded from brain regions that control movement to predict the user’s motor intentions, and then converts these intentions into stimulation commands that activate leg muscles. Electrical stimulation is delivered to the targeted region using an implantable pulse generator connected to a 16-electrode paddle lead. The whole system operates wirelessly, allowing the user to move around independently.
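
    Conceptually, the device runs a continuous decode-and-stimulate loop: extract features from the latest window of ECoG data, infer the intended movement, and translate that intention into a stimulation programme for the pulse generator. The toy sketch below only illustrates that loop – the decoder, the function names and the stimulation parameters are all invented for illustration and are not taken from the WIMAGINE system.

        import numpy as np

        def decode_intention(ecog_window, weights):
            """Toy linear decoder: project flattened ECoG features onto one set of
            weights per movement class and return the most likely intention."""
            scores = weights @ ecog_window.ravel()
            return int(np.argmax(scores))

        def intention_to_stimulation(intention):
            """Map a decoded intention to an illustrative stimulation programme."""
            table = {
                0: {"target": "rest", "amplitude_mA": 0.0},
                1: {"target": "hip_flexors", "amplitude_mA": 3.5},
                2: {"target": "ankle_extensors", "amplitude_mA": 4.0},
            }
            return table[intention]

        # one pass of the loop on simulated data: 64 channels x 100 samples
        rng = np.random.default_rng(0)
        ecog = rng.standard_normal((64, 100))
        weights = rng.standard_normal((3, 64 * 100))
        print(intention_to_stimulation(decode_intention(ecog, weights)))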

    “We developed brain–machine interface technology based on unique implantable medical devices, named WIMAGINE, that are able to record the brain activity at the surface of the cortex,” says co-lead author Guillaume Charvet, head of the BCI programme at CEA-Leti’s Clinatec, in France. “We also developed a dedicated algorithm based on artificial intelligence methods able to decode in real time the intention of movement of the patient.”

    Clinical trial

    To test the BSI, the researchers recruited a 38-year-old male who had an incomplete cervical spinal cord injury from a bike accident 10 years earlier. He had previously participated in the STIMO trial, which involved targeted epidural electrical stimulation of the spinal cord. This enabled him to regain the ability to step with the help of a front-wheel walker. However, after three years of regular training with stimulation only, he had reached a plateau of recovery, motivating him to enrol in this latest study – STIMO-BSI.

    Jocelyne Bloch, co-lead author and a functional neurosurgeon at Lausanne University Hospital, surgically implanted two recording devices on the participant’s brain (on regions of the cerebral cortex that respond to the intention to move the left and right lower limbs) and the paddle lead on his lumbar spinal cord.

    The researchers first calibrated the BSI to select features of ECoG signals linked to the intention to move, and to configure stimulation programmes that modulate specific groups of lower limb muscles. They then used a multilinear algorithm that linked ECoG signals to the control of stimulation parameters. In just a few minutes, the algorithm calibrated a BSI that enabled the participant to control hip deflection.

    BSI training at Lausanne University Hospital

    To support walking with crutches, the team selected stimulation programmes that targeted muscles associated with weight acceptance, propulsion and swing functions. After several minutes of training with the BSI, the participant was able to walk naturally and independently. When the BSI was turned off, he instantly lost the ability to take steps; walking resumed as soon as it was turned back on.

    The researchers note that after the original STIMO trial, the participant regained basic walking ability during stimulation and partial mobility without stimulation. However, he had difficulty transitioning from standing to walking and stopping, and could only walk over flat surfaces. Using the BSI enabled him to climb up and down a steep ramp with ease, climb stairs, negotiate obstacles and traverse changing terrains, all using the same BSI configuration. The BSI remained reliable and stable for over one year of use, including at home without supervision.

    Functional recovery

    After completing 40 sessions of neurorehabilitation – walking with BSI, single-joint movements with BSI, balance with BSI and standard physiotherapy – the participant was able to walk with crutches even when the implant was switched off, and he exhibited improvements in all conventional clinical assessments.

    These improvements without stimulation translated into a meaningful increase in his quality-of-life, such as walking independently around the house, getting in and out of a car, or sharing a beer standing at a bar with friends.

    “My wish was to walk again and I believed it was possible. I tried many things before and now I have to learn how to walk naturally again,” the participant, Gert-Jan, reported in the press briefing. “I can walk at least 100 or 200 metres, depending on the day, and I can stand for two or three minutes unsupported.”

    When asked to compare the BSI to the spinal-cord stimulation in the STIMO trial, he explained that stimulation alone didn’t feel completely natural. “The stimulation before was controlling me; now I am controlling the stimulation by my thoughts, that’s the big difference,” he said.

    The post Brain–spine interface enables natural walking after spinal cord injury appeared first on Physics World.

    ]]>
    Research update A “digital bridge” between the brain and spinal cord helped an individual with paralysis to stand and walk naturally https://physicsworld.com/wp-content/uploads/2023/06/Walking-with-digital-bridge.jpg newsletter1
    Surface plasmon polaritons launched by nano-emitters are imaged in the near field https://physicsworld.com/a/surface-plasmon-polaritons-launched-by-nano-emitters-are-imaged-in-the-near-field/ Wed, 31 May 2023 15:28:24 +0000 https://physicsworld.com/?p=108168 Tip-enhanced nanospectroscopy reveals quasiparticle standing waves

    The post Surface plasmon polaritons launched by nano-emitters are imaged in the near field appeared first on Physics World.

    ]]>
    Light emitters made from 2D and quasi-2D materials are currently of great interest in nano-optoelectronics because their lack of dielectric screening means that their electron–hole pairs (excitons) are incredibly sensitive to their environment. This is advantageous for making devices such as highly responsive photosensors and electrochemical sensors.

    When deposited directly onto the surface of a metal in a metal/dielectric substrate, the light emitted by these quasi-2D materials or “nano-emitters” can generate surface plasmon polaritons (SPPs). These are light–matter quasiparticles that exist at a metal/dielectric interface and propagate along it as a wave. An SPP is an electromagnetic wave (polariton) in the dielectric that is coupled to an oscillation of electric charge on the surface of the metal (surface plasmon). As a result, SPPs have properties that are similar to both matter and light.

    The electromagnetic field of an SPP is confined to the near field. This means that it exists only at the metal/dielectric interface, with its intensity decaying exponentially with increasing distance into each medium. This results in a large enhancement of the electric field, making SPPs incredibly sensitive to their environment. What is more, near-field light can be manipulated at sub-wavelength length scales.

    Until now, SPP/nano-emitter systems have been studied extensively in the optical far field, but the imaging techniques used are diffraction-limited and important sub-wavelength mechanisms cannot be visualized. In a new study described in Nature Communications, researchers in the US have used tip-enhanced nanospectroscopy to study SPPs launched by nano-emitters in the near field. This allowed the team to visualize the spatial and spectral properties of the propagating SPPs. Indeed, their research could lead to exciting new practical plasmonic devices.

    Bigger is not always better

    In recent years, research into photonic devices and their integration into circuits has been of great interest in industry and academia. This is because compared to purely electronic devices, photonic devices can achieve higher energy efficiencies and faster operating speeds.

    However, there are two big challenges that must be overcome before photonics overtakes electronics in mainstream applications. One is that purely photonic devices are difficult to connect together to form larger circuits; and the other is that the size of photonic devices cannot be made smaller than about half the wavelength of the light they process. The latter limits device sizes to about 500 nm, which is much larger than modern transistors.

    Both of these problems can be solved by creating devices that operate using SPPs, rather than conventional light. This is because the light-like properties of SPPs allow for extremely fast device operation, whereas the matter-like properties of SPPs allow for easier integration into circuits and operation below the diffraction limit.

    However, in order to design practical nano-electronics, a better understanding of the sub-wavelength behaviour of SPPs is needed. Now, Kiyoung Jo, a PhD student at the University of Pennsylvania, and colleagues have studied SPPs using tip-enhanced nanospectroscopy. This technique couples a far-field spectrometer with an atomic force microscope (AFM).

    SPP standing wave

    The gold-coated AFM tip scatters light in the near field, which allows the SPPs to be spatially and spectrally imaged using the spectrometer. The sample was fabricated by spin-coating a solution of quasi-2D nanoplatelets (nanometre-scale flakes of the light emitter CdSe/CdxZn1-xS) onto a gold substrate and then depositing an aluminium oxide dielectric on top using atomic layer deposition.

    The nanoplatelets were excited using a laser and their subsequent light emission launched SPPs that propagated along the gold/aluminium oxide interface. The researchers observed that the SPPs could propagate up to hundreds of microns and could also be reflected by the gold tip back along their original path. In the case of reflections, the incident and reflected SPPs interfered with one another, forming a standing wave between the tip and the nanoplatelet (see figure: “Quasiparticle reflections”). Experimentally, these were observed as parabolic-shaped fringes.

    As the distance between the tip and the nanoplatelet was increased, the researchers found that the electric-field intensity varied periodically. This confirmed the presence of a standing wave and demonstrated how the nanoplatelet and tip act as a kind of cavity. Computer simulations showed, however, that, although both tip and nanoplatelet are required to observe fringes, the electromagnetic field generated by the SPPs is present with only one, confirming that both are able to launch SPPs.
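
    That periodic variation is what a simple two-wave interference picture predicts: the directly launched SPP and its tip-reflected counterpart differ by a round-trip phase of 2·k_SPP·d, so the detected intensity oscillates with tip–emitter separation d at a period of half the SPP wavelength. The sketch below illustrates the textbook model only – the wavelength, reflection amplitude and phase are placeholder values, not parameters from the study.

        import numpy as np

        # Illustrative two-wave interference model for the tip-emitter "cavity";
        # all numbers are placeholders, not fitted values from the paper.
        wavelength_spp = 0.60e-6                 # assumed SPP wavelength (m)
        k_spp = 2 * np.pi / wavelength_spp
        reflection = 0.5                         # assumed tip reflection amplitude
        phase_offset = 0.0                       # assumed reflection phase

        d = np.linspace(0, 3e-6, 600)            # tip-emitter separation (m)
        field = 1 + reflection * np.exp(1j * (2 * k_spp * d + phase_offset))
        intensity = np.abs(field) ** 2           # oscillates with period wavelength_spp / 2

        print(wavelength_spp / 2)                # expected fringe period: 0.3 micrometres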

    The researchers also investigated the effect of the sample properties on the SPP emission. For example, they found that fringes only occurred when the nanoplatelets were “edge-up” (perpendicular to the plane of the substrate), and the excitation laser was polarized such that its magnetic field was perpendicular to the plane of incidence (TM polarization). As a result the polarization of the excitation laser can be used as a “switch” to easily turn the SPPs on and off, which is an important feature for opto-electronic devices. The team also found that the shape of the fringes could be used to determine the dipole orientation of the nano-emitter, with the parabolic shape suggesting a slight incline (circular fringes would indicate an angle of exactly 90° to the plane of the substrate).

    Thickness also played an important role in the properties of the SPPs, with thicker nanoplatelets yielding stronger electric fields and thicker dielectrics resulting in longer SPP propagation distances. Studies using different dielectric materials (titanium dioxide and monolayer tungsten diselenide) indicated that, due to increased electric-field confinement, a larger dielectric permittivity also resulted in longer propagation distances. This is important to know, as the propagation distance directly correlates with energy transfer by the SPPs. As Jo summarizes: “We find, visualize and characterize the sub-wavelength-scale energy flow via SPPs in the vicinity of individual nanoscale emitters.”

    The team has shown that tip-enhanced nanospectroscopy is a powerful tool for the study of the near-field in SPP systems, allowing various properties, such as dipole orientation and implications of sample design, to be determined. “The ability to image and examine sub-wavelength photonic phenomena in excitonic semiconductors makes [near-field scanning optical microscopy] a valuable tool for fundamental studies as well as semiconductor characterization,” says Deep Jariwala, who is corresponding author on the paper describing the work. Such an enhanced understanding of SPP systems will be invaluable in the development of practical nano-optoelectronic devices.

    The post Surface plasmon polaritons launched by nano-emitters are imaged in the near field appeared first on Physics World.

    ]]>
    Research update Tip-enhanced nanospectroscopy reveals quasiparticle standing waves https://physicsworld.com/wp-content/uploads/2023/05/SPP-experimental-setup.jpg
    Silicon photomultipliers: gearing up for applications in gamma-ray astronomy https://physicsworld.com/a/silicon-photomultipliers-gearing-up-for-applications-in-gamma-ray-astronomy/ Wed, 31 May 2023 13:13:15 +0000 https://physicsworld.com/?p=108105 Silicon photomultipliers will provide a core enabling technology in the Cherenkov Telescope Array, the world’s largest and most sensitive gamma-ray observatory

    The post Silicon photomultipliers: gearing up for applications in gamma-ray astronomy appeared first on Physics World.

    ]]>
    Hamamatsu Photonics, a Japanese optoelectronics manufacturer that operates across diverse industrial, scientific and medical markets, is evaluating cutting-edge opportunities in high-energy physics for its silicon photomultiplier (SiPM) technology portfolio. Near term, that means the focus is on emerging applications in astroparticle physics and gamma-ray astronomy, while further down the line there’s the promise of at-scale SiPM deployment within particle accelerator facilities like CERN, KEK and Fermilab to probe new physics beyond the Standard Model.

    What of the basics? The SiPM – also known as a Multi-Pixel Photon Counter (MPPC) – is a solid-state photomultiplier comprised of a high-density matrix of avalanche photodiodes operating in Geiger mode (such that a single electron–hole pair generated by absorption of a photon can trigger a strong “avalanche” effect). In this way, the technology provides the basis of an optical sensing platform that’s ideally suited to single-photon counting and other ultralow-light applications at wavelengths ranging from the vacuum-ultraviolet through the visible to the near-infrared.

    Hamamatsu, for its part, currently supplies commercial SiPM solutions into a range of established and emerging applications spanning academic research (e.g. quantum computing and quantum communication experiments); nuclear medicine (e.g. positron emission tomography); hygiene monitoring in food production facilities; as well as light detection and ranging (LiDAR) systems for autonomous vehicles. Other customers include instrumentation OEMs specializing in areas such as fluorescence microscopy and scanning laser ophthalmoscopy. Taken together, what underpins these diverse use-cases is the SiPM’s unique specification sheet, combining high photon detection efficiency (PDE) with ruggedness, resistance to excess light and immunity to magnetic fields.
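    For readers curious how a dense matrix of Geiger-mode cells behaves as a photon counter, the sketch below uses the standard textbook saturation relation for a SiPM – each microcell can fire at most once per light pulse, so the response is linear at low light and saturates towards the total cell count. This is a generic approximation, not a Hamamatsu formula, and the cell count and PDE values are assumed for illustration.

        import numpy as np

        # Generic SiPM response approximation (not from a Hamamatsu datasheet):
        # expected number of fired microcells versus incident photons.
        n_cells = 3600       # assumed number of microcells in the device
        pde = 0.4            # assumed photon detection efficiency

        def fired_cells(n_photons):
            """Expected fired cells: linear at low light, saturating at high light."""
            return n_cells * (1.0 - np.exp(-n_photons * pde / n_cells))

        for n in (10, 100, 1000, 10000):
            print(n, round(fired_cells(n)))
        # ~4, ~40, ~380 and ~2400 fired cells respectively: single-photon
        # sensitivity at one end, saturation towards n_cells at the other.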

    Gamma-ray insights

    Evidently, those same characteristics are well-matched to the technical requirements of the next generation of detectors for astroparticle physics (the study of elementary particles of cosmic origin and their relation to astrophysics and cosmology). A case in point is the Cherenkov Telescope Array (CTA) Observatory, an ambitious international research initiative that’s in the process of building the world’s largest and most sensitive high-energy gamma-ray observatory, comprising 64 telescopes of different sizes to cover a broad gamma-ray energy range (from 20 GeV to 300 TeV). The telescopes will populate two arrays – one site located in the Canary Islands, Spain; the other in Chile – to cover both the northern and southern hemispheres.

    Mauro Bombonati

    By way of context, when gamma rays reach the Earth’s atmosphere, they interact with its outer layers to produce cascades of subatomic particles known as “air showers” or “particle showers.” These ultrahigh-energy particles can travel faster than light in the air, creating a blue flash of Cherenkov light (like the sonic boom created by an aircraft exceeding the speed of sound).

    While spread over a large area (typically 250 m in diameter), the Cherenkov light lasts for only a few nanoseconds – just long enough to be tracked by the mirrors of the CTA’s telescopes and detected by the high-speed cameras positioned at their foci. As such, the CTA will ultimately enable astronomers to investigate the parent gamma rays and their cosmic origins.
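    The “faster than light in the air” condition sets a threshold energy for Cherenkov emission, which a one-line calculation illustrates. The refractive index below is an assumed sea-level value (the CTA sites are at altitude, where the index is smaller and the threshold correspondingly higher), so treat the number as indicative only.

        import math

        # Cherenkov threshold for an electron in air: it must exceed the local
        # phase velocity of light, v > c/n. Assumed sea-level refractive index.
        n = 1.0003
        m_e_c2 = 0.511                                   # electron rest energy, MeV

        gamma_th = 1.0 / math.sqrt(1.0 - 1.0 / n**2)     # minimum Lorentz factor
        print(f"threshold energy ~ {gamma_th * m_e_c2:.0f} MeV")   # roughly 21 MeV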

    “In terms of ongoing product development and innovation, we are interested in how the SiPM platform can be used for atmospheric detection of Cherenkov light,” explains Mauro Bombonati, senior sales engineer at Hamamatsu Photonics’ Italian division in Milan. “We see the CTA initiative as an ideal proving ground for advanced SiPM detectors and, by extension, a stepping-stone for future deployment of SiPM technology in large-scale accelerator facilities – for example, to support neutrino experiments and the search for dark matter.”

    Blue-sky collaboration

    With this in mind, Hamamatsu’s R&D team has collaborated closely with the Italian National Institute of Astrophysics (INAF) in the context of the ASTRI project, an international consortium that’s in the process of building nine dual-mirror telescopes (4 m in diameter) for atmospheric Cherenkov astronomy. As a preferred technology partner, Hamamatsu handled the design, development and optimization of ad hoc SiPM modules used to populate the compact Cherenkov cameras of the ASTRI telescopes. The resulting ASTRI mini-array is currently being installed at the Teide Observatory (Tenerife, Canary Islands) and represents a “pathfinder” for the CTA’s sub-array of 37 small-sized telescopes (SSTs) that will be installed at Paranal (Chile).

    Upon completion, the CTA will further comprise 23 medium-sized telescopes (MSTs) – each at 12 m diameter and distributed over both array sites – as well as four large-sized telescopes (LSTs) at 23 m diameter. Operationally, the LST and MST camera systems will exploit photomultiplier tubes; the SST cameras, in contrast, will use SiPMs to convert Cherenkov light into electrical data for high-speed readout and analysis.

    It’s also worth noting that INAF, along with other CTA project teams, is pursuing variations on the SST theme, with slight modifications to the geometry and design of the SST telescopes to best meet the CTA’s technical requirements. Within Hamamatsu, too, the device-level R&D effort is ongoing – specifically, improving SiPM PDE in the near-UV (200–400 nm), where the Cherenkov light intensity peaks.

    the focal plane of an ASTRI telescope with SiPM detector array

    “We’re improving the wafer fabrication process to reduce the number of lattice defects in the photoelectric conversion layer,” notes Bombonati. The goal is increased carrier lifetime and greater numbers of carriers reaching the avalanche layer. “To date,” he adds, “Hamamatsu engineers have demonstrated a 16% enhancement in the detector sensitivity at 350 nm.”

    Another focus of Hamamatsu’s R&D involves pile-up suppression in SiPM detectors – i.e. to make the rising edge of the signal waveform sharper by adjusting the quenching resistor and reducing terminal capacitance. In this way, a lower trigger threshold can be used to separate Cherenkov “events” from noise, such that lower-energy events can be observed as standard.

    Equally significant is the exploitation of through-silicon-via (TSV) technology, which is essentially a vertical electrical connection that passes completely through a silicon wafer to maximize the active area for photon detection while simultaneously minimizing dead space (thereby enhancing PDE while also lowering crosstalk between SiPM pixels).

    Competitive intelligence

    Strategically, Hamamatsu maintains a watching brief on the wider landscape in high-energy physics to ensure a customer-driven frame of reference for its in-house innovation programme. A case in point is the company’s “observer status” within CERN’s European Committee for Future Accelerators (ECFA), an initiative that underpins community-wide development of long-term R&D roadmaps for accelerator and detector technologies.

    “Engagement with the ECFA helps us to prioritize emerging technology trends and user requirements for SiPM in astroparticle physics and accelerator-based science,” concludes Bombonati. “At the same time, developing SiPM solutions for frontier research in high-energy physics also yields paybacks elsewhere – not least in terms of enhanced capability and competitive differentiation for our more established industrial applications.”

    The post Silicon photomultipliers: gearing up for applications in gamma-ray astronomy appeared first on Physics World.

    ]]>
    Analysis Silicon photomultipliers will provide a core enabling technology in the Cherenkov Telescope Array, the world’s largest and most sensitive gamma-ray observatory https://physicsworld.com/wp-content/uploads/2023/05/ASTRI-telescope.jpg newsletter
    Cannabis breath-test research goes up in smoke https://physicsworld.com/a/cannabis-breath-test-research-goes-up-in-smoke/ Wed, 31 May 2023 09:58:49 +0000 https://physicsworld.com/?p=108149 Using breath samples to determine whether drivers have consumed too much marijuana remains a pipe dream – for now

    The post Cannabis breath-test research goes up in smoke appeared first on Physics World.

    ]]>
    Roadside breath tests are a staple of policing. Whenever officers suspect drivers of being drunk, they ask them to blow into a tube. This tube leads to a handheld device popularly known as a Breathalyzer that analyses the breath sample and outputs an estimate of the driver’s blood-alcohol level. Though not infallible, Breathalyzers are quick and accurate enough to help get drunks off the road before they harm themselves and others.

    But what if the driver hasn’t been drinking? What if, instead, they’ve been smoking some fine, fine weed?

    Like alcohol, cannabis is legal in many jurisdictions. Like alcohol, it can render users unfit to drive for several hours, long after their last dance with Mary Jane is but a hazy, munchie-filled memory. So, is there a Breathalyzer for cannabis?

    The answer, so far, is no – but not for lack of trying. The latest effort comes from researchers at the US National Institute of Standards and Technology (NIST) and the University of Colorado at Boulder. Led by Tara Lovestead and Kavita M Jeerage of NIST’s applied chemical and materials division, the team set out to measure the amount of tetrahydrocannabinol (THC, the active ingredient in cannabis) in users’ breath, and to monitor how it changes over time.

    Barriers to a cannabis breath test

    Such studies are challenging for three reasons. One is that, unlike alcohol, relatively little THC shows up directly in a user’s breath. Instead, a cannabis Breathalyzer – let’s call it a Reefalyzer – would have to detect tiny amounts of THC in particles that form within the lungs and are then exhaled.

    A further challenge is that THC can persist in the bodies of habitual users for weeks after any high has worn off. This means that a yes/no answer isn’t good enough: a practical Reefalyzer would have to distinguish between intoxicating and non-intoxicating levels of THC.

    Finally, although Colorado is one of several US states to allow marijuana use for recreational as well as medical purposes, the drug remains illegal at a federal level. As a result, the federally-employed NIST researchers could not handle the drug they were trying to study.

    A “federally compliant mobile laboratory”

    In a paper published in the Journal of Breath Research, Lovestead and colleagues outline the ingenious way they overcame one of these challenges. To collect breath samples from cannabis users in a controlled way, the team developed a “federally compliant mobile laboratory” that met users at their place of residence. There, the researchers collected breath and blood samples before and after users returned to their homes to smoke high-THC cannabis from a local dispensary. Finally, the researchers used laboratory instruments to measure the amount of THC in the users’ breath.

    So far, so good. From a Reefalyzer perspective, though, the results were disappointing. “We expected to see higher THC concentrations in the breath samples collected an hour after people used,” Lovestead told the NIST press office. In fact, the researchers found that pre-use and post-use THC levels spanned a similar range. “In many cases, we would not have been able to tell whether the person smoked within the last hour based on the concentration of THC in their breath,” she concluded.

    The team identified a few possible avenues for future experiments. One possibility would be to measure the flow rate of breath samples, to help identify outliers and investigate whether flow plays a role in aerosol capture. Another would be to perform tests on THC-spiked aerosols generated in a laboratory, rather than relying solely on human subjects.

    The bottom line, though, is that the researchers say their results “do not support the idea that detecting THC in breath as a single measurement could reliably indicate recent cannabis use”. So if you’re waiting for Reefalyzers to appear alongside Breathalyzers in your favourite TV cop show – well, don’t hold your breath.

    The post Cannabis breath-test research goes up in smoke appeared first on Physics World.

    ]]>
    Blog Using breath samples to determine whether drivers have consumed too much marijuana remains a pipe dream – for now https://physicsworld.com/wp-content/uploads/2023/05/Cinnamon_Bidwell_Research_Lab_0047PC_cr_web.jpg newsletter
    Award winning studies focus on reducing radiotherapy risks https://physicsworld.com/a/award-winning-studies-focus-on-reducing-radiotherapy-risks/ Wed, 31 May 2023 08:41:00 +0000 https://physicsworld.com/?p=108141 Studies on toxicity modelling, HDR brachytherapy and AI-based lung disease screening were among those chosen as the Best Papers at the ESTRO 2023 congress

    The post Award winning studies focus on reducing radiotherapy risks appeared first on Physics World.

    ]]>
    ESTRO 2023, the annual congress of the European Society for Radiotherapy and Oncology, featured an extensive scientific programme spanning six key themes: physics, brachytherapy, clinical, interdisciplinary, radiobiology and RTT (radiation therapists). For each of these tracks, one submitted abstract was chosen as the “Best Paper” in its class, with the winners presenting their research in a dedicated “Highlights of Proffered Papers” plenary session.

    Toxicity modelling

    In the physics track, the Best Paper Award went to Tiziana Rancati from the National Cancer Institute of Milan, for a study examining models of late toxicity after prostate cancer radiotherapy. In particular, Rancati introduced a model for normal tissue complication probability (NTCP) based on Cox regression (a method for predicting the time to an event using several variables): the Cox-NTCP model.

    “The specific purpose of this analysis was to propose a Cox-NTCP model for late toxicity after prostate cancer radiotherapy, including genetic information from a polygenic risk score incorporating SNP–SNP interactions,” Rancati explained. To develop their model, Rancati and colleagues worked within the REQUITE and RADPrecise projects, multi-centre studies of cancer patients that aimed to validate predictive models and biomarkers to reduce radiotherapy side effects.

    Their analysis considered four late-toxicity endpoints: grade 1+ and 2+ rectal bleeding; grade 2+ late urinary frequency; and grade 1+ late haematuria (blood in the urine). For dosimetry, they investigated equivalent uniform dose (EUD) values calculated from dose–volume histograms (DVHs) and dose–surface histograms (DSHs).

    Using the two-year REQUITE follow-up data, Rancati and colleagues developed an interaction-aware polygenic risk score. They started with 43 SNPs (single nucleotide polymorphisms, the most common type of genetic variation among people) known to be associated with late toxicity, validated 13 that worked within REQUITE, and used data mining to find SNP–SNP combinations associated with either increased or decreased risk of toxicity. They then weighted the risk score and protective risk score to create a polygenic risk score with interactions (PRSi).

    The analysis included 1482 patients, the majority of whom received volumetric modulated arc therapy (VMAT) with conventional fractionation. Patient follow-up ranged from one to eight years, with a median of two years. “With such heterogeneity in follow-up, we shifted from static NTCP models to actuarial NTCP models based on Cox regression,” Rancati explained. “This takes into account the maximum follow-up time of each patient and the time of any toxicity.”
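    In coding terms, an actuarial NTCP model of this kind is a survival-analysis fit in which each patient’s follow-up time and event time enter directly. The sketch below is a hedged illustration only: the toy data, the column names and the choice of the lifelines library are assumptions of mine, not the study’s actual variables or software.

        import pandas as pd
        from lifelines import CoxPHFitter

        # Toy illustration of a Cox-based (actuarial) NTCP fit. Columns: follow-up
        # time in years, whether the toxicity endpoint occurred, a dosimetric
        # predictor (EUD) and a polygenic risk score with interactions (PRSi).
        # All names and values are invented for illustration.
        df = pd.DataFrame({
            "followup_years": [2.0, 3.5, 1.2, 5.0, 4.1, 2.7, 6.0, 3.0],
            "toxicity":       [1,   0,   1,   0,   1,   0,   0,   1],
            "eud_gy":         [62., 55., 68., 50., 64., 58., 66., 56.],
            "prsi":           [1.0, -1.0, 0.5, -0.5, 1.0, 0.0, 0.5, -0.5],
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="followup_years", event_col="toxicity")
        cph.print_summary()   # hazard ratios for the dose and genetic covariates

        # An actuarial NTCP curve then follows from the fitted survival function:
        # NTCP(t) = 1 - S(t | covariates), evaluated at, say, three or five years.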

    Rancati shared some results from the study. For grade 2+ urinary frequency, for example, the EUD to the whole bladder (calculated from the DSH) was the best dosimetric predictor of long-term toxicity. Cox-NTCP curves of toxicity versus bladder-surface EUD showed that toxicity was most likely for radiosensitive patients with PRSi scores of 1, and lowest for radioresistant patients with scores of -1. She noted that the curves differed between three- and five-year follow-up, emphasizing the importance of including time in NTCP models, while the spread with PRSi score shows the importance of the genetic risk factors.

    Results for haematuria were similar, but with EUD to the bladder neck appearing more important than dose to the whole bladder. For rectal bleeding, the best dosimetric descriptor was rectal EUD calculated from the DVH. Rancati noted that in this case, the PRSi score was still associated but less discriminative than seen with other toxicities, with a shallower dose–response curve.

    “We showed the benefit of adding a polygenic risk score with interactions to Cox-NTCP prediction models,” Rancati concluded. “These models allow both patient-specific tailoring of the prediction and accounting for the follow-up time. Dose to organs- or sub-organs-at-risk modulates the risk of toxicity.”

    Improving quality-of-life

    The Best Paper Award in the brachytherapy track went to Vivek Anand from the Hinduja Hospital and Medical Research Centre in Mumbai, India. Anand presented a study comparing quality-of-life for patients with tongue cancer after treatment with external-beam radiotherapy (EBRT), or EBRT plus high-dose rate (HDR) brachytherapy.

    Vivek Anand

    Anand explained that adjuvant radiotherapy for treatment of tongue cancers is known to reduce the patient’s quality-of-life. HDR brachytherapy, however, can deliver a high dose of radiation to the tumour while sparing adjacent normal tissues. “This modality reduces morbidity without compromising on outcomes,” he said.

    The study included 63 oral tongue cancer patients who had undergone surgery followed by adjuvant radiotherapy, using either EBRT alone or EBRT plus brachytherapy. EBRT was delivered to the neck nodes and whole tongue, with a higher dose boost delivered to the tumour bed and positive nodes. In the second group, patients received EBRT to the neck nodes, with a higher dose to positive nodes, plus six days of HDR brachytherapy to the primary tumour bed. Patients with cancerous nodes also had weekly concurrent chemotherapy.

    To compare functional outcomes in the two groups, the researchers used a questionnaire – the EORTC Quality of Life Head and Neck Module – which asks patients to rate dozens of factors including, for example, pain in the mouth and jaw, problems swallowing, loose teeth, speech problems, dry mouth, skin problems, and weight loss or gain. They also examined overall survival in both groups.

    The researchers found that the overall treatment time was slightly increased in the EBRT plus brachytherapy group, from 43.6 to 51.1 days. However, there was no difference in overall survival between the two groups.

    Of the 63 patients, 24 in the EBRT group and 18 in the EBRT plus brachytherapy group completed the questionnaire, at median follow-ups of 37 and 35 months, respectively. “All symptom scales showed that brachytherapy was better,” said Anand. “Clinically and symptomatically, they were worse in the EBRT group. The only statistically significant parameter was weight loss, which proves that brachytherapy had little problems when used for the oral cavity.”

    Anand concluded that delivering radiation dose by brachytherapy to the oral tongue improves the patient’s quality-of-life, noting that the increased treatment time in the EBRT plus brachytherapy group did not result in decreased outcomes. “Larger numbers of patients and longer follow-up are warranted to study if we can make this one of the standard-of-care treatments,” he said.

    Screening for lung disease

    Andrew Hope from Princess Margaret Cancer Centre and the University of Toronto was the winner of the Best Paper Award in the interdisciplinary track, for his study on AI-based screening for interstitial lung disease (ILD). ILD poses a big challenge for oncology, Hope explained. It predisposes patients to lung cancer, but also increases the risks of cancer treatment. For radiotherapy, ILD increases the risk of radiation pneumonitis and even death.

    Andrew Hope

    ILD is traditionally diagnosed before radiation treatment using the patient’s diagnostic imaging scans or by noting clinical symptoms such as shortness of breath. But in some cases, patients may progress onto radiotherapy with undetected ILD, increasing the risk of radiation-related complications.

    “However, there is an additional image that is available – the treatment planning image,” said Hope. “Routinely, this is not diagnostically reviewed or assessed. So we thought that there might be an opportunity to explore this image in a more diagnostic fashion.”

    To automatically identify patients with ILD during radiotherapy planning, Hope and colleagues developed a machine learning pipeline called the MIRA clinical learning environment (MIRACLE). The MIRACLE-ILD system uses convolutional neural networks (CNNs) to identify ILD from a planning image, including a 2D U-net to perform the lung contouring and a 3D CNN for classification.

    Following initial training of the MIRACLE-ILD on diagnostic CTs (which did not work well due to differences between diagnostic and planning scans), the researchers retrained the model using transfer learning with a radiotherapy-specific data set. They chose to threshold the model to provide 65–75% sensitivity to ILD at the cost of a 15–20% false positive rate.

    To verify the clinical performance of MIRACLE-ILD, the team first deployed the model in “silent mode”, with no notifications sent to treating physicians. This study included 180 patients, nine of whom had ILD. MIRACLE-ILD correctly identified six of these cases, with a reasonable accuracy (86%), sensitivity (67%) and specificity (87%). “MIRACLE would have detected two of the four patients that were unknown to have ILD by the treating team at the time,” Hope noted.
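    To see how those silent-mode figures hang together, the short calculation below takes only the patient total, the number of ILD cases and the number detected from the text; the false-positive count is back-calculated from the quoted specificity, so it is an inference rather than a reported number.

        # Relating the quoted silent-mode metrics to a confusion matrix.
        total, n_ild, n_detected = 180, 9, 6
        tp, fn = n_detected, n_ild - n_detected        # 6 true positives, 3 missed
        n_healthy = total - n_ild                      # 171 patients without ILD

        sensitivity = tp / n_ild                       # 6/9 ~ 0.67 (quoted 67%)

        # Implied true negatives from the quoted ~87% specificity (not reported directly)
        tn = round(0.87 * n_healthy)                   # ~149
        fp = n_healthy - tn                            # ~22 false alarms (implied)

        accuracy = (tp + tn) / total                   # ~0.86 (quoted 86%)
        specificity = tn / n_healthy                   # ~0.87 by construction
        print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, accuracy={accuracy:.2f}")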

    In May 2022, the team moved on to the live phase, in which any positive cases were flagged to the physicians. This study included 254 patients, 13 of whom had ILD, and used the same model and threshold as before. MIRACLE-ILD flagged 42 patients as ILD-positive, with good accuracy (84%) and specificity (85%), but slightly lower sensitivity (54%) than previously. Here, there were seven unknown ILD cases and the system found three of them prior to treatment.

    “In total, we had 434 patients of which 22 had ILD. The overall performance of the model was quite reasonable, with an accuracy of 85% and a specificity of 86%. We detected five of 11 unknown ILD cases in this cohort,” said Hope. “We feel this represents a validated prospective way to screen radiotherapy plans for the possibility of a patient having ILD.”

    He pointed out that the system remains live at the Princess Margaret Cancer Centre and is used to screen every patient who receives thoracic radiotherapy.

    The rest of the best

    Clinical Best Paper: Molecular classification of endometrial cancer is predictive of response to adjuvant radiotherapy – Nanda Horeweg, Leiden University Medical Center

    Radiobiology Best Paper: Hypoxic tumour cells drive tumour relapse after radiotherapy as revealed by a novel tracing tool – Apostolos Menegakis, Netherlands Cancer Institute

    RTT Best Paper: Randomized trial of person-centered versus standard RTT care for breast cancer patients (NCT04507568) – Michael Velec, Princess Margaret Cancer Centre

    The post Award winning studies focus on reducing radiotherapy risks appeared first on Physics World.

    ]]>
    Research update Studies on toxicity modelling, HDR brachytherapy and AI-based lung disease screening were among those chosen as the Best Papers at the ESTRO 2023 congress https://physicsworld.com/wp-content/uploads/2023/05/ESTRO-physics-Rancati.jpg
    Incremental gains, continuous improvement: the recipe for success in nanopositioning QA https://physicsworld.com/a/incremental-gains-continuous-improvement-the-recipe-for-success-in-nanopositioning-qa/ Tue, 30 May 2023 15:11:52 +0000 https://physicsworld.com/?p=108022 Queensgate is betting its portfolio of nanopositioning stages will yield game-changing performance gains

    The post Incremental gains, continuous improvement: the recipe for success in nanopositioning QA appeared first on Physics World.

    ]]>
    Queensgate industrial metrology

    Enhanced spatial correction in multi-axis nanopositioning stages provided the original motivation – and, ultimately, the successful production outcome – for the latest project in the long-running R&D collaboration between Queensgate, a UK manufacturer of high-precision nanopositioning products, and scientists at the National Physical Laboratory (NPL), the UK’s National Metrology Institute.

    With funding from Analysis for Innovators (A4I) – a programme run by Innovate UK, the UK’s innovation agency – the two partners undertook a “deep dive” into the nature and extent of parasitic (off-axis) motion errors in Queensgate’s multi-axis nanopositioning stages. Their granular investigation has yielded a practical correction and calibration methodology that will reinforce Queensgate’s end-to-end quality assurance (QA) across product design, development and manufacturing for its portfolio of piezo-driven nanopositioning stages (as well as enabling technologies such as piezo actuators, capacitive sensors, control electronics and software).

    “Our collaboration with Queensgate has yielded reciprocal benefits over a wide range of joint R&D projects for the past decade or so,” explains Andrew Yacoot, principal scientist who leads NPL’s dimensional nanometrology programme and chairs the Working Group for Dimensional Nanometrology of the Consultative Committee for Length (one of ten Consultative Committees that oversee the SI units, the international standards of measurement). That win-win sees NPL address one of its broader missions: helping specialist technology companies to solve thorny industrial problems and, by extension, delivering transferable innovation, continuous product improvement and long-term commercial impacts. “At the same time,” adds Yacoot, “we get a direct line into Queensgate’s product development team to inform them of our unique, often non-standard, nanopositioning requirements for nanoscale science and metrology.”

    Andrew Yacoot of NPL

    If that’s the back-story, what of the project detail? For starters, spatial-error correction in nanopositioning stages represents a non-trivial exercise in applied measurement – owing, in large part, to the difficulty of capturing and analysing sufficient data points, as well as the complexities associated with coding the necessary error-correction algorithms. All of which provides the context for Queensgate’s latest tie-up with NPL, where Yacoot and colleagues exploit multi-axis interferometric instrumentation to support the laboratory’s ongoing R&D efforts in high-accuracy nanopositioning.

    To this end, a dedicated NPL stage rig uses three orthogonally mounted, plane-mirror differential interferometers (designed by NPL) to measure the relative displacement between a mirror cube (mounted on a stage) and a set of reference mirrors. The interferometers are illuminated using light from stabilized helium-neon lasers that have been calibrated against NPL’s primary metre-realization laser to give traceable position measurement. To reduce thermal and acoustic effects, the entire set-up is also enclosed and mounted on a vibration isolation platform.

    Made to measure

    Using this experimental rig to characterize spatial errors (and inform the subsequent calibration process), the NPL project team put two Queensgate stages through their paces: the QGSP-XY-600-Z-600 (which has a 600 μm range along the x, y and z axes) and the QGNPS-XY-100D (which moves 100 μm in the x and y axes only). The latter is a high-performance stage that has been well characterized as part of a previous Queensgate/NPL collaboration on high-speed atomic force microscopy (AFM). Using a “known good” stage also allows the calibration methodology to be assessed in situations where the errors are smaller and thereby demonstrate the transferability of the error-correction techniques.

    Zoom in and NPL’s measurement methodology is simple enough – albeit necessarily exhaustive. For each point in the stage’s volume of motion, the stage was commanded to move to a position and then allowed to settle for a specified time. “Closed-loop control ensures this position reflects the displacement reported by the stage’s capacitive sensors,” explains Yacoot. “The actual displacement is then collected from the NPL interferometers, in order to determine the spatial positioning error.”

    Experimental insights

    Operationally, the software for control of the stage rig and data collection was written by Edward Heaps, a member of Yacoot’s nanometrology team. His work was informed by previous studies showing that a scan of 11 points along each axis gives sufficient data for mapping spatial positioning errors (and, crucially, without an excessive timeframe for data acquisition).

    For the 3D stage, Heaps captured a total of 1331 (11×11×11) data points at 40 μm (commanded) intervals, while for the 2D stage a total of 121 (11×11) points were captured at 10 μm (commanded) intervals. Furthermore, it was necessary to capture actual spatial positions for the commanded points for all axes moving in both directions – to assess repeatable errors caused by unavoidable hysteretic processes within the stage – while repeating the entire measurement cycle six times to quantify stochastic errors.
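    The sketch below illustrates the shape of such a mapping run – command an 11 × 11 × 11 grid of positions at 40 μm spacing, read back the actual positions and tabulate the error vectors. It is a schematic stand-in only (the fake “measure” function and its noise level are invented); it is not Queensgate or NPL code.

        import numpy as np

        # Commanded 11 x 11 x 11 grid at 40 um spacing, as described above.
        axis = np.arange(11) * 40.0                                              # microns
        grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)   # (11, 11, 11, 3)

        def measure(commanded):
            """Stand-in for the interferometric readout: commanded position plus a small error."""
            rng = np.random.default_rng(0)
            return commanded + rng.normal(scale=0.01, size=commanded.shape)      # ~10 nm noise

        errors = measure(grid) - grid                   # spatial positioning errors
        print("RMS error per axis (um):", errors.reshape(-1, 3).std(axis=0))
        # A correction map is essentially -errors interpolated over the motion volume,
        # applied in firmware before each commanded move.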

    The resulting data set underpins a dedicated error-correction algorithm devised and optimized by Yacoot’s colleague Alistair Forbes, a mathematician and NPL Fellow. Implemented within prototype stage firmware, the algorithm provides the basis for a robust calibration procedure that – as evidenced by a repeat set of experimental measurements on the spatially corrected stages – yields a significant tightening of the positioning errors in the devices under study (see tables 1 and 2). Equally, the large multi-axis stage achieved performance improvements in line with the uncompensated shorter-range xy stage – opening up opportunities to deploy stages with longer travel ranges (600 μm × 600 μm) in high-precision applications like AFM, nanolithography and 3D nanoprinting.

    “Right now, we are implementing the correction algorithm into full production-quality firmware while rolling out the calibration process within our assembly operations,” explains Sam Frost, production manager and site lead at Queensgate’s manufacturing facility in Paignton, UK. “There’s more work needed to standardize the new-look workflows, but we’ll be shipping the first commercial stages to benefit from NPL’s enhanced measurement and calibration methodology later on in the spring.”

    Meanwhile, Queensgate’s product manager, Craig Goodman, is already laying the ground for the next joint project with NPL’s nanometrology team. With follow-on funding secured in the latest A4I round earlier this year, the partners will seek to build on the error-correction advances in linear nanopositioning stages, tailoring the multi-axis correction algorithm for application in Queensgate’s tip-tilt stages (which combine linear and angular motion along the x, y and z axes). “Tip-tilt stages are used in advanced silicon wafer processing and, owing to their construction, exhibit large cross-coupling errors between the two rotational axes,” explains Goodman. “It’s a complex proposition to quantify cross-talk between all the different actuators and sensors in a tip-tilt platform, let alone translate those insights into an optimized correction and calibration scheme.”

    Queensgate table 1

    Queensgate table 2

    The post Incremental gains, continuous improvement: the recipe for success in nanopositioning QA appeared first on Physics World.

    ]]>
    Analysis Queensgate is betting its portfolio of nanopositioning stages will yield game-changing performance gains https://physicsworld.com/wp-content/uploads/2023/05/Queensgate-list.jpg
    Dynamic nuclear polarization: how a technique from particle physics is transforming medical imaging https://physicsworld.com/a/dynamic-nuclear-polarization-how-a-technique-from-particle-physics-is-transforming-medical-imaging/ Tue, 30 May 2023 14:00:31 +0000 https://physicsworld.com/?p=107849 Jack Miller looks at the power of “dynamic nuclear polarization” in medicine

    The post Dynamic nuclear polarization: how a technique from particle physics is transforming medical imaging appeared first on Physics World.

    ]]>
    Image of the brain taken with dissolution dynamic nuclear polarization

    Life, for physicists, is an odd thing, seeming to create order in a universe that mostly tends towards disorder. At a biochemical level, life is even stranger – controlled and thermodynamically powered by a myriad of different molecules that most of us have probably never heard of. In fact, there’s one molecule – pyruvic acid – that’s crucial in keeping us alive.

    When burned, pyruvic acid releases carbon dioxide and water. If you’re exercising hard and your muscles are running low on oxygen, it’s converted anaerobically into lactic acid, which can give you a painful stitch. Later, your liver recycles the lactic acid back into sugars and the process starts anew.

    But pyruvic acid – known chemically as 2-oxopropanoic acid (CH3CO-COOH) – is also a marker for what’s going on inside your body. Run up a flight of stairs, skip a meal or get anaesthetised, and the rate at which pyruvic acid is metabolized (and what it’s converted into) will change. The speed with which it’s made or consumed will also vary enormously if you’re unfortunate enough to have a heart attack or develop cancer.

    As it turns out, we can track this molecule by exploiting the intrinsic angular momentum, or “spin”, of the nuclei in pyruvic acid. Spin is a fundamental physical property that comes in either integer or (in the case of protons and carbon-13 nuclei for example) half-integer multiples of ħ (Planck’s constant divided by 2π). Using an experimental technique known as “dissolution dynamic nuclear polarization” (d-DNP), it’s possible to create a version of the acid where many more of its carbon-13 nuclei exist in one spin state than another.

    By injecting this “hyperpolarized” pyruvic acid into a biological system, we can improve the notoriously poor signal-to-noise ratio of magnetic-resonance imaging (MRI) by a staggering five orders of magnitude. MRI, which has been of huge benefit in medicine, uses a mix of strong magnetic fields and radio waves to yield detailed images of human anatomy and physiological processes inside the body. Its downside, though, is that patients often have to sit for over an hour in an MRI machine for clinicians to get images that have a good enough resolution for their needs.

    With d-DNP, however, we can gain spectacular MRI images that reveal in detail what happens to pyruvic acid in biological systems. Over the last 20 years, the technique has been used to image bacteria, yeast and mammalian cells. It has looked at animals such as rats, mice, snakes, pigs, axolotls – and even dogs being treated for cancer. Most importantly, about 1000 people at 20 or so research labs around the world have been imaged using d-DNP with almost 50 clinical trials under way.

    So how does this technique work and what can it reveal to us about the human body?

    Ups and downs of magnetic resonance

    Giving clinicians valuable images of the location of water and fat in the body, the beauty of MRI is that it’s non-invasive and won’t harm a patient – even if sitting inside the bore of a magnet is not particularly pleasant. But magnetic resonance can yield far more than just pretty pictures because the behaviour of a nucleus in an applied magnetic field depends on where the nucleus is in a molecule and its precise location in the human body. In fact, we can use radio waves to measure the quantity and location of those nuclei in biological systems, turning MRI into a spectroscopic technique.

    MRI spectroscopy is able to reveal the precise distribution of molecules, such as lactic acid and adenosine triphosphate (ATP – the source of energy for use and storage at the cellular level), in almost any biological tissue. Unfortunately, these molecules are usually present at such low concentrations that MRI images of them have a much lower resolution than equivalent images of water or fat. Worse, most MRI spectroscopy experiments require a patient to sit still for hours to get enough decent data, which is difficult, especially if they’ve got an itchy nose or need the toilet.

    In the late 1990s, however, Jan Henrik Ardenkjær-Larsen – a physicist at the Technical University of Denmark (TUD) in Copenhagen – realized that d-DNP could make MRI spectroscopy much more sensitive. Developed with his TUD colleague Klaes Golman and others, the technique of d-DNP involves some beautiful basic physics that emerged from nuclear and particle labs back in the 1950s (see box at the end of this article,  “Stealing polarization from electrons”). At the heart of d-DNP is the concept of “nuclear polarization”, which comes from the energy levels of a nucleus with spin being split into two (or more) components when exposed to a magnetic field. The difference in energy, which is proportional to the strength of the field, provides useful information about the location of the nucleus.

    To get an easily measurable signal, however, you need far more nuclei in one energy state (n↑) than in the other (n↓). The key figure of merit is the “absolute nuclear polarization”, P, which is the difference between the number of nuclei in the two states divided by their total number, i.e. P = (n↑ − n↓) / (n↑ + n↓). For protons or carbon-13 nuclei, which have a half-integer spin, P depends only on temperature, magnetic field and their “gyromagnetic ratio” (magnetic moment divided by angular momentum).

    1 In search of improved polarization

    Graph

    The value of P can range from a minimum of 0 to a maximum of 1 at absolute zero. At room temperature and in magnetic fields that we can reasonably achieve in the lab, P is annoyingly small – typically 10⁻⁶ or less. In other words, for every million spins in the upper state there are only a million and one in the lower state. However, in a macroscopic biological material there will be enough spin-half nuclei for it to become magnetized – albeit still relatively weakly – when placed in a magnetic field.
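    A quick back-of-envelope calculation reproduces that order of magnitude. For a spin-half nucleus at thermal equilibrium, P = tanh(γħB/(2kBT)); the 3 T field assumed below is an illustrative clinical-strength value, while the carbon-13 gyromagnetic ratio and the other constants are standard.

        import math

        # Thermal-equilibrium polarization of carbon-13 at an assumed 3 T and 300 K.
        hbar = 1.0546e-34        # J s
        k_B = 1.3807e-23         # J/K
        gamma_13C = 67.28e6      # rad s^-1 T^-1
        B, T = 3.0, 300.0        # assumed field (tesla) and temperature (kelvin)

        P = math.tanh(gamma_13C * hbar * B / (2 * k_B * T))
        print(f"P ~ {P:.1e}")    # ~2.6e-6, i.e. roughly one excess spin per 400,000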

    Precessing around the applied field several million times per second, this weak magnetization can be measured by applying a pulse of radio waves. The precessing spins generate a time-varying magnetic field, which induces a voltage in a nearby electrical circuit. To obtain an MRI image, all you need to do is vary the applied magnetic field across a sample and bathe it in radio waves. The result of such experiments is a map of the frequency and phase of the magnetic-resonance signal.

    But because P is so small, the magnetization is frustratingly weak, the recorded voltages are small, and the image resolution is poor. Patients requiring, say, a high-resolution brain scan often have to sit for over an hour in an MRI machine for clinicians to get a big enough signal-to-noise ratio on the images they need. So even though modern hospital MRI scanners use superconductors that generate some of the strongest and most homogeneous magnetic fields on the planet, MRI – both for imaging and spectroscopy – is still a hugely time-consuming technique. What d-DNP can do is make MRI spectroscopy much more sensitive.

    2 Delicately does it

    The technique involves mixing pyruvic acid with a stable chemical source of unpaired electrons, typically a carbon-centred radical trapped in a tiny molecular cage known as a “trityl radical”. The mixture is put in a vial, which is lowered into a bath of liquid helium, cooling it down to a temperature of 1.4 K (figure 2). Microwaves are then fired at the sample, transferring the polarization from the radical’s unpaired electrons to the nuclei in the pyruvic acid, which now has a polarization about five orders of magnitude higher than at room temperature.

    The acid is then transferred into a patient or other biological system in a nearby MRI scanner. This is done by squirting superheated water at a temperature of about 200 °C through a pipe on to the frozen acid so it rapidly melts. Another pipe is used to suck the acid up through a sterilized filter, which removes the trityl radical. The acid is then mixed with a base (to make sure it’s pH neutral), collected in a syringe and injected into the sample or patient. As the temperature of the pyruvic acid has changed almost instantaneously, the spins in the warm liquid are completely out of thermodynamic equilibrium.

    Any enterprising experimentalist has got no more than five minutes to take advantage of the humungous increase in magnetization that dissolution dynamic nuclear polarization affords

    This is not an experiment for the faint hearted as pouring boiling water on to a cryostat is not usually a great idea. It’s also a race against time. From the moment that spin-polarized pyruvic acid is created, its signal starts dropping, returning to equilibrium with a characteristic decay time of about 60 seconds. Any enterprising experimentalist has therefore got no more than five minutes or so to take advantage of the humungous increase in magnetization – and hence signal – that d-DNP affords.
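    The five-minute budget follows directly from the quoted 60-second decay constant: assuming a simple exponential decay of the hyperpolarized signal, very little of the boost survives beyond that window.

        import math

        # Fraction of the hyperpolarized signal remaining, assuming exponential
        # decay with the ~60 s time constant quoted above.
        T1 = 60.0                                 # seconds
        for t in (0, 60, 180, 300):
            print(t, f"{math.exp(-t / T1):.3f}")  # 1.000, 0.368, 0.050, 0.007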

    Get in quick

    And that’s the big drawback of pyruvic acid. Only processes that occur faster than about 60s can be studied. Researchers literally have to run from their cryostat with their syringe full of pyruvic acid to the scanner. But once injected into a living system, advanced spectroscopic imaging techniques can follow the acid as it moves through the body, monitoring where it is, how quickly it moves and – most importantly – what it changes into (figures 3 and 4).

    3 Heart of the matter

    The first people to be imaged with the technique were a group of men who had previously been diagnosed with prostate cancer. In a study led by Sarah Nelson from the University of California, San Francisco, in 2013, highly skilled pharmacists created the hyperpolarized pyruvic acid using a repurposed magnet from an Oxford Instruments nuclear magnetic resonance (NMR) machine operating at a field of 3.35T (Sci. Transl. Med. 5 198). After injecting the substance into patients, the researchers were able to detect the cancer in each person examined from the increased amount of lactic acid they subsequently produced.

    Lactic acid is one of the hallmarks of cancer because tumours produce a lot of it, acidifying the local environment, disturbing nearby cells and helping the tumour to spread. In one patient, the team in San Francisco even spotted an additional tumour deposit that conventional imaging missed. Confirmed by another biopsy, the detection ultimately led to the doctors changing the treatment that the patient underwent.

    4 Brain impact

    One difficulty with d-DNP is that the liquid helium, which is essential for the technique, cannot easily be sterilized. Spores can remain viable in it, which – if they got inside a sick patient – could be deadly. It is therefore difficult to ensure that the technique is sterile, safe, repeatable and nowhere near as dangerous as it sounds. Our current solution is to transport the pyruvic acid from the cryostat to the syringe via a single-use sterilized coaxial plastic tube.

    These devices are outrageously expensive to make as they involve various sterile filters, flowing chemicals and computer-driven syringes to handle the pyruvic acid. The tube also has to be sturdy enough to withstand a temperature difference of almost 500 °C (i.e. from liquid-helium temperatures to the boiling-hot solvent) without cracking and spraying fluid around. Each scan on a human participant can therefore cost several thousand pounds.

    Jack Miller in the lab

    But when you think how much it costs to treat cancer patients with surgery or drugs, it’s very much a price worth paying. And the results are breath-taking. What you get is a series of images that, roughly speaking, show the concentration of the pyruvic acid as it moves through the body and the concentration of what it turns into. These images are an invaluable insight into the human condition because the amount of acid depends on the specific biochemical reactions that occur in different parts of the body.

    We know, for example, that anti-cancer chemotherapy drugs are successful if they slow the rate at which pyruvic acid is converted into lactic acid. So by imaging a cancer patient with d-DNP after they’ve taken the drugs, clinicians might be able to tell within days or hours if the drug is likely to work. Without d-DNP, patients often require another set of scans weeks later to see if they have worked and if the tumours have shrunk.

    There are nearly 50 registered clinical trials using d-DNP around the world, including one I am setting up myself in Denmark. It will aim to help women suffering from locally advanced ovarian cancer, who in about 30% of cases currently need to undergo difficult operations that do not successfully remove the tumour. Surgeons are currently unable to accurately predict if they will be able to cut out a tumour before they start, and may – in hindsight – wish they had tried chemotherapy for longer beforehand.

    The technique is the outcome of more than six decades of supposedly arcane basic physics that many would have dismissed as being irrelevant and of no use to the “real world”

    Being able to quickly and objectively measure and quantify an individual’s disease – and how it is responding to therapy – is a holy grail of much medical research. Dissolution DNP could be a way to let us do this on a routine basis and is, I argue, a great example of interdisciplinary research and applied physics. The technique is the outcome of more than six decades of supposedly arcane basic physics that many would have dismissed at the time as being irrelevant and of no use to the “real world”.

    I take great comfort in knowing that this wonderful mix of quantum physics, chemistry and clinical medicine is literally saving lives.

    Stealing polarization from electrons

    The principle behind dissolution dynamic nuclear polarization (d-DNP) can be traced back to the US theoretical physicist Albert Overhauser, who realized way back in 1953 that the gyromagnetic ratio of electrons is about 500 times bigger than for nuclei. Given that P is proportional to this ratio, the electrons’ polarization will therefore be far larger too. Overhauser predicted that by firing microwaves of just the right energy at a metal such as lithium-7, which has unpaired electrons, you ought to be able to transfer the large polarization of the electrons to its nuclei.

    Three years later Thomas Carver and Charles Slichter showed that the polarization could indeed be “loaned” from electrons in this way (Phys. Rev. 102 975). Using battery-powered solenoid magnets, they increased their polarization by two orders of magnitude from about 10⁻⁹ to 10⁻⁷. Other physicists joined in the quest for higher nuclear polarizations, with much progress made by the Latvian-born physicist Anatole Abragam. Rather than using lithium-7, he cooled a particular paramagnetic salt in a strong magnetic field until almost all its electrons at thermal equilibrium were in the ground state, achieving a polarization of nearly 1.

    By firing microwaves at the sample, he was able to transfer a big chunk of the electrons’ huge polarization to the nuclei. The nuclei’s polarization rose over half an hour to about 0.8, which is many orders of magnitude bigger than it would be otherwise. The nuclei are said to be “dynamically polarized” because as soon as the microwaves are switched off, the electrons and nuclei both relax back to equilibrium. The value of P falls exponentially away with a half-life ranging from seconds (for electrons) to days (for nuclei at very low temperatures).

    The technique, which was then known simply as dynamic nuclear polarization (i.e. without the “dissolution” term), also became of interest to high-energy physicists at labs like CERN, who realized that metre-sized blocks of cryogenically cooled paramagnetic salts could be used as targets for experiments. These materials can be given a known spin, so by firing beams of particles into them, it became possible to study how hadrons interact under controlled conditions. By the 1970s, the technique had gone from being an obscure solid-state physics “trick” to a routine and useful feature of particle physics.

    But there’s a big difference between measuring hadrons at low temperature and probing living biological materials. To do so, we need a molecule that polarizes easily, decays slowly and does something biologically interesting once injected into a living organism. Pyruvic acid fits the bill perfectly. Apart from being at the heart of all chemical reactions that power life, it’s miscible with commonly used chemical electronic free radicals, readily dissolves in hot solutions and is safe when injected into humans.

    The post Dynamic nuclear polarization: how a technique from particle physics is transforming medical imaging appeared first on Physics World.

    ]]>
    Feature Jack Miller looks at the power of “dynamic nuclear polarization” in medicine https://physicsworld.com/wp-content/uploads/2023/05/2023-05-Miller-brain-scans-LISTS.png
    Australian firm’s watery solution for solar power https://physicsworld.com/a/australian-firms-watery-solution-for-solar-power/ Tue, 30 May 2023 13:13:07 +0000 https://physicsworld.com/?p=108123 A novel approach to the counterintuitive problem of solar farms receiving too much sunlight

    The post Australian firm’s watery solution for solar power appeared first on Physics World.

    ]]>

    Australian company RayGen is tackling a problem that faces all solar farms: how to deal with the Sun’s intermittency in a way that makes economic sense. In temperate, cloudy locations such as the UK, the problem is often not enough sunlight. But where the Sun can be ferocious in the middle of the day – including a large part of Australia – electricity generation can threaten to overpower the grid. RayGen’s solution is to capture solar energy at high efficiencies, then store excess energy as a heat differential between two pools of water.

    So can this technology be a game-changer in the global shift to renewable energy? To find out more, watch the video, or read the Physics World article ‘Combining solar power with thermal storage to avoid wasting energy’ by science writer Richard Stevenson.

    The post Australian firm’s watery solution for solar power appeared first on Physics World.

    ]]>
    Video A novel approach to the counterintuitive problem of solar farms receiving too much sunlight https://physicsworld.com/wp-content/uploads/2023/05/RayGen-video-listings.png
    Transparency window appears in an ensemble of ions https://physicsworld.com/a/transparency-window-appears-in-an-ensemble-of-ions/ Tue, 30 May 2023 09:00:55 +0000 https://physicsworld.com/?p=108089 "Collectively induced transparency" in an optical cavity has applications in quantum optical technologies

    The post Transparency window appears in an ensemble of ions appeared first on Physics World.

    ]]>
    Physicists in the US have discovered a laser-based “switch” that turns a sample of ions completely transparent at certain frequencies. Working at the California Institute of Technology (Caltech), the researchers found that when they coupled ytterbium ions (Yb³⁺) to a nanophotonic resonator and strongly excited them with laser light, the ions abruptly stopped reflecting light at frequencies associated with their vibrations. This effect, which the team dubs “collectively induced transparency”, could have applications in quantum optical devices.

    “We discovered the phenomenon while trying to develop techniques to control ytterbium atoms coupled to an optical cavity using laser light,” co-team leader Andrei Faraon tells Physics World. The cavity, which measures 20 microns across, contains roughly a million Yb³⁺ ions. As a group, these ions are vibrating at a broad distribution of frequencies, but Faraon explains that each individual ion only vibrates within a very narrow frequency range.

    “When probed with a laser with lower power, the system is opaque,” he continues. “When the laser is tuned at a frequency exactly in the middle of the frequency distribution, however, and its power increased, the system becomes transparent.”

    Akin to destructive interference

    This selective transparency effect is related to how the ions oscillate with respect to the laser, Faraon says. He compares it to the well-known phenomenon of destructive interference, in which waves from two or more sources cancel each other out. In the system studied in this work, the groups of ions absorb and re-emit light continuously. Normally, this re-emission process means that laser light gets reflected. At the collectively induced transparency frequency, however, something very different happens: the re-emitted light from each of the ions in a group balances, leading to a dramatic decrease in reflection.

    As well as collectively induced transparency, Faraon and colleagues also observed that the ensemble of ions can absorb and emit light much faster or slower than a single ion depending on the intensity of the laser. These processes are known as super-radiance and sub-radiance, respectively, and are not well understood. Even so, the researchers say that this highly nonlinear optical emission pattern could be exploited to create more efficient quantum optical technologies. Examples might include quantum memories in which information is stored in an ensemble of strongly coupled ions, as well as solid-state super-radiant lasers for ensemble-based quantum interconnects in quantum information processors.

    The research is described in Nature.

    The post Transparency window appears in an ensemble of ions appeared first on Physics World.

    ]]>
    Research update "Collectively induced transparency" in an optical cavity has applications in quantum optical technologies https://physicsworld.com/wp-content/uploads/2023/05/1.jpg
    Toichiro Kinoshita: the theorist whose calculations of g-2 shed light on our understanding of nature https://physicsworld.com/a/toichiro-kinoshita-the-theorist-whose-calculations-of-g-2-shed-light-on-our-understanding-of-nature/ Mon, 29 May 2023 10:00:40 +0000 https://physicsworld.com/?p=107968 Robert P Crease pays tribute to the late Toichiro “Tom” Kinoshita, who was a pioneer of quantum electrodynamics

    The post Toichiro Kinoshita: the theorist whose calculations of <em>g</em>-2 shed light on our understanding of nature appeared first on Physics World.

    ]]>
    Toichiro Kinoshita (left) and Richard Feynman on a boat

    In both his personal and his professional life, the pioneering theoretical physicist Toichiro “Tom” Kinoshita forged the steadiest of paths through the most tumultuous of times. Born on 23 January 1925 in Tokyo, Japan, he spent the bulk of his career in the US where he played a trailblazing role in the development of quantum electrodynamics (QED). Most notably, his calculations of one of its key constants – g-2 – helped make QED the most precise theory in the history of physics.

    Kinoshita, who died on 23 March 2023 aged 98, was no stranger to me. He was the father-in-law of a close friend and I had known him for almost three decades. In fact, I was fortunate to be able to talk with Kinoshita in depth about his long and fruitful career during an eight-hour oral-history interview that I carried out in 2016 for the Niels Bohr Library and Archives of the American Institute of Physics.

    Japanese roots

    As I discovered during our conversation, Kinoshita was the heir to a family of rice-farm owners who expected their male child to take over the family business. Their plans were disrupted by Japan’s role in the Second World War, which had already begun by the time Kinoshita was a teenager. Most of his peers were drafted to serve in the military, many never to return.

    But Kinoshita was lucky. The Japanese military wanted those who had a talent for physics to calculate bomb trajectories for artillery barrages at the battle front. The authorities therefore pushed Kinoshita through a tightly compressed version of his high school and college curriculum at the University of Tokyo. Along the way, he learned advanced physics from mentors who taught articles, smuggled into Japan by submarine, that had been written by Werner Heisenberg and other German physicists.

    Kinoshita learned advanced physics from mentors who taught articles, smuggled into Japan by submarine, that had been written by Werner Heisenberg and other German physicists

    In August 1945, while on his university summer break, Kinoshita was at home with his parents in the city of Yonago when he heard on the radio that Hiroshima, which lay about 125 km to the south, had been flattened. As he told me in our interview, Kinoshita knew – from the magnitude of the explosion – that this was no ordinary bomb, but one that had to be tapping atomic energy.  “I knew what atomic energy can do, so I thought immediately this must be an A-bomb,” he said.

    A few days later he was at Shinjuku train station in Tokyo when everyone was unexpectedly instructed to stay put for important news. In what was a highly unusual move, the Japanese emperor came on the public address system to announce that Japan had surrendered. Kinoshita was relieved, as others around him were too; like so many Japanese people he was afraid of and appalled by the war begun by his country’s military leaders. “Wow, that’s good. I don’t have to die,” he recalled thinking.

    Hundreds of thousands of American troops arrived a few weeks later and occupied the country. The new US-installed government pushed through a nationwide land-reform programme. The Kinoshita family’s land was seized and distributed among its sharecroppers, leaving Kinoshita with no inheritance. Strange as it may seem, he was thrilled because his sudden poverty freed him of his family’s expectations that he would become a landlord of rice farms. Instead, he would be able to pursue physics.

    Surviving on grants from the University of Tokyo and from teaching physics classes at another nearby university, Kinoshita graduated in 1947 before going on to do a PhD. His mentor was Sin-Itiro Tomonaga, who later shared the 1965 Nobel Prize for Physics with Richard Feynman and Julian Schwinger. Tomonaga brought Kinoshita to the attention of Robert Oppenheimer, the US physicist who had headed up the Manhattan atomic-bomb project.

    Oppenheimer in turn arranged for Kinoshita and his colleague Yoichiro Nambu – another future Nobel laureate – to be postdocs at the Institute for Advanced Study (IAS) in Princeton, New Jersey. Kinoshita could, however, barely scrape together the money for the passage and he was forced to take a cargo boat from Tokyo to Seattle. He also had to leave behind his wife Masako or “Masa” Kinoshita (née Matsuoka) – a former student in one of his classes whom he had married in 1951. Her wealthy parents, members of Japan’s small Marxist community, had been jailed during the war, then lost everything when Allied bombs destroyed their family business.

    From Seattle, Kinoshita visited labs on the US west coast, including the Lawrence Berkeley Laboratory and the California Institute of Technology. Travelling by bus and train, he headed east across the Rockies, visiting first Denver and then Enrico Fermi’s lab in Chicago. Eventually he arrived in Princeton, with his wife joining him in 1953. Later that year he stayed with a landlady who couldn’t pronounce “Toichiro” and so dubbed him “Tom” – a name that was to stick for the rest of his life.

    Wobbly foundations

    In 1956 – after two years at the IAS and another at Columbia University in New York – Tom and Masa ended up at Cornell University, where he stayed for the rest of his career. There, Masa practised a traditional Japanese textile artform known as kumihimo, or “gathered threads”, giving workshops in the US and Japan, and publishing a monumental, 360-page book on the subject in 1994. She rediscovered and developed an archaic and nearly forgotten form of kumihimo that involved complex loops, redeploying it using her background in mathematics.

    In 1962 Kinoshita visited CERN on a Ford Foundation fellowship. On the second day of his visit to Geneva, he joined a lab tour, and – while on the very first stop – found himself mesmerized by a graph that experimentalists at the Proton Synchrotron had tacked to the wall. Having measured the way muons wobble in a magnetic field, they wanted to know how their findings tallied with the theoretical value and were seeking someone who could calculate it.

    Kinoshita was stunned by the graph, which reminded him of aspects of the research into QED he had carried out with Tomonaga during the war.  He dropped out of the tour, went to the library, and worked the rest of the night. The next morning he returned to the Proton Synchrotron and told the experimentalists, “I know how!”

    The Muon g-2 experiment at Fermilab

    It was exciting work, for the number was intimately woven into the foundations of QED. That theory conceives of particles as spinning magnets, with the ratio of their magnetic moments to their spin known as g. In the simplest form of quantum mechanics, g has a value of exactly 2. But reality had to be different, for muons are tugged by traces of all other particles – known and unknown, leptons and hadrons – each of which slightly affects the wobble.

    Given that QED was a blueprint incorporating everything that theorists knew about, the difference between the experimentally determined value of g and 2 therefore measured the comprehensiveness and accuracy of the entire theoretical architecture of QED. In other words, measuring g-2 could reveal if that architecture was sound, even if it couldn’t tell you the exact location of any defect.

    In fact, g-2 was so fundamental to QED that if nature contained new physics – particles or forces not yet discovered, and thus not in the theory – it would show up as a difference between the theoretically predicted amount and the value measured in experiments. Rarely does it make sense to go all out in pursuing calculations of a number; nobody measures recipe ingredients to thousandths of a gram or petrol to billionths of a litre. But g-2 is different. From a muon’s wobble, you can get precision.

    The calculations, though, were incredibly hard, because the underlying equations cannot be solved exactly and so the work had to proceed through a series of successive, ever more precise approximations. What’s more, each newly discovered particle and force had to be incorporated. Physicists commonly express this complexity in terms of the “Feynman diagrams” of each possible interaction, with each diagram corresponding to a series of long equations, and Kinoshita had to evaluate hundreds and even thousands of them.
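
    To give a flavour of what those successive approximations look like, the standard QED expansion for the anomaly (written here in textbook form, not as quoted from Kinoshita’s papers) is a power series in the fine-structure constant α:

        a \;\equiv\; \frac{g-2}{2} \;=\; C_1\left(\frac{\alpha}{\pi}\right) \;+\; C_2\left(\frac{\alpha}{\pi}\right)^{2} \;+\; C_3\left(\frac{\alpha}{\pi}\right)^{3} \;+\; \cdots, \qquad C_1 = \tfrac{1}{2}

    The first coefficient, C_1 = 1/2, is Schwinger’s celebrated 1948 result; each further order requires evaluating a rapidly growing set of Feynman diagrams, with the 10th-order term that Kinoshita tackled involving more than 12,000 of them.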

    When physicists say that QED is the most precisely calculated theory in the history of science, they can thank Kinoshita

    Back then, Kinoshita worked alone and by hand in calculating g-2. As the years went by, he took on more helpers and used more powerful computers. Kinoshita eventually spent over half a century as a pioneer in the use of supercomputers in physics, becoming one of their biggest users as he summed sixth-, eighth- and then 10th-order Feynman diagrams to calculate g-2 ever more precisely. When physicists say that QED is the most precisely calculated theory in the history of science, they can thank Kinoshita.

    Meanwhile, a series of ever larger and more precise experiments were built to compare the experimental value with his: a sequence of three at CERN, one at Brookhaven National Laboratory and another at Fermilab. Sometimes the results were close to Kinoshita’s number, spreading fear among physicists that there was no new physics, while at other times the results were so far off from the predicted value that experimentalists and theorists alike were thrilled.

    Kinoshita became an increasingly high-profile physicist as the go-to person for understanding the foundations of the Standard Model of particle physics. In fact, g-2 became an ever more high-profile number, as the world’s most powerful accelerator, the Large Hadron Collider, was eking out fewer and fewer surprises.

    Despite officially retiring from Cornell in 1995, Kinoshita remained active in physics. In 2018, aged 93, he published a paper in Physical Review D (97 036001) refining his calculation of g-2 to the 10th order. His final paper – on the general theory of g-2 calculations to all orders – appeared the following year in Atoms (7 28). His student and close collaborator Makiko Nio, from the RIKEN research lab in Japan, is one of the physicists now continuing the work.

    The critical point

    Quiet, methodical and meticulous, Kinoshita always appreciated, and often contributed to, the humour in every situation. Late in his life, friends learned to look for the sign that he was about to make a witty remark: an almost imperceptible uptick at both corners of his mouth, and a slight deepening of the wrinkles that fringed them. Eventually, Kinoshita moved away from Cornell, reluctantly, to a house in Amherst, Massachusetts, built by the architect Ray Kinoshita, one of his three daughters.

    She had designed a house for herself with a separate living area for her parents, with shoji screens, open shelving and a wooden deck looking out into the woods, similar to the living quarters they had been accustomed to. The University of Massachusetts made Kinoshita an adjunct and gave him an office, where he showed up almost every day until COVID hit.

    Admiring colleagues periodically put Kinoshita forward for a Nobel prize. He never received it, surely because his contributions, though indispensable to contemporary physics, are difficult to label. Physicists, however, benefit hugely from people like Kinoshita, who are intimately familiar with the resources, methods and techniques that underpin their field. Such physicists propel the discipline forward, yet cannot be easily pigeon-holed as discoverers or theory-creators. Kinoshita was like a reliable and trustworthy engineer who gives you the confidence that the house you and your entire community are living in won’t collapse.

    Masa sadly died last year, and Tom soon after. The two will be buried together in Ithaca, near Cornell. Their headstone has been designed by their daughter Ray and by Ray’s own daughter Emilia Kinoshita, a designer and materials researcher. It will feature a blend of Feynman diagrams and kumihimo patterns, embodying the deepest shapes and rhythms of the unruly world that Masa and Tom lived through and explored.

    The post Toichiro Kinoshita: the theorist whose calculations of <em>g</em>-2 shed light on our understanding of nature appeared first on Physics World.

    ]]>
    Opinion and reviews Robert P Crease pays tribute to the late Toichiro “Tom” Kinoshita, who was a pioneer of quantum electrodynamics https://physicsworld.com/wp-content/uploads/2023/05/2023-05-CP-Kinoshita-and-R-Feynman-1961-featured.jpg
    Microwave photons are entangled with optical photons https://physicsworld.com/a/microwave-photons-are-entangled-with-optical-photons/ Sun, 28 May 2023 14:44:12 +0000 https://physicsworld.com/?p=108097 New scheme could improve superconductor-based quantum computers

    The post Microwave photons are entangled with optical photons appeared first on Physics World.

    ]]>
    A protocol for entangling microwave and optical photons has been demonstrated by researchers in Austria. The technique could help overcome one of the central obstacles to building a quantum internet by allowing microwave-frequency circuits to exchange quantum information through optical fibres.

    The central vision underpinning a quantum internet – first articulated back in 2008 by Jeff Kimble of Caltech in the US – is that networked quantum processors could exchange quantum information, much as classical computers exchange classical information via the Internet. Transferring quantum information is far more difficult, however, because background noise can destroy quantum superpositions in a process called decoherence.

    Many of the most powerful quantum computers in existence, such as IBM’s Osprey, use superconducting qubits. These work at microwave frequencies, which makes them extremely vulnerable to disruption by background thermal radiation – and explains why they need to be kept at cryogenic temperatures. It also makes transferring information between superconducting qubits extremely difficult. “[One way] is to build ultracold links,” explains Johannes Fink of the Institute for Science and Technology Austria in Klosterneuburg. “The record was just published in Nature [by Andreas Wallraff’s group at ETH Zurich in Switzerland and colleagues]: 30 m at 10–50 mK – that has some challenges for scaling up.” In contrast, he says, “fibre optics works really well for communication – we use it all the time when we surf the Internet”.

    Quantum transduction

    A scheme whereby quantum information could be transferred between microwave qubits by sending photons down optical fibres would therefore be extremely valuable. The most direct approach is quantum transduction, in which a microwave photon interacts with a third photon and is up-converted to an optical photon that can be sent along fibres.

    Unfortunately, practical implementations of this process also introduce both loss and noise: “You send ten photons and maybe only one of them gets converted…and maybe your device adds some extra photons because it was hot or for some other reason,” says Fink’s PhD student Rishabh Sahu, who is joint first author on a paper describing this latest research. “Both of these bring the fidelity of transduction down.”

    An alternative way to transfer quantum information is called quantum teleportation and was first demonstrated experimentally in 1997 by Anton Zeilinger’s group at the University of Innsbruck – for which Zeilinger shared the 2022 Nobel Prize for Physics. When a qubit interacts with one photon in an entangled pair, its own quantum state gets entangled with the second photon.

    Entanglement swap

    A quantum network could be produced under ambient conditions if this second photon could travel down a low-loss optical fibre to interact with an identically prepared transmission photon from a second network node through a so-called Bell state measurement. This would perform an “entanglement swap” between the remote superconducting qubits.

    Entangled photon pairs are generated by a process called spontaneous parametric down-conversion, whereby one photon splits into two. However, nobody had previously managed to generate an entangled pair of photons whose energies differed by a factor of more than 10,000. That is the gap between a photon at an optical telecoms wavelength of about 1550 nm and one at a microwave wavelength of about 3 cm.
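
    For a sense of scale, the short snippet below turns the two approximate wavelengths quoted above into frequencies (the exact experimental values will differ slightly):

        # Rough frequency ratio between the two halves of the entangled pair,
        # using only the approximate wavelengths quoted above.
        c = 3.0e8                      # speed of light in m/s
        f_optical = c / 1550e-9        # ~193 THz (1550 nm telecoms photon)
        f_microwave = c / 0.03         # ~10 GHz (3 cm microwave photon)

        print(f"optical:   {f_optical / 1e12:.0f} THz")
        print(f"microwave: {f_microwave / 1e9:.0f} GHz")
        print(f"energy ratio: {f_optical / f_microwave:,.0f}")   # roughly 19,000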

    Fink’s group pumped a lithium niobate optical resonator that was part of a microwave resonator with a high-power laser at telecom wavelengths. The vast majority of the laser light simply came back out of the resonator unchanged and was filtered out. However, approximately one photon per pulse split into two entangled photons – one microwave and the other at a wavelength just slightly longer than the pump photons.

    “We verified this entanglement by measuring the covariances of the two electromagnetic field fluctuations. We found microwave-optical correlations that are stronger than classically allowed, which signifies that the two fields are in an entangled state,” says Liu Qiu, a postdoctoral researcher and joint first author on the paper describing the work. The researchers now hope to extend this entanglement to qubits and room-temperature fibres, implement quantum teleportation and entangle qubits in separate dilution refrigerators.

    Alexandre Blais of the Université de Sherbrooke in Canada, who collaborated on Wallraff’s Nature paper, is impressed with Fink and colleagues’ work: “Normally optics and microwaves don’t talk to each other. Optics is really high energy and tends to ruin the quantum coherence properties of your microwave circuits. Now [the researchers] have standing photons: if I want to transfer that information into another fridge I need to transfer that information into a flying photon in an optical fibre, and there will be loss there. And that photon then has to travel down that fibre, enter the second fridge and do some magic…We should not think that this makes everything easy now – it’s just the beginning, but that doesn’t take away from the quality of the experiment.”

    The research is described in Science.

    The post Microwave photons are entangled with optical photons appeared first on Physics World.

    ]]>
    Research update New scheme could improve superconductor-based quantum computers https://physicsworld.com/wp-content/uploads/2023/05/Quantum-computing-concept-illustration.jpg newsletter1
    Trapped in ice: the surprisingly high levels of artificial radioactive isotopes found in glaciers https://physicsworld.com/a/trapped-in-ice-the-surprisingly-high-levels-of-artificial-radioactive-isotopes-found-in-glaciers/ Sat, 27 May 2023 10:00:26 +0000 https://physicsworld.com/?p=107739 New research shows that glaciers harbour a surprisingly large concentration of toxic nuclear materials, which poses risks as the glaciers melt

    The post Trapped in ice: the surprisingly high levels of artificial radioactive isotopes found in glaciers appeared first on Physics World.

    ]]>
    Think of glaciers and images of vast, pristine sheets of ice blanketing swathes of the Arctic and the Antarctic come to mind. While it’s true that 99% of glacial ice is confined to the polar regions of our planet, glaciers are also found in mountain ranges on almost every continent, covering nearly 10% of the Earth’s land surface. Glacial ice is also the largest reservoir of fresh water on our planet – holding almost 69% of the world’s fresh water.

    Despite appearing in images as silvery, untouched rivers of ice, glaciers contain many deposits, such as dust and microbes. But researchers are finding that they also harbour a worrying amount of toxic nuclear material, and we are only now beginning to understand the risks posed as glaciers melt.

    “For some of these glaciers that have been assessed, particularly the ones in the European Alps and other parts of Europe, the concentrations of some of these fallout radionuclides are as high as we’ve recorded them inside disaster zones like Chernobyl or the Fukushima area in Japan,” explains Philip Owens, an environmental scientist at the University of Northern British Columbia, in Canada.

    Dust, dirt, microbes

    Up close, glaciers are not perfectly white. They are often grey and dirty looking, even black in places, thanks to deposits. Known as cryoconite, this dark, fine sediment that forms on glacial surfaces is made up of dust, dirt and soot, as well as small rock and mineral particles. It originates from a variety of places, including the local surroundings such as weathered rocks and exposed ground near the glacier – but also from faraway sources like deserts and arid land, wildfires and combustion engines. 

    These materials are carried onto glaciers through various processes such as wind, rain, atmospheric circulations, and anthropogenic and animal activities. Because this cryoconite is dark in colour it heats up in the Sun and melts the ice, creating water-filled depressions. These holes then become traps for more material, causing larger collections of cryoconite to form.

    cryoconite sample hole

    Cryoconite is also full of organic materials such as algae, fungi, bacteria and other microbes. As these collect, grow and multiply on the sediment, they start to form a considerable part of the cryoconite mass. The organic matter also produces sticky biofilms, which help the microbes to stick to the sediment and each other, and form communities, helping collections of cryoconite to further grow.

    But cryoconite is not just rocks, dust, dirt and microbes. Research has shown that it is also full of many different anthropogenic contaminants, including heavy metals, pesticides, microplastics and antibiotics. Like the more natural components, these too are trapped by the watery depressions and sticky biofilms, binding to the dust and minerals in the sediment.

    Far-reaching radioactive fallout

    In recent years it has become clear that cryoconite is often full of another rather unexpected contaminant – nuclear material in the form of “fallout radionuclides” (FRNs). Tests found that the concentrations of these artificial radionuclides greatly exceed those in other terrestrial environments. Indeed, some of these sediments are the most radioactive ever found outside of nuclear exclusion zones and test sites.

    Map of where samples were taken and radioactive materials recorded

    It has been known for a while that the surfaces of glaciers can have unusually high levels of radioactivity. In recent years scientists have been exploring the issue in more detail. According to glaciologist Caroline Clason from Durham University, in the UK, the concentration of radioactivity seen in cryoconite is sometimes “two or even three orders of magnitude higher than we would find in other types of environmental matrices, like sediments and soils, lichens and mosses that we find in different parts of the world”.

    In 2017 Clason and colleagues discovered that levels of fallout radionuclides in cryoconite from the Isfallsglaciären glacier in Arctic Sweden were up to 100 times higher than in material collected in the valley around the glacier (figure 1). Concentrations of the radioactive isotope caesium-137 (137Cs) were as high as 4500 becquerels per kilogram (Bq/kg), with average levels of around 3000 Bq/kg (TC 15 5151). “It’s quite incredible how much [radioactivity] the material on the glacier surface has managed to accumulate,” says Clason. “Much more than we see in the rest of the environment in the same location.”

    In 2018 cryoconite on a Norwegian glacier was found to be even more radioactive (Sci. Tot. Env. 814 152656). Samples, collected by a team led by Edyta Łokas, an earth scientist at the Institute of Nuclear Physics of the Polish Academy of Sciences, from 12 cryoconite holes on the Blåisen glacier revealed concentrations of 137Cs as high as 25,000 Bq/kg, with an average level of around 18,000 Bq/kg. Levels of 137Cs in soils and sediments are usually between 0.5 and 600 Bq/kg (Sci. Rep. 7 9623).
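
    To put those figures side by side, here is a quick comparison using only the concentrations quoted above:

        # Comparison of the 137Cs concentrations quoted in the text, in Bq/kg.
        typical_soil_max = 600          # upper end of the usual range for soils and sediments
        isfall_average = 3000           # Isfallsglaciaren cryoconite, average
        blaisen_average = 18_000        # Blaisen cryoconite, average
        blaisen_peak = 25_000           # Blaisen cryoconite, highest sample

        for label, value in [("Isfallsglaciaren avg", isfall_average),
                             ("Blaisen avg", blaisen_average),
                             ("Blaisen peak", blaisen_peak)]:
            print(f"{label}: {value / typical_soil_max:.0f}x the top of the soil range")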

    Chernobyl’s contamination

    The artificial radionuclides 137Cs and caesium-134 (134Cs) are fission products produced by the splitting of uranium-235 in nuclear power reactors and some nuclear weapons. Most of the caesium isotopes on the Norwegian and Swedish glaciers originate from the Chernobyl nuclear accident, but there is also fallout from the hundreds of atmospheric nuclear tests conducted in the mid-20th century.

    Infamous as the worst disaster in the history of nuclear power generation, the Chernobyl incident took place on 26 April 1986 during a low-power test of the Number Four reactor at the Chernobyl nuclear power plant, which was then in the Soviet Union. The test caused an explosion and fire that destroyed the reactor building, and the catastrophic incident released a significant amount of radioactive material, including isotopes of plutonium, iodine, strontium and caesium. Most of this fell in the immediate vicinity of the nuclear power plant and large areas of what is now Ukraine, Belarus and Russia, but atmospheric circulations, as well as wind and storm patterns, also scattered it over much of the northern hemisphere.

    Weather patterns dumped a substantial amount of the radioactive fallout from Chernobyl in Scandinavia. Norway is estimated to have received around 6% of the 137Cs and 134Cs released from the nuclear power plant. The isotopes were carried to the country by a south-easterly wind and deposited during rainfall in the days following the nuclear disaster.

    The caesium then entered the food chain, as it was taken up by plants, lichens and fungi, which were eaten by grazing animals such as reindeer and sheep. In the years following the disaster large amounts of meat, milk and cheese from reindeer and sheep in Norway and Sweden had caesium-isotope concentrations that massively exceeded limits set by the authorities. These foods are still regularly tested.

    There was also significant fallout from Chernobyl in the Austrian Alps, with heavy rainfall in the days following the disaster leading to very high levels of contamination in some areas. A 2009 survey of the Hallstätter and Schladminger glaciers in northern Austria found concentrations of 137Cs in cryoconite ranging from 1700 Bq/kg to 140,000 Bq/kg (J. Env. Rad. 100 590).

    Wind, rain, fire and more

    There appear to be several reasons why cryoconite accumulates radionuclides and becomes so radioactive. Radioactive material is transported through the atmosphere by winds and global circulation patterns. It is then washed out of the atmosphere by precipitation, which is known to be particularly effective at collecting particulate matter and bringing it down to the ground. Furthermore, levels of rain, snowfall and fog tend to be high in the mountain and polar regions that host glaciers.

    A lot of dry material, from phenomena such as forest fires and dust storms, also gets dumped in glacial environments. This dust, soot and similar material travels via atmospheric circulation, but as it does so it starts to bind together and scavenge other material from the atmosphere – including pollutants such as radionuclides – until it becomes too heavy and falls to the ground.

    Diagram of how radionuclides get into glaciers

    Once radionuclides and other contaminants are in the glacial environment, they are shifted around by hydrological processes. In warmer parts of the year, snow pack and ice in a glacial catchment melt, along with parts of the glacier itself. This melt water flows onto and over the glacier, taking contaminants like the radionuclides that were stored in the snow and ice with it. As the water flows through channels and holes across the glacier, it is filtered by cryoconite sitting in these depressions, which is full of materials including silts and clay that are known to bind elements such as radionuclides, metals and other anthropogenic particles (figure 2).

    Organic scavengers

    The biological component of cryoconite also seems to enhance its ability to collect and accumulate radionuclides. Indeed, Łokas explains that for cryoconite with a high proportion of organic material – such as algae, fungi and bacteria – the concentration of radionuclides is much higher.

    The cryoconite on the Blåisen glacier in Norway that had particularly high levels of radioactivity also had a high organic content. While studies of other glaciers have found cryoconite that was between 5% and 15% biological material, the sediments from Blåisen were around 30% organic matter. The researchers say this could be part of the reason for its high concentrations of radionuclides.

    Edyta Lokas stood on a glacier

    Łokas says that the ability of cryoconite to hold and concentrate radionuclides seems to be “related to metal binding properties of extracellular substances that are excreted by micro-organisms”. These sticky biofilms immobilize metals, and other materials that can be toxic, to prevent them from entering the cells of the micro-organisms, she explains.

    This link between organic matter and fallout radionuclides has also been spotted elsewhere. When Owens analysed cryoconite samples from the Castle Creek glacier in British Columbia, Canada, he found a significant positive relationship between the concentration of radionuclides in samples and the percentage of organic material (Sci. Rep. 9 12531). The more biological material, the more radioactive material.

    Owens explains that fallout radionuclides are everywhere. What’s happening on glaciers, he says, is that they are “being focused into these really small locations on the glacier surface”. Both the materials that make up the sediment and the extracellular substances excreted by the micro-organisms living in it can bind contaminants. This all makes the cryoconite a highly efficient scavenging agent, and over time radionuclides that have fallen all over the glacial catchment become concentrated in it.

    Varying sources and concentrations

    Although it tends to be the most concentrated, 137Cs is not the only radionuclide found in cryoconite. High concentrations of other radioactive materials, such as americium-241 (241Am), bismuth-207 (207Bi) and plutonium (Pu) isotopes, have also been detected. These are linked to the global fallout of radionuclides from atmospheric nuclear weapons tests rather than nuclear-power disasters.

    This mix of inputs, along with global atmospheric circulation and weather patterns, means that sources and concentrations of radioisotopes on glaciers vary across the planet. For instance, Owens says that while levels of radionuclides are high in cryoconites in Canada they are mainly from nuclear bomb tests, as it is a long way from Chernobyl.

    Łokas is currently analysing details of radioactivity in cryoconites from various sites around the world, including in the Arctic, Iceland, the European Alps, South America, the Caucasus Mountains, British Columbia and Antarctica. Glaciologists from many countries, including Owens and Clason, have donated, collected and tested samples for this work.

    Wide view of the Gries glacier in the Alps

    Tests have found that radioactivity is particularly high in the Alps and Scandinavia, while Łokas says the lowest levels found so far have been on glaciers in Iceland and Greenland. No signal from Chernobyl was identified in these areas, just the global fallout from weapons tests, Łokas adds.

    The work has also identified some interesting radionuclide signals. There are higher proportions of 238Pu, 239Pu and 240Pu in cryoconites from the southern hemisphere than the northern hemisphere, Łokas says. This is due to the failure of a satellite carrying a SNAP-9A radioisotope thermoelectric generator in 1964. The satellite disintegrated, releasing around a kilogram of 238Pu into the atmosphere, mainly over the southern hemisphere.

    There is also a spike in 238Pu isotopes from samples of the Exploradores glacier in Chilean Patagonia. This is likely linked to the failed Russian Mars probe that broke up in the atmosphere over South America in 1996, Łokas says. It was carrying around 200 g of 238Pu pellets and, while their exact fate is unknown, they are thought to have fallen somewhere over Chile and Bolivia.

    Cause of concern?

    It is as yet unclear how worried we need to be about this concentration of radioactive material on glaciers. There is no certainty over whether it poses an environmental risk on a large scale, or whether it is a localized issue on the glaciers, Clason says. “I certainly wouldn’t want to go and eat the material on the ice surface; it’s really quite radioactive in comparison to other environmental sediments,” she adds. “But the extent to which that’s a problem once you are outside of that immediate glacial catchment, we just don’t know.”

    When the sediment is sitting on the glacier, it is unlikely to be an issue for the ecosystem and human health. But as glaciers melt and retreat more and more of that legacy material stored on the ice gets released

    There are reasons to be concerned. Radioactive materials have well-documented negative impacts on health. Glaciers also store vast amounts of fresh water, with billions of people around the world using the melt water for agriculture and drinking water. As the climate warms, glaciers are also retreating, which could potentially release stored contaminants and sediments in high concentrations.

    “With all the glacial melt, this cryoconite material is coming into a lot more contact with glacial melt water. It’s now beginning to be exposed and can be delivered to the downstream ecosystem,” Owens explains. When the sediment is sitting on the glacier, he says, it is unlikely to be an issue for the ecosystem and human health. But as glaciers melt and retreat more and more of that legacy material stored on the ice gets released.

    It is also not clear exactly how much radioactivity there might be in a glacial system, Clason adds. “In addition to direct atmospheric deposition of radionuclides, a lot of the radioactivity we see in cryoconite is likely being melted out of old snow and ice that was deposited many years ago,” Clason explains. “The ice itself has an inventory of radioactivity that isn’t well understood.”

    Once it flows into rivers, the radioactive material is likely to be diluted, Owens says, “but we don’t know,” he cautions. Clason agrees. “While the concentrations are high where we sample, in the grand scheme of things, once all that material has been washed off or the glacier melts and deposits it in the environment, it might be diluted to the extent that it’s not above the concentrations you see in the environment otherwise,” she says. “So that’s what we need to figure out next.”

    In the future, Clason hopes to carry out more detailed analysis of the amount of cryoconite on glacial surfaces, using techniques such as high-resolution drone imagery. This would allow researchers to estimate how much radioactivity there might be on a glacier. Mapping the cryoconite on the surface like this, and then combining the information with glacier melt models, could help us understand how the sediments and the contaminants they contain might be released in the future.

    The post Trapped in ice: the surprisingly high levels of artificial radioactive isotopes found in glaciers appeared first on Physics World.

    ]]>
    Feature New research shows that glaciers harbour a surprisingly large concentration of toxic nuclear materials, which poses risks as the glaciers melt https://physicsworld.com/wp-content/uploads/2023/05/Rhone-glacier-Alps-Switzerland-hero-626589710-Shutterstock_nullplus.jpg newsletter
    Illustrations of Ben Franklin’s kite experiment are wrong, keeping gummy sweets fresh https://physicsworld.com/a/illustrations-of-ben-franklins-kite-experiment-are-wrong-keeping-gummy-sweets-fresh/ Fri, 26 May 2023 15:03:02 +0000 https://physicsworld.com/?p=108109 Excerpts from the Red Folder

    The post Illustrations of Ben Franklin’s kite experiment are wrong, keeping gummy sweets fresh appeared first on Physics World.

    ]]>
    Franklin experiment

    Some of the most iconic images of the American polymath Benjamin Franklin show him doing a very silly thing — flying a kite in a thunderstorm. This is a reference to a famous experiment that Franklin is believed to have done in 1752, confirming that lightning was an electrical phenomenon similar to that observed in primitive batteries of the time.

    However, according to the Brazilian historian Breno Arsioli Moura, most illustrations of the experiment misrepresent what Franklin is believed to have done.

    According to Moura, the purpose of the experiment was to show that electricity can flow like a fluid from the sky to the ground. To do this, Franklin fastened a small metal spike to a kite and connected the spike to a conducting kite string. The other end of the kite string was fastened to a metal key. Also tied to the key was an insulating silk ribbon, which was held by Franklin as he flew the kite.

    Tell-tale spark

    The idea was that the metal spike would collect electrical charge from the sky and the charge would then flow down the string where it would accumulate in the key. After some accumulation time, Franklin would put a finger near to the key and look for a tell-tale spark that would confirm the flow of electricity from the sky to the ground.

    The above image was published by Currier & Ives 120 years after the experiment was done. Franklin is shown gripping the conducting string with his right hand and holding the key in his left hand. Moura points out that, if done this way, the experiment would simply not work because Franklin would short-circuit the current to earth.

    Franklin is also shown with a boy who is assumed to be his son William – but William would have been in his early twenties at the time of the experiment.

    No lightning

    In many illustrations, Franklin is shown doing the experiment with lightning flashing near the kite, but Moura says he did not intend to draw lightning down on himself – he wasn’t stupid, after all. Finally, Moura says that Franklin recommended that the kite flier be under a roof or some other cover to ensure that the silk ribbon remained dry because a wet ribbon would conduct electricity. Indeed, Moura says that many depictions fail to show the silk ribbon, which was a key component of the experimental set-up.

    Because of his role in American independence, Franklin is a much romanticised figure in the US, so it’s not surprising that some artistic licence has been taken with how his famous experiment is portrayed. Moura also points out that Franklin’s own account of the experiment is vague and that the artistic depictions tend to reflect a description published in 1767 by the English chemist Joseph Priestley. Although Franklin is thought to have approved of Priestley’s description, Moura says that the two accounts differ on several important points.

    So why is this important to Moura? He believes that the uncritical use of inaccurate illustrations in classrooms and textbooks could undermine the teaching of important experiments. He also says that poor illustrations could also encourage people to try to do unsafe experiments, like flying a kite in an electrical storm. He describes his research in the journal Science & Education.

    Stale sweets

    It’s always disappointing to bite into a gummy sweet only for it to be all hard and stale. Keeping them soft and fresh is an important issue given that texture can be just as important as taste for such candies.

    Now, researchers in Turkey at Ozyegin University and Middle East Technical University have carried out a series of experiments to explore how changing key parts of the gummy-making process, such as the concentration of starch and gelatin, can affect the final product, as well as how the sweets behave when stored at different temperatures.

    The researchers used a statistical model to describe how each combination affected the quality of the gummy sweets, finding that to keep the sweets as soft as possible for as long as possible, it was best to store them at a warm temperature.

    After this sweet success, the researchers now plan to study the role of plant-based formulations, mould shapes, and packaging types on quality. The team describes its research in the journal Physics of Fluids.

     

    The post Illustrations of Ben Franklin’s kite experiment are wrong, keeping gummy sweets fresh appeared first on Physics World.

    ]]>
    Blog Excerpts from the Red Folder https://physicsworld.com/wp-content/uploads/2023/05/Ben-Franklin-list.jpg
    Pacemakers, defibrillators not affected by high-power electric vehicle chargers https://physicsworld.com/a/pacemakers-defibrillators-not-affected-by-high-power-electric-vehicle-chargers/ Fri, 26 May 2023 08:45:06 +0000 https://physicsworld.com/?p=108080 Cardiovascular researchers found no instances of electromagnetic interference from high-power electric vehicle chargers in individuals with cardiac devices

    The post Pacemakers, defibrillators not affected by high-power electric vehicle chargers appeared first on Physics World.

    ]]>
    For millions of people with cardiac pacemakers and defibrillators, and for the manufacturers of these life-saving cardiac implantable electronic devices (CIEDs), a primary concern is avoiding device malfunction caused by electromagnetic interference.

    Currently, there are no official recommendations on the use of high-power chargers, such as those for electric vehicles, for people with CIEDs. Yet, according to a research group at the German Heart Centre Munich, German Centre for Cardiovascular Research and Cardiology Department at Auckland City Hospital, battery electric vehicles and their high-power charging stations “are a potential source” of electromagnetic interference for patients with CIEDs.

    Early research on electromagnetic interference between CIEDs and electric vehicle chargers was conducted with cars that required smaller current flows and electromagnetic fields than those on the market today. Newer high-power chargers use DC power and can deliver up to 350 kW.

    “As the charging current is directly proportional to the magnetic field, the high-power chargers have the potential to cause clinically relevant [electromagnetic interference],” the researchers write in their latest study, published in the journal EP Europace.
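
    As a rough illustration of that proportionality, the sketch below applies the textbook formula for the magnetic field around a single long, straight conductor, B = μ0I/(2πr). This is a deliberately idealized picture with round numbers rather than a model of the study’s measurements: real charging cables pair the outgoing and return conductors, which largely cancels the external field.

        # Ampere's-law estimate of the field around a single long, straight conductor.
        # Real charging cables pair the go and return conductors, which largely
        # cancels the external field, so this is an upper-end illustration only.
        import math

        mu0 = 4 * math.pi * 1e-7     # vacuum permeability, in T*m/A
        current = 500.0              # A, of the order of high-power charging currents

        for r in (0.05, 0.10, 0.30):                     # distance from the conductor, in m
            B = mu0 * current / (2 * math.pi * r)
            print(f"r = {r:4.2f} m : B = {B * 1e3:.2f} mT")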

    In the study, the researchers asked 130 individuals with CIEDs to plug in and charge an electric car. No instances of electromagnetic interference or other adverse effects were found.

    “This study was designed as a worst-case scenario to maximize the chance of electromagnetic interference,” says lead author Carsten Lennerz in a European Society of Cardiology press release. “Despite this, we found no clinically relevant electromagnetic interference and no device malfunction during the use of high-power chargers, suggesting that no restrictions should be placed on their use for patients with cardiac devices.”

    Six car models from four manufacturers, including a test vehicle capable of drawing 350 kW from a high-power charger, were used in the study. The maximum voltage drawn was 1000 V, and the maximum current was 500 A. Participants were asked to place the charging cable directly over their CIED to maximize the likelihood of electromagnetic interference and were monitored for signs of device malfunction or changes in heart rhythms. During a total of 561 charges, the researchers did not detect any inhibition of pacing in pacemakers, nor any rapid arrhythmias that might lead to shock for patients with defibrillators.

    Lennerz, an assistant professor at the German Heart Centre Munich, commented that while the current study was dedicated to testing for electromagnetic interference related to high-power charging technology, home chargers, which use a smaller current but alternating current (AC), are also likely to be safe for individuals with CIEDs.

    Sub-clinical electromagnetic interference was not examined in the current study. Future work studying sub-clinical electromagnetic interference, the researchers note, should consider that wireless monitoring techniques may be impacted by electromagnetic interference. They also point out that because a small number of each specific CIED was tested, very rare events specific to any one device may not have been captured.

    The methods and results of their study notwithstanding, the researchers remind individuals with CIEDs that it is still better not to place a charging cable directly over a cardiac device, and not to remain close to a charging cable for long periods of time.

    The post Pacemakers, defibrillators not affected by high-power electric vehicle chargers appeared first on Physics World.

    ]]>
    Research update Cardiovascular researchers found no instances of electromagnetic interference from high-power electric vehicle chargers in individuals with cardiac devices https://physicsworld.com/wp-content/uploads/2023/05/electric-car-954558336-iStock_PlargueDoctor.jpg newsletter1
    FASER searches for dark photons at the LHC, and also finds neutrinos https://physicsworld.com/a/faser-searches-for-dark-photons-at-the-lhc-and-also-finds-neutrinos/ Thu, 25 May 2023 16:34:50 +0000 https://physicsworld.com/?p=108052 CERN’s Jamie Boyd talks about the ForwArd Search ExpeRiment

    The post FASER searches for dark photons at the LHC, and also finds neutrinos appeared first on Physics World.

    ]]>
    In this episode of the Physics World Weekly podcast, the CERN physicist Jamie Boyd talks about the ForwArd Search ExpeRiment (FASER), which is located 480 m downstream from a particle collision point on the Large Hadron Collider (LHC) in Geneva.

    FASER is on the lookout for weakly interacting particles that are created in LHC collisions and then travel through rock and concrete to reach the detector. Earlier this year the experiment made history by being the first to detect neutrinos created at a particle collider.

    But as Boyd explains, neutrinos were not the primary target when FASER was first proposed. Instead, the experiment was built to study hypothetical particles – such as dark photons – that are associated with dark matter. Dark matter is itself a hypothetical substance that many physicists believe can explain some puzzling properties of galaxies and larger-scale structures in the universe.

    This podcast is sponsored by iseg.

    The post FASER searches for dark photons at the LHC, and also finds neutrinos appeared first on Physics World.

    ]]>
    Podcast CERN’s Jamie Boyd talks about the ForwArd Search ExpeRiment https://physicsworld.com/wp-content/uploads/2023/05/Jamie-Boyd-list.jpg newsletter
    Large metalenses are produced on a mass scale https://physicsworld.com/a/large-metalenses-are-produced-on-a-mass-scale/ Thu, 25 May 2023 14:51:39 +0000 https://physicsworld.com/?p=108057 Nanostamping is combined with photolithography to create flat optics

    The post Large metalenses are produced on a mass scale appeared first on Physics World.

    ]]>
    Range of metalenses

    From eyeglasses to space telescopes, lenses play crucial roles in technologies ranging from the mundane to the cutting edge. While traditional refractive lenses are a fundamental building block of optics, they are bulky and this can restrict how they are used. Metalenses are much thinner than conventional lenses and in the last two decades plenty of light has been shone on the potential of these devices, which sparkle as a promising alternative.

    Metalenses are thin structures made of arrays of “meta-atoms”, which are motifs with dimensions that are smaller than the wavelength of light. It is these meta-atoms that interact with light and change its direction of propagation.

    Unlike conventional refractive lenses, metalenses can be less than one micron thick, reducing the overall volume of optical systems. They can also provide ideal diffraction-limited focusing performance, while avoiding some problems associated with refractive lenses such as aberrations.

    As a result, metalenses show great promise for shrinking optical devices, which could be useful in a range of applications from better mobile-phone cameras to less bulky wearable displays. However, because of their intricate designs and material requirements, metalenses have yet to be mass-manufactured at a reasonable cost. Now, a team of researchers at Pohang University of Science and Technology (POSTECH) in South Korea, led by Junsuk Rho, has developed a new method of fabricating hundreds of centimetre-sized metalenses all at once. In a paper published in Nature Materials, they describe how they used several different lithography techniques and hybrid materials to create metalenses for use in displays and virtual-reality (VR) devices. In particular, they show how nanoimprint lithography, or nanostamping, can provide a low-cost, scalable way of producing metalenses.

    When conventional thick lenses are used in optics, light is refracted as it travels between air and the lens material, and vice versa. It is this refraction that changes the path of the light and therefore it is the shape of the lens and its refractive index that is the basis for controlling light.

    The production process

    Refractive index and shape still matter in metalenses. But, because a metalens is macroscopically flat, it is the shape and composition of the meta-atoms that define a device’s optical properties.
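
    To see what the meta-atom pattern has to achieve collectively, the sketch below evaluates the textbook hyperbolic phase profile that any ideal flat lens must imprint on incoming light so that it converges to a focus. The formula is standard, and the wavelength and focal length are arbitrary example values, not figures from the POSTECH devices; in a real metalens the local meta-atom geometry is chosen so that its optical response approximates this phase, modulo 2π, at each position.

        # Target phase that each point of an ideal flat lens must impart so that
        # light of wavelength lam converges to a focus at distance f:
        # phi(r) = (2*pi/lam) * (f - sqrt(r**2 + f**2)), wrapped into [0, 2*pi).
        import math

        lam = 532e-9     # example wavelength: green light, 532 nm
        f = 5e-3         # example focal length: 5 mm

        for r_mm in (0.0, 1.0, 2.0, 5.0):      # radial position on the lens, in mm
            r = r_mm * 1e-3
            phi = (2 * math.pi / lam) * (f - math.hypot(r, f))
            print(f"r = {r_mm:3.1f} mm : phase = {phi % (2 * math.pi):5.2f} rad")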

    The team’s hybrid meta-atoms are made of a titania-covered resin that is moulded onto the surface of glass substrates of various sizes, as shown in the figure “On display”. The meta-atoms are 900 nm tall, 380 nm long, and 70 nm wide. The titania coating is only 23 nm thick. This type of high-resolution nanopatterning is traditionally expensive and can only be used to cover small areas at one time.

    Silicon technology meets nanostamping

    Now, Rho and colleagues have simplified the production of metalenses by integrating three already mature fabrication technologies. These are photolithography, nanoimprint lithography and atomic layer deposition. Photolithography involves using deep-ultraviolet lasers to create patterns on silicon wafers. This is a standard technique in the electronics industry and it can also be used to make small-scale metalenses. However, it is an expensive process that is not viable for the large-scale manufacturing of metalenses.

    Instead of using deep-ultraviolet photolithography to make the metalenses directly, the team used it to pattern a master stamp that was 12 inches (30 cm) across and had a feature resolution of 40 nm (see figure “Production process”). The stamp was used to imprint the inverse of the meta-atom structure in a replica mould made of soft silicone. Liquid resin was then poured into the silicone mould, where it flowed into the nanogrooves before hardening. This allowed the team to make hundreds of metalenses (the 1 cm cylinders in figure 2) at the same time. Indeed, the sophisticated surface structures shown in the scanning electron microscope image (see figure “Production process”) can be made in less than 15 minutes.

    Prototype display

    The refractive index of the resin is too low to provide the desired control of light, so a thin layer of titania was deposited on top of the resin to increase the index of refraction as well as to boost the mechanical strength of the structure.

    Let there be light VR

    To demonstrate the potential of the metalenses, the team integrated them into a prototype VR display. Commercial VR devices use reflection or diffraction to project virtual images to the eyes of the user – and this results in bulky devices that must accommodate the appropriate focal length for the optics. The team’s metalens-based VR display reduces the distance the light has to travel by using a transmission-based design, which makes the display lightweight and comfortable to wear. Although the team only tested the display with static images, the device showed promise by creating images using red, green and blue light – the building blocks of all-colour displays (see figure “Prototype display”).

    The researchers say that their scalable fabrication method produces metalenses with higher performance than devices made using more traditional methods. While there is still a lot of room for progress, the advent of mass-produced metalenses opens the door for their use in  biosensors, colour printing and holograms – as well as VR displays.

    The post Large metalenses are produced on a mass scale appeared first on Physics World.

    ]]>
    Research update Nanostamping is combined with photolithography to create flat optics https://physicsworld.com/wp-content/uploads/2023/05/Heba-list.jpg
    Scintillator array validates MRI-guided multileaf collimator tracking https://physicsworld.com/a/scintillator-array-validates-mri-guided-multileaf-collimator-tracking/ Thu, 25 May 2023 08:45:29 +0000 https://physicsworld.com/?p=108015 MLC tracking on an MR-linac could mitigate respiratory motion during radiotherapy, without the time penalty associated with gating

    The post Scintillator array validates MRI-guided multileaf collimator tracking appeared first on Physics World.

    ]]>
    MRI-guided radiotherapy systems offer the ability to visualize tumour targets and surrounding organs with high accuracy, with the potential to perform real-time treatment adaptation based on anatomical changes. And for moving tumours, imaging during treatment can ensure that the radiation beam remains focused on the target to maximize healthy tissue sparing.

    To mitigate the impact of respiratory motion, for example, one option is to perform gating. The Elekta Unity MR-linac is equipped with an automatic gating functionality in which a comprehensive motion management (CMM) system continuously monitors the 3D tumour motion. Radiation is only delivered when the target lies within a specified gating envelope – when it moves outside of this region, irradiation is paused.

    “Although this is highly effective, it means that treatment time is drastically increased,” said Prescilla Uijtewaal from the University Medical Center Utrecht. “Therefore we would like to do multileaf collimator [MLC] tracking instead of gating.”

    Speaking at the recent ESTRO 2023 congress in Vienna, she described the new MR-linac tracking approach. The idea is to use the position of the tumour measured by the CMM not to gate the beam, but to move it towards the tumour location using MLC tracking (in which the collimator leaves are shifted to match each new position).
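
    The core idea can be sketched in a few lines of code. The snippet below is a deliberately simplified illustration with made-up function and variable names, not the Utrecht implementation, which must also cope with leaf-speed limits, system latency and beam-hold decisions.

        # Simplified sketch of MLC tracking (hypothetical names): the planned aperture
        # is shifted along the leaf-travel direction to follow the target position
        # reported by the motion-monitoring system.
        import numpy as np

        def track_aperture(planned_leaves_mm, target_shift_mm):
            """Shift every MLC leaf pair by the target's displacement along leaf travel.

            planned_leaves_mm: (N, 2) array of [left edge, right edge] per leaf pair, in mm
            target_shift_mm:   target displacement along the leaf-travel direction, in mm
            """
            # Motion perpendicular to leaf travel would require re-fitting the aperture
            # across leaf rows; that step is omitted from this sketch.
            return planned_leaves_mm + target_shift_mm

        # Example: the monitoring system reports an 8 mm shift of the target
        aperture = np.array([[-15.0, 15.0], [-20.0, 20.0], [-15.0, 15.0]])
        print(track_aperture(aperture, 8.0))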

    To validate this workflow, Uijtewaal and colleagues developed a dosimetric insert jointly with Medscint and ModusQA that contains gafchromic film plus an array of eight plastic scintillation detectors positioned at the centre and edges of the target. Film dosimetry is low cost and can create 2D dose maps, while scintillators enable instantaneous dose read-out and time-resolved dosimetry. “Combining the two in a single insert provides great dosimetric analysis,” Uijtewaal pointed out.

    The insert is designed to fit inside the MR-compatible Quasar MRI4D phantom, which can be programmed to move with patient-derived respiratory motion patterns. The researchers created an intensity-modulated radiotherapy (IMRT) plan for delivery to a 3 cm spherical target in the phantom. They used the dosimetric insert to compare the planned versus the measured dose, for static delivery, motion with MLC-tracking and motion with no-tracking scenarios.

    Film and scintillator dosimetry

    Film measurements revealed that dose maps from plans delivered without tracking differed significantly from the static dose map, with large hot and cold spots at the edges of the target. Applying MLC tracking based on the CMM’s motion vector restored the dose map to that seen with static delivery, with extremely small differences between the delivered and the planned dose.

    The dose measured by each scintillator matched well with the value at the corresponding location on the 2D dose maps. The researchers also used the time-response component of the scintillators to analyse the dose over 400 s with 15 Hz temporal resolution. They assessed the dose measured by all eight scintillators, again in static, tracking and no-tracking scenarios.

    “We saw that the tracking dose followed the static dose really well, which means that the tracking was really effective,” said Uijtewaal. “We also saw that both the static and the tracking dose agreed well with the treatment planning system.” Without tracking, measurements deviated significantly from the planned dose, particularly for scintillators located at the caudal edge of the target.

    Uijtewaal concluded that “the MR-linac’s comprehensive motion manager is compatible with MLC tracking, facilitating a pre-clinical MLC tracking workflow”. Next, she said, her research group (headed by Martin Fast) plans to enhance the tracking workflow to work towards clinical implementation of MLC tracking on the MR-linac. The team also hopes to increase the number of scintillation detectors, to increase coverage and ultimately implement 3D dosimetry.

    The post Scintillator array validates MRI-guided multileaf collimator tracking appeared first on Physics World.

    ]]>
    Research update MLC tracking on an MR-linac could mitigate respiratory motion during radiotherapy, without the time penalty associated with gating https://physicsworld.com/wp-content/uploads/2023/05/UMC-Utrecht.jpg
    Quantum computer technology developments at HRL Laboratories https://physicsworld.com/a/quantum-computer-technology-developments-at-hrl-laboratories/ Wed, 24 May 2023 13:06:45 +0000 https://physicsworld.com/?p=108018 HRL Labs showcase their exchange-only qubit quantum computer technology

    The post Quantum computer technology developments at HRL Laboratories appeared first on Physics World.

    ]]>

    This short video, filmed at the APS March Meeting in Las Vegas, focuses on the developments in quantum computer technology at HRL Laboratories. As Eric Williams, the company’s director of talent acquisition, explains, HRL Laboratories used to be known as Hughes Research Laboratories. With headquarters in southern California, HRL’s achievements include the development of the first-ever laser in 1960.

    Teresa Brecht, a research scientist at HRL Laboratories, then explains the company’s contributions towards building a fault-tolerant quantum computer of the future. She talks about the type of semiconductor qubit in silicon that she works on, called the exchange-only qubit. This qubit, she says, is small and similar enough to classical transistors that it can be manufactured at scale – and has a similar control paradigm to classical computers too, in that it operates by turning pulses on and off.

    Another advantage is that, relative to other qubit approaches, the exchange-only qubit largely avoids the problems of cross-talk – meaning that it can do individual exchange pulses with “five nines of fidelity, 100 million times a second”. Brecht also points out that HRL Laboratories’ recent Nature paper shares their multi-qubit results with 97% fidelity.

    The silicon-based exchange-only qubit approach that HRL Labs has demonstrated at small scale, Brecht concludes, may be a very promising pathway towards a fault-tolerant quantum computer of the future.

    The post Quantum computer technology developments at HRL Laboratories appeared first on Physics World.

    ]]>
    Video HRL Labs showcase their exchange-only qubit quantum computer technology https://physicsworld.com/wp-content/uploads/2023/05/HRL-Laboratories-frontis.png
    Behind the scenes of science museums https://physicsworld.com/a/behind-the-scenes-of-science-museums/ Wed, 24 May 2023 10:00:30 +0000 https://physicsworld.com/?p=107396 Andrew Robinson reviews Curious Devices and Mighty Machines: Exploring Science Museums by Samuel J M M Alberti

    The post Behind the scenes of science museums appeared first on Physics World.

    ]]>
    “You owe me a new mobile,” grumbled Samuel J M M Alberti’s colleague after his daughter spotted her iPhone model in the new collection of National Museums Scotland. Like most people, the girl associated museums with historical artefacts and had concluded her phone was a relic that needed replacing right away. “Never mind that the exhibit was showing contemporary technology,” says Alberti in the introduction to his new book Curious Devices and Mighty Machines: Exploring Science Museums. “In her mind the museum was indelibly associated with bygones.”

    But museums have a wider remit, according to Alberti, who is director of collections at National Museums Scotland. As he goes on to explain, “Museums collect old and new, tangible and intangible, but most of all, they collect stories.” In fact, the organization, which oversees four museums including the National Museum of Scotland in Edinburgh, has been acquiring iPhones for a decade as an extension of its existing collections on communication, which begin with telegraphy.

    Curious Devices and Mighty Machines is an expert and enjoyable exploration of diverse aspects of the past and present of science museums

    So how did curators choose from the two dozen iPhone models available and the billion handsets sold? In his chapter on collecting, Alberti explains that they went for handsets with stories. This thinking follows the example of, say, the National Museum of American History, where one of the most significant Apple products on display out of over a hundred is a simple Apple adapter that was found at the site of the World Trade Center following the attacks of 11 September 2001.

    Thus each iPhone that National Museums Scotland has collected has a story. One was a prize in an online competition, shipped to Scotland in November 2007, just before the launch of the first-generation iPhone in Europe – making it perhaps one of the earliest examples to be used in the UK. Another handset was used by prominent videogame designer Mike Dailly, of YoYo Games, to develop Simply Solitaire (released in 2010), which was once Apple’s most popular free app. Meanwhile, an iPhone 3GS in the collection belonged to the photojournalist David Guttenfelder, who used it to take award-winning photographs of conflict zones for online channels such as Instagram, on which he was an early pioneer.

    Curious Devices and Mighty Machines contains many such stories. Alberti covers the ins and outs of how a science museum functions and the qualifications of curators. He talks about how objects are presented in permanent displays, temporary exhibitions and on the Internet to engage visitors of all ages, including young children. He also examines how science museums can be used to campaign against, for example, climate change and racism. In addition, we discover the treasures of museum storerooms, ranging from the microscopic to the mighty. Throughout, Alberti refers to most sciences and many collections, both well-known and not so familiar, but he keeps mainly to Europe, the US and Canada, with a particular focus on Scotland.

    Given such a rich and varied subject, it is inevitable that certain museums, collections and objects are not included, but some of these omissions seem odd considering their significance. There is, for example, surprisingly little coverage of the history of computing. Indeed, the word “computing” does not even appear in the book’s index. Pioneers of computing such as Ada Lovelace and Alan Turing go unmentioned, while there is no reference to the National Museum of Computing at Bletchley Park.

    The History of Science Museum in Oxford – located in the world’s oldest surviving purpose-built public museum building – also receives less attention than it deserves. Its early history from 1924 is only briefly touched upon, and little is said of its distinguished objects. Indeed, the most famous item in its collection – the blackboard that Albert Einstein used in Oxford in 1931 for his lecture on relativity and the age of the universe – is completely neglected in Curious Devices and Mighty Machines.

    The omission is ironic given Alberti’s talk of museums collecting stories because the blackboard is linked to a wonderful tale that began with some Oxford dons wanting to immortalize the legendary scientist’s visit. Their desire prompted historian of science Robert Gunther – who was pivotal in establishing the History of Science Museum – to request the lecture organizers to donate the blackboards. Einstein firmly opposed the idea and was annoyed when two ephemeral blackboards were taken after his second lecture. No doubt he would have found it amusing had he learned that one of them is now blank – having been accidentally cleaned in the museum! You can read about it in my Physics World feature on Einstein’s unique trip to Oxford.

    Despite these questionable omissions, Curious Devices and Mighty Machines is an expert and enjoyable exploration of diverse aspects of the past and present of science museums. The book lives up to the intriguing photograph on its cover, which a journalist described as looking like “a copper robot from the golden age of sci-fi”. It is in fact a radio-frequency accelerating cavity from CERN’s Large Electron–Positron collider, which operated from 1989 to 1995 and was donated by CERN to National Museums Scotland.

    • 2022 Reaktion Books 269pp £20.00

    The post Behind the scenes of science museums appeared first on Physics World.

    ]]>
    Opinion and reviews Andrew Robinson reviews Curious Devices and Mighty Machines: Exploring Science Museums by Samuel J M M Alberti https://physicsworld.com/wp-content/uploads/2023/05/2023-05-Robinson-review-T.2014.34-PF1024501.jpg
    Swallowable X-ray dosimeter monitors radiotherapy in real time https://physicsworld.com/a/swallowable-x-ray-dosimeter-monitors-radiotherapy-in-real-time/ Wed, 24 May 2023 08:50:21 +0000 https://physicsworld.com/?p=108005 Affordable and ingestible capsule could help improve gastric cancer treatments by enhancing the precision of radiotherapy

    The post Swallowable X-ray dosimeter monitors radiotherapy in real time appeared first on Physics World.

    ]]>
    Researchers from Singapore and China have developed a swallowable X-ray dosimeter the size of a large pill capsule that can monitor gastrointestinal radiotherapy in real time. In proof-of-concept tests on irradiated rabbits, their prototype proved approximately five times more accurate than current standard measures for monitoring the delivered dose.

    The ability to precisely monitor radiotherapy in real time during treatment would allow evaluation of the in situ absorbed radiation dose in dose-limiting organs such as the stomach, liver, kidneys and spinal cord. This could make radiation treatments safer and more effective, potentially reducing the severity of side effects. Measuring the delivered and absorbed dose during radiotherapy of gastrointestinal tumours, however, is a difficult task.

    The new dosimeter, described in Nature Biomedical Engineering, could change this. The 18 x 7 mm capsule contains a flexible optical fibre embedded with lanthanide-doped persistent nanoscintillators. The ingestible device also incorporates a pH-responsive polyaniline film, a fluidic module for dynamic gastric fluid sampling, dose and pH sensors, an onboard microcontroller and a silver oxide battery to power the capsule.

    The components within the capsule dosimeter

    First authors Bo Hou and Luying Yi of the National University of Singapore and co-researchers explain that the nanoscintillators generate radioluminescence in the presence of X-ray radiation, which propagates to the ends of the fibre via total internal reflection. The dose sensor measures this light signal to determine the radiation delivered to the targeted area.

    As well as X-ray dosimetry, the capsule also measures physiological changes in pH and temperature during treatment. The polyaniline film changes colour according to the pH of gastric fluid in the fluidic module; the pH is then measured by the colour contrast ratio of the pH sensor, which analyses light after it passes through the film. Additionally, the afterglow of the nanoscintillators after irradiation can be used as a self-sustaining light source to continuously monitor dynamic pH changes for several hours without the need for external excitation. The researchers point out that this capability is not yet available with existing pH capsules.

    The photoelectric signals from the two sensors are processed by an integrated detection circuit that wirelessly transmits information to a mobile phone app. Once activated, the app can receive data from the capsule in real time via Bluetooth transmission. Data such as the absorbed radiation dose, and the temperature and pH of the tissues, can be displayed graphically, stored locally or uploaded to cloud servers for permanent storage and data dissemination.

    Prior to in vivo testing, the researchers assessed the dose response of the nanoscintillators. They used a neural network-based regression model to estimate the radiation dose from the radioluminescence, afterglow and temperature data. They developed the model using over 3000 data points recorded while exposing the capsule to X-rays at dose rates from 1 to 16.68 mGy/min, and temperatures of 32 to 46℃.

    The team found that both radioluminescence and afterglow intensities are directly proportional to dose variations, suggesting that combining the two will lead to more precise estimates of absorbed dose.
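
    As a rough illustration of this kind of regression (on synthetic data only – the authors’ network, training set and signal model are described in the paper, not reproduced here), one could fit a small neural network that maps the three measured quantities to dose rate:

```python
# Illustrative sketch of a neural-network regression from scintillator signals to dose rate.
# Synthetic data only -- the real model and training set are described in the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 3000
dose_rate = rng.uniform(1.0, 16.68, n)     # mGy/min, matching the reported range
temperature = rng.uniform(32.0, 46.0, n)   # deg C

# Assume (for illustration) that both optical signals scale with dose rate
# but drift slightly with temperature
radioluminescence = dose_rate * (1.0 - 0.004 * (temperature - 37.0)) + rng.normal(0, 0.1, n)
afterglow = 0.3 * dose_rate * (1.0 + 0.002 * (temperature - 37.0)) + rng.normal(0, 0.05, n)

X = np.column_stack([radioluminescence, afterglow, temperature])
X_train, X_test, y_train, y_test = train_test_split(X, dose_rate, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out synthetic data: {model.score(X_test, y_test):.3f}")
```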

    Next, the researchers validated the dosimeter’s performance in three anaesthetized adult rabbits. Following surgical insertion of a capsule in the stomach of each animal, they performed CT scans to identify the capsule’s precise position and angle. They then irradiated each animal multiple times over a 10 hr time period using a progressive X-ray dose rate.

    “Our wireless dosimeter accurately determined the dose of radiation in the stomach, as well as minute changes in pH and temperature, in real time,” the team reports. “The capsule inserted in the gastrointestinal cavity was capable of rapidly detecting changes in pH and temperature near irradiated organs.”

    Before the dosimeter capsule can be clinically tested, a positioning system needs to be developed to place and anchor it at the target site after it has been swallowed. Better and more accurate calibration of the conversion from optical signal to absorbed dose is also needed prior to clinical evaluation.

    The potential of the new dosimeter extends beyond gastrointestinal applications. The researchers envision its use for dose monitoring of prostate cancer brachytherapy, for example, using a capsule anchored in the rectum. Real-time measurements of absorbed dose in nasopharyngeal or brain tumours may also be feasible if a smaller sized capsule can be placed in the upper nasal cavity.

    The post Swallowable X-ray dosimeter monitors radiotherapy in real time appeared first on Physics World.

    ]]>
    Research update Affordable and ingestible capsule could help improve gastric cancer treatments by enhancing the precision of radiotherapy https://physicsworld.com/wp-content/uploads/2023/05/NUS-researchers.jpg newsletter1
    Particle physicists get AI help with beam dynamics https://physicsworld.com/a/particle-physicists-get-ai-help-with-beam-dynamics/ Tue, 23 May 2023 16:00:03 +0000 https://physicsworld.com/?p=108003 New machine learning algorithm accurately reconstructs the shapes of particle accelerator beams from tiny amounts of data

    The post Particle physicists get AI help with beam dynamics appeared first on Physics World.

    ]]>
    Researchers in the US have developed a machine learning algorithm that accurately reconstructs the shapes of particle accelerator beams from tiny amounts of training data. The new algorithm should make it easier to understand the results of accelerator experiments and could lead to breakthroughs in interpreting them, according to team leader Ryan Roussel of the SLAC National Accelerator Laboratory.

    Many of the biggest discoveries in particle physics have come from observing what happens when beams of particles smash into their targets at close to the speed of light. As these beams become ever more energetic and complex, maintaining tight control over their dynamics becomes crucial for keeping the results reliable.

    To maintain this level of control, physicists need to predict beam shapes and momenta as accurately as possible. But beams may contain billions of particles, and it would take vast amounts of computing power to calculate the positions and momenta of each particle individually. Instead, experimenters calculate simplified distributions that provide a rough idea of the beam’s overall shape. This makes the problem computationally tractable, but it also means that much useful information contained in the beam is thrown away.
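
    The sketch below shows the sort of “simplified distribution” this refers to: a million simulated particle coordinates in one transverse plane reduced to a handful of second-order moments (RMS size, divergence and emittance). The particle numbers and beam parameters are illustrative, not taken from any real accelerator.

```python
# Reduce simulated particle coordinates to second-order beam moments.
# Illustrative beam parameters only.
import numpy as np

rng = np.random.default_rng(1)
n_particles = 1_000_000
x = rng.normal(0.0, 1e-3, n_particles)               # transverse position (m)
xp = rng.normal(0.0, 1e-4, n_particles) + 5e-3 * x   # divergence (rad), weakly correlated

sigma_x = np.std(x)
sigma_xp = np.std(xp)
cov_xxp = np.mean((x - x.mean()) * (xp - xp.mean()))

# RMS (geometric) emittance: area of the beam's phase-space ellipse
emittance = np.sqrt(sigma_x**2 * sigma_xp**2 - cov_xxp**2)
print(f"sigma_x = {sigma_x*1e3:.3f} mm, sigma_x' = {sigma_xp*1e3:.3f} mrad")
print(f"RMS emittance = {emittance*1e6:.3f} mm mrad")
```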

    “In order to develop accelerators that can control beams more precisely than current methods, we must be able to interpret experimental measurements without resorting to these approximations,” Roussel says.

    AI assistance

    For the team at SLAC, the predictive power of AI, plus advanced methods for tracking particle motions, offered a promising potential solution. “Our study introduced two new techniques to efficiently interpret detailed beam measurements,” Roussel explains. “These physics-informed machine learning models need significantly less data than conventional models to make accurate predictions.”

    The first technique, Roussel continues, involves a machine learning algorithm that incorporates scientists’ present understanding of particle beam dynamics. This algorithm allowed the team to reconstruct detailed information about the distributions of particle positions and momenta along all three axes parallel and perpendicular to the beam’s direction of travel, based on just a few measurements. The second technique is a clever mathematical approach that enabled the team to integrate beam simulations into the models used to train the machine learning algorithm. This improved the accuracy of the algorithm’s predictions even further.
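
    The team’s actual models reconstruct full phase-space distributions with neural networks and differentiable beam simulations; the stripped-down sketch below only illustrates the underlying idea of embedding known beam physics in a fit, using a classical thin-lens quadrupole-scan model and a handful of assumed measurements.

```python
# Toy illustration of embedding beam physics in a fit (not the SLAC/Argonne code):
# a thin-lens quadrupole-scan model is built into the residuals, so three
# beam-matrix parameters are recovered from a few measured spot sizes.
import numpy as np
from scipy.optimize import least_squares

L = 2.0                               # drift from quadrupole to screen (m), assumed
k_values = np.linspace(-4.0, 4.0, 7)  # quadrupole strengths used in the scan (1/m)

def predicted_sigma_x(params, k):
    """RMS beam size at the screen from the initial 2x2 sigma matrix."""
    s11, s12, s22 = params            # <x^2>, <x x'>, <x'^2> at the quad entrance
    # Thin-lens quad followed by a drift: x_screen = (1 - L*k)*x + L*x'
    a, b = 1.0 - L * k, L
    arg = a**2 * s11 + 2 * a * b * s12 + b**2 * s22
    return np.sqrt(np.maximum(arg, 0.0))   # guard against negative trial values

# Synthetic "measurements" from a known beam, with a little noise
true_params = (1.0e-6, -2.0e-7, 2.0e-7)    # m^2, m rad, rad^2
rng = np.random.default_rng(2)
measured = predicted_sigma_x(true_params, k_values) * (1 + rng.normal(0, 0.02, k_values.size))

def residuals(params):
    return predicted_sigma_x(params, k_values) - measured

fit = least_squares(residuals, x0=(1e-6, 0.0, 1e-7))
print("Recovered sigma-matrix elements:", fit.x)
```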

    Roussel and colleagues tested these techniques using experimental data from the Argonne Wakefield Accelerator at the US Department of Energy’s Argonne National Laboratory in Illinois. Their objective was to reconstruct the position and momentum distributions of energetic electron beams after the beams pass through the linear accelerator. “We found that our reconstruction method was able to extract significantly more detailed information about the beam distribution from simple accelerator physics measurements than conventional methods,” Roussel says.

    Highly accurate predictions

    After training their model with just 10 samples of data, the researchers found that they could predict the electron beams’ dynamics in a further 10 samples extremely accurately, based on simple sets of measurements. With previous approaches, several thousand samples would have been needed to yield the same quality of results.

    “Our work takes significant steps towards achieving the accelerator and beam physics communities’ goals of developing techniques to control particle beams down to the level of individual particles,” Roussel says.

    The researchers, who report their work in Physical Review Letters, hope the flexibility and detail of the new approach will help future experimenters extract the maximum amount of useful information from experimental data. In time, such tight control could even bring physicists a step closer to answering fundamental questions about the nature of matter and the universe.

    The post Particle physicists get AI help with beam dynamics appeared first on Physics World.

    ]]>
    Research update New machine learning algorithm accurately reconstructs the shapes of particle accelerator beams from tiny amounts of data https://physicsworld.com/wp-content/uploads/2023/05/Particle-accelerator-beam-and-AI.png newsletter1
    Artificial retina enables perception and encoding of mid-infrared radiation https://physicsworld.com/a/artificial-retina-enables-perception-and-encoding-of-mid-infrared-radiation/ Tue, 23 May 2023 13:03:56 +0000 https://physicsworld.com/?p=107989 Optoelectronic retina enables development of smarter and more efficient IR machine vision systems

    The post Artificial retina enables perception and encoding of mid-infrared radiation appeared first on Physics World.

    ]]>
    Mid-infrared optoelectronic retina

    In the push to develop new computing systems that mimic the brain, researchers in Singapore and China have devised an artificial retina device for the perception and recognition of objects emitting mid-infrared radiation (MIR). Inspired by how human eyesight works, the neuromorphic device is a step towards better MIR machine vision, which is an important technology for medical diagnosis, autonomous driving, intelligent night vision and military defence.

    Current infrared machine vision has physically separated sensory and processing units, which creates large amounts of redundant data. This is not ideal because it results in computing and energy inefficiencies. In contrast, the human visual sensory system is very efficient, with a compact retina that perceives and processes visual data – more than 80% of the information our brain receives – before passing it to the visual cortex for further processing. The retina’s photoreceptors receive continuous light stimuli, which are converted into electrical potentials, and these potentials are then encoded into trains of electrical pulses called spikes. A train of spikes containing the stimulus information then travels to the visual cortex.

    Inspired by the biological retina, Fakun Wang and Fangchen Hu at Nanyang Technological University in Singapore, together with colleagues, have invented an optoelectronic retina based on a 2D van der Waals heterostructure. This heterostructure consists of a layer of black arsenic phosphorus (b-AsP) on top of a layer of molybdenum telluride (MoTe2). These materials were chosen for their fast response to light and their high absorption efficiency.

    Optically driven

    Previous studies focused on developing neuromorphic devices that are sensitive to light with visible and near-infrared (NIR) wavelengths. This study extends the range of wavelengths to the MIR. Another important novelty of this latest research is that the encoding function is driven optically, rather than electrically, which is promising for high-speed operation.

    Programmable NIR laser pulses, applied simultaneously with MIR laser pulses, encode the information into spike trains. The stochastic NIR pulses change the MIR-excited current in the device, where a spike is generated when the current exceeds the threshold value. This emulates the encoding in the human retina. The device gives a stable response to light even for a NIR pulse frequency of 100 kHz, which guarantees high-precision MIR intensity coding.
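
    A minimal numerical sketch of this threshold-crossing encoding is given below; the stimulus, pulse probability and threshold values are arbitrary assumptions used only to show how a higher MIR intensity translates into a higher spike rate.

```python
# Minimal sketch of threshold-crossing spike encoding with stochastic NIR pulses.
# All signal levels and the threshold are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(1000) * 1e-5                               # 10 ms sampled at 100 kHz
mir_intensity = 0.5 + 0.5 * np.sin(2 * np.pi * 200 * t)  # slowly varying MIR stimulus

# Stochastic NIR pulses modulate the MIR-excited photocurrent
nir_pulses = rng.random(t.size) < 0.3        # 30% chance of a NIR pulse per sample
current = mir_intensity * (1.0 + 0.8 * nir_pulses)

threshold = 1.2
spikes = current > threshold                 # a spike whenever the current crosses threshold

# The spike rate in a short window then encodes the local MIR intensity
window = 100                                 # 1 ms windows
rates = spikes.reshape(-1, window).mean(axis=1)
print("Spike rate per 1 ms window:", np.round(rates, 2))
```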

    Adaptive systems

    Another important feature of intelligent systems is adaptation. To adapt to its visual environment, the MIR vision system should have a wide dynamic working range of MIR intensities and high encoding precision. The researchers tested their device with a metal mask containing nine hollow figures of the number “3”, illuminated by a MIR laser to imitate real MIR targets such as a tissue sample. They found excellent encoding precision, with the encoded image matching the original at a precision of over 97%. The team also showed that the NIR pulse parameters can be used to control the dynamic working range and precision.

    Furthermore, they connected their device to what is considered one of the most efficient and brain-like artificial neural networks (ANNs) called a spiking neural network. In this ANN, neurons communicate by sending and receiving spikes as information carriers, much like in the brain. They used this system to classify MIR images of numerical figures in the MNIST data set, which is used to train image processing systems, and achieved an accuracy greater than 96%.

    Wang, who led the research, says that their artificial retina is compatible with CMOS technology, and suggests two ways to further the research: “One is to improve device functions, such as integrating the memory function into this device, to realize the integration of perception, encoding, memory and processing. The other is to combine the device with guided-wave nanophotonics in order to achieve faster operating speeds and lower energy consumption.”

    The research is described in Nature Communications.

    The post Artificial retina enables perception and encoding of mid-infrared radiation appeared first on Physics World.

    ]]>
    Research update Optoelectronic retina enables development of smarter and more efficient IR machine vision systems https://physicsworld.com/wp-content/uploads/2023/05/Arso-list.jpg
    Gravitational lensing of supernova yields new value for Hubble constant https://physicsworld.com/a/gravitational-lensing-of-supernova-yields-new-value-for-hubble-constant/ Tue, 23 May 2023 07:44:51 +0000 https://physicsworld.com/?p=107957 New measurement adds to the mystery of the universe’s expansion

    The post Gravitational lensing of supernova yields new value for Hubble constant appeared first on Physics World.

    ]]>
    A study of how light from a distant supernova was gravitationally lensed as it travelled to Earth has been used to calculate a new value for the Hubble constant – an important parameter that describes the expansion of the universe. While this latest result has not surprised astronomers, similar observations in the future could help us understand why different techniques have so far yielded very different values for the Hubble constant.

    The universe has been expanding since it was created in the Big Bang 13.7 billion years ago. In the 1920s, the American astronomer Edwin Hubble observed that galaxies further away from Earth appear to be moving away from Earth faster than galaxies that are closer to us. He did this by measuring the redshift of the light from these galaxies — which is the stretching of the wavelength of light that occurs when an object recedes from an observer.

    The linear relationship between distance and speed that he measured is described by the Hubble constant and astronomers have since developed several techniques to measure it.
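
    As a rough worked example of that linear relation – using an illustrative round-number value for the constant, not any of the measurements discussed below:

```python
# A worked example of Hubble's law, v = H0 * d, with round illustrative numbers.
H0 = 70.0            # km/s per megaparsec (illustrative value only)
distance = 100.0     # Mpc

velocity = H0 * distance
redshift = velocity / 3.0e5   # v/c approximation, valid only at low redshift

print(f"A galaxy {distance:.0f} Mpc away recedes at about {velocity:.0f} km/s (z ~ {redshift:.3f})")
```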

    Astronomers are puzzled, however, because different measurements have delivered very different values for the Hubble constant. Measurements of the cosmic microwave background (CMB) radiation by the European Space Agency’s Planck satellite give a value of about 67 km/s/Mpc. However, measurements involving observations of type Ia supernovae by the SH0ES collaboration give a value of about 73 km/s/Mpc. The uncertainties in these measurements are about 1–2%, so there is a clear tension between the two techniques. Astronomers want to know why, and to find out they are developing new ways to measure the Hubble constant.

    Now, astronomers have measured the Hubble constant using light from a supernova that exploded 9.34 billion years ago. On its way to Earth, the light passed through a galaxy cluster and was deflected by the cluster’s immense gravitational field, which focused the light towards Earth. This effect is called gravitational lensing.

    Lumpy mass distribution

    The lumpy distribution of mass in the cluster created a complex gravitational field that sent the supernova’s light along several different paths towards Earth. When the supernova was first observed in 2014, it appeared as four points of light. As the four points faded, a fifth appeared 376 days later. This light was delayed by the longer path it had taken through the cluster.

    During those 376 days the universe had expanded, which means that the wavelength of the late-arriving light was redshifted. By measuring this extra redshift, a team led by Patrick Kelly of the University of Minnesota was able to calculate the Hubble constant. Using several different mass distribution models for the cluster, the team came up with values for the constant of either 64.8 km/s/Mpc or 66.6 km/s/Mpc.

    At first glance, the supernova time-delay measurement seems to favour Planck’s value of the Hubble constant over that of SH0ES. However, previous time-delay measurements of quasar light by the H0LiCOW collaboration give a value of 73.3 km/s/Mpc – closer to SH0ES.

    While this might seem confusing, Kelly’s colleague Tommaso Treu of the University of California, Los Angeles points out that the latest results are not surprising.

    “They are not very different,” he says. “Within the uncertainties, this new measurement is consistent with all three [Planck, SH0ES and H0LiCOW].”

    Sherry Suyu of the Max Planck Institute for Astrophysics in Germany, who leads the H0LiCOW project and was not involved in these new time-delay measurements, also doesn’t necessarily see a paradox.

    Future promise

    “This value [from the supernova] is from a single lens system, and given its error bars, the measurement is statistically consistent with the results from H0LiCOW’s lensed quasars,” she says.

    The uncertainty in the supernova time-delay measurement is related to how mass is distributed in the lensing cluster – how much dark matter and baryonic (normal) matter is present and how it is spread throughout the cluster. Kelly and Treu’s team used a variety of models, and the differences between them form a large part of the uncertainty in their values for the Hubble constant.

    “The precision of the low Hubble constant measurements presented here just isn’t enough to argue against the higher SH0ES value,” says Daniel Mortlock of Imperial College, London, who was also not involved in the research.

    Still, Mortlock thinks that this calculation of the Hubble constant from the time-delay measurement of a supernova is a landmark. So far, only a couple of lensed supernovae have been discovered, but when the Vera C. Rubin Observatory in Chile, which sports a giant 8.4-metre survey telescope, comes online in the coming years, the number of lensed supernova discoveries should increase dramatically.

    “Lovely” work

    “Overall I think it’s a lovely piece of work to make this measurement, but perhaps the most exciting aspect of this is future promise, since surveys like Rubin will discover many more systems of this type,” Mortlock says.

    With increased numbers of lensed supernovae will come greater precision in the measurements of the Hubble constant, which will help reduce the error bars and confirm whether these data support the Planck or SH0ES results. Some theorists have even suggested that new physics may be required to explain the Hubble tension, assuming that it is real and not an unrecognized systematic error in the observations.

    “Clearly more precision is needed to contribute to the resolution of the Hubble tension,” concludes Treu. “But this is an important first step.”

    The research is described in Science.

    The post Gravitational lensing of supernova yields new value for Hubble constant appeared first on Physics World.

    ]]>
    Research update New measurement adds to the mystery of the universe’s expansion https://physicsworld.com/wp-content/uploads/2023/05/Gravitational-lensing.jpg newsletter1
    UK pins ‘net zero’ hopes on carbon capture https://physicsworld.com/a/uk-pins-net-zero-hopes-on-carbon-capture/ Mon, 22 May 2023 16:00:48 +0000 https://physicsworld.com/?p=107951 The UK has announced a £20bn package for carbon-capture projects, but not everyone is convinced it will be enough to deliver the country’s net-zero targets

    The post UK pins ‘net zero’ hopes on carbon capture appeared first on Physics World.

    ]]>
    The UK government has pledged up to £20bn for projects that could capture and store up to 30 million tonnes of carbon dioxide (MtCO2) annually by the end of the decade. The government has also announced details of eight industrial plants that will likely be the first to capture carbon for storage in the UK. Despite the ambitious plans, however, some are unconvinced that the initiative will be enough to help the country meet its net-zero carbon ambitions.

    The UK’s strategy for carbon capture and storage was originally unveiled in an energy security plan published in October 2021. But there are currently no commercial carbon-capture-and-storage facilities in the UK and funding to support the initiative was only announced in the budget in March. As well as aiming to capture and store 20–30 MtCO2 each year by 2030, the government claims the UK’s continental shelf could store 78,000 MtCO2, equating to around 200 years of the UK’s annual emissions. 

    The government’s plans are based on four industrial clusters that will link different carbon-capture sites and be responsible for transporting and storing carbon dioxide. The first two clusters – HyNet in north-west England and North Wales, and the East Coast Cluster in Teesside and Humber – have already been approved and are due to start up by the mid-2020s. In its updated energy security plan released in March, the government announced details of eight projects that could be the first to capture carbon in these two clusters.

    The eight projects cover industries such as hydrogen production, energy from waste, a gas-fired power station, and cement and lime works. The projects will now enter negotiations to agree contracts with the government, but funding is not yet guaranteed for any of them. If approved, however, they would capture carbon dioxide from these industrial processes and put it into pipelines for transport to offshore storage sites, such as depleted oil and gas fields. 

    According to the UK government, its plans could – when combined with private investment – create up to 50,000 jobs and lead to a new sustainable industry. The strategy also includes a process to identify and establish the two other industrial clusters, which are due to come online by 2030. It is thought that these are likely to be the Acorn cluster in north-east Scotland and the Viking project based around the port of Immingham in Lincolnshire. 

    The government hopes the four clusters will eventually capture and store up to 6 MtCO2 from industry every year. They will also aim to remove at least 5 MtCO2 from the atmosphere per year through techniques such as direct air capture with carbon storage (DACSS) and bioenergy with carbon capture and storage (BECCS). As part of these objectives, the government says that power plants and hydrogen-production facilities equipped with carbon-capture technology will help decarbonize the UK’s electricity system and create a source of low-carbon “blue” hydrogen. 

    It is not clear, however, where the remaining 9–19 MtCO2 will come from to make up the 20–30 MtCO2 target. There is also concern that the UK will continue to extract oil and gas from the North Sea. In March almost 700 scientists signed an open letter written by Emily Shuckburgh from the University of Cambridge and Bob Ward from the Grantham Research Institute on Climate Change and the Environment, which warned that new oil and gas fields would undermine the UK government’s claimed global leadership on net-zero emissions and make it harder for the world to limit warming to 1.5 °C.

    It is hard to see how the UK can meet its 2050 net-zero target if new oil and gas sites are licensed

    Naomi Vaughan, a climate scientist from the University of East Anglia, told Physics World that carbon capture and storage is “needed” to get to net zero in the UK. But she believes it is hard to see how the UK can meet its 2050 net-zero target if new oil and gas sites are licensed. Vaughan claims that the £20bn investment “looks like a bit of a headline-grabbing figure” and that the government’s plans lack detail on timelines and how that figure will be reached.

    She also believes that even when everything technologically feasible to decarbonize has been achieved, net zero is still likely to require re-engineered forms of atmospheric greenhouse-gas removal like BECCS and DACSS. Vaughan adds that good oversight and regulation will be needed to ensure carbon capture is only used where needed, and not as a shortcut.

    Stuart Haszeldine, a climate scientist from the University of Edinburgh, however, thinks the UK’s plans are “terrific”. He believes the UK has no choice but to use carbon capture and storage, but argues that the target of 30 MtCO2 by the end of the decade is “only about half of what we need to do” to tackle climate change. While Haszeldine welcomes the range of projects put forward for approval, he wants a shift from government to industry so that progress can accelerate.

    The post UK pins ‘net zero’ hopes on carbon capture appeared first on Physics World.

    ]]>
    Analysis The UK has announced a £20bn package for carbon-capture projects, but not everyone is convinced it will be enough to deliver the country’s net-zero targets https://physicsworld.com/wp-content/uploads/2023/05/News-C0301863-CO2_cold_capture_facility.jpg
    New type of quasiparticle emerges to tame quantum computing errors https://physicsworld.com/a/new-type-of-quasiparticle-emerges-to-tame-quantum-computing-errors/ Mon, 22 May 2023 14:00:18 +0000 https://physicsworld.com/?p=107945 The existence of so-called non-Abelian anyons has been reported by four independent teams, though plenty of challenges remain, writes Philip Ball

    The post New type of quasiparticle emerges to tame quantum computing errors appeared first on Physics World.

    ]]>
    Errors are the Achilles’ heel of quantum computation, cropping up at random and threatening to ruin calculations. But they might, in principle, be tamed by encoding quantum information in a type of quasiparticle called a non-Abelian anyon. Evidence that such quasiparticles may exist has now been reported independently by teams at Google, Microsoft, the quantum-computing firm Quantinuum, and Zhejiang University in China.

    The new reports “make a very intriguing advance in quantum computing”, says Jiannis Pachos, a physicist at the University of Leeds, UK. “They are all things we have been waiting a long time to see,” agrees Steven Simon, a theorist at the University of Oxford, UK. “It’s a very exciting time for the field.” Still, Simon cautions that none of the results will transform quantum computing yet. “They all have shortcomings, which means there is lots of room for further work,” he says.

    Errors in quantum computing

    Quantum computers encode binary information in quantum bits, or qubits, which can take on values of 1, 0 or a superposition of the two. Various physical systems can act as qubits, including ions, photons and tiny components made from superconducting material. Quantum computations are performed by entangling many qubits so that their states are interdependent, then manipulating them according to some algorithm before reading out their final states.

    The trouble is that random noise in the environment can change qubit states during the computation, making the calculation less reliable. This can happen in classical computers too, but there the problem is relatively easy to solve, for example by keeping several copies of each bit and assigning its value by majority rule. That won’t work for qubits, however, because quantum mechanics prohibits the copying of unknown quantum states, while the very nature of quantum computing requires that the qubit states remain unknown during the computation. As a result, researchers have had to search for more complicated methods of error correction.
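
    For comparison, the classical majority-rule trick mentioned above is easy to write down (the copy count and error probability here are arbitrary); it is exactly this copying step that is forbidden for unknown qubit states.

```python
# Sketch of classical error correction by redundancy: keep three copies of each
# bit and take a majority vote. Error probability and copy count are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n_bits, p_flip = 100_000, 0.05

data = rng.integers(0, 2, n_bits)
copies = np.tile(data, (3, 1))                     # three redundant copies of each bit
noise = rng.random(copies.shape) < p_flip          # each copy flips independently
noisy = copies ^ noise.astype(int)

decoded = (noisy.sum(axis=0) >= 2).astype(int)     # majority rule

print(f"Raw error rate per copy: {p_flip:.3f}")
print(f"Error rate after majority vote: {(decoded != data).mean():.4f}")
```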

    Majorana particles to the rescue

    An alternative approach is to use qubits that are more resistant to errors. In a paper written in 1997 (and published in 2003), physicist Alexei Kitaev showed that it might be possible to create error-protected qubits from hypothetical entities called Majorana particles. First proposed in 1937 by the physicist Ettore Majorana, these particles have a peculiar property: they are their own antiparticle.

    No-one knows whether Majorana particles exist as fundamental particles. However, Kitaev showed that they can, in principle, be created from collective states of electrons called quasiparticles. He also showed that, if used as qubits, these states would be “topologically protected”, meaning that they can’t be randomly flipped by noise without “breaking” the quasiparticle, just as you can’t get rid of the twist in a Möbius strip without cutting it. Kitaev later proposed that these Majorana quasiparticles might be engineered as electronic defect states at the ends of quantum nanowires made from (for example) a semiconductor situated close to a superconductor.

    Photo of Ajuan Cui with her long black hair pulled back, wearing safety glasses and a fabric face mask, peering into the port of a vacuum system in a laboratory

    These defect states are known as Majorana zero modes (MZMs), and they belong to a class of hypothetical (quasi)particles termed non-Abelian anyons. The “anyon” part signifies that the particles are neither fermions nor bosons – a property hypothesized by physicist Frank Wilczek in 1982 and eventually observed (by teams in the US and France) in 2020 in electronic quasiparticles. Those observations, however, were of Abelian anyons, which are different from the non-Abelian variety Kitaev proposed for error-protected qubits. Specifically, if two non-Abelian anyons exchange places, their quantum states change in a detectable way even though the particles themselves are identical. Abelian anyons, in contrast, acquire no such observable change – only their quantum phase is altered.

    The tangled web we weave

    To make qubits from non-Abelian anyons, Kitaev suggested moving the anyons around to weave their trajectories together. This weaving process is called braiding, and it lets the anyons swap places in a way that alters their observable states. This can then be used to perform a logic operation. Enacting a quantum algorithm thus entails braiding non-Abelian anyons in a particular way, then reading out the result.

    The main benefit of this set-up is that, in effect, the topology of the braiding “remembers” the qubit states. Errors can only arise if a braid is cut, which requires a lot of energy. Under these circumstances, the computation is said to be topologically error-corrected.

    In 2018, researchers in China and the UK (including Pachos) simulated the braiding of non-Abelian MZMs in an optical system. In that experiment, the quantum states corresponded to photon polarization states. The most recent results take this a step further by showing how to implement the idea in real many-qubit quantum circuits.

    The difficult task of making MZMs

    While many groups have pressed ahead with making quantum computers from conventional, non-Majorana qubits supported by quantum error-correcting codes, Microsoft has pinned its quantum computing hopes on making topologically protected qubits from MZMs. “Microsoft’s longstanding belief is that engineering an error-protected topological qubit is the path to delivering quantum computing at scale”, says Chetan Nayak of Microsoft Quantum in Santa Barbara, California. “In engineering a topological qubit directly in the hardware, we will be able to create a new type of qubit that is fast, small, and controllable.”

    Photo of Chetan Nayak, a South Asian man in a blue button-down shirt, standing in front of a green chalkboard full of equations and diagrams

    But this has proved extremely hard, and the field has been dogged with claims that have subsequently crumbled under scrutiny. Part of the challenge has been to identify a clear signature of MZMs that distinguishes them from other quasiparticles. In their latest paper, Nayak and colleagues report evidence for what they think is a discriminating criterion for MZMs. This criterion is known as the topological gap protocol, and the Microsoft researchers say that their system – a thin film of semiconducting indium arsenide sitting below a 120-nanometre-wide strip of superconducting aluminium – passes it.

    Making MZMs in such tiny objects would come with a big advantage. “In principle you could put a hell of a lot of them on a chip,” says Simon. However, the Microsoft paper is not yet peer-reviewed, and the claim will need to be checked carefully. “The hunt for MZMs has over-promised many times,” Simon adds. Though the new result looks promising, he suspects that, ultimately, claims for MZMs will only be accepted “if you can make and manipulate a qubit” from them. “Show me a qubit and then we’ll talk,” he says.

    Anyons or simulations of anyons?

    Meanwhile, teams at Google Quantum AI in the US, Zhejiang University in Hangzhou, China, and Quantinuum (formerly Honeywell Quantum Solutions and Cambridge Quantum Computing) in Germany and the US all claim to have made non-Abelian anyons from clever combinations of more conventional qubits – superconducting circuits for the Google and Zhejiang groups, and trapped ions for Quantinuum.

    Photo of the Zhejiang University chip

    The question here is the meaning of the word “made”. In some sense, one could say that all three results involve not anyons as such but quantum simulations of them – a bit like simulating atoms on a classical computer. But the distinction is blurry. Because qubits are themselves quantum objects, they can be used to build exactly the same quantum wavefunction that an anyon would have. “It’s a fuzzy distinction between simulating matter and having matter,” says Simon. “But what they can say for sure is that they’ve made the wavefunction they want.”

    Photo of a Google engineer working on the team's dilution refrigerator

    The Google and Zhejiang teams used similar approaches on, respectively, a 25-qubit chip and a 68-qubit array. In both experiments, the anyons correspond to defects in a square lattice of interacting qubits, a little like dislocations in a crystal lattice. The lowest-energy (ground) state of this lattice structure consists of wavefunctions corresponding to Abelian anyons – and as Kitaev explained, this in itself can be used to construct an error-correcting qubit protected with an error-correcting code called the surface code.

    To make defects corresponding to error-protected non-Abelian anyons, the researchers manipulated the interactions between neighbouring qubits in a controlled sequence. Both teams show that the resulting quasiparticles can be braided by moving them around one another. The Google researchers, led by Pedram Roushan and Trond I Andersen, also created a well-known quantum-entangled state (a Greenberger-Horne-Zeilinger or GHZ state) from their anyons – a proof of principle for how the quasiparticles can be manipulated.

    Another route to non-Abelian anyons

    The Quantinuum group, meanwhile, made non-Abelian anyons in a different way. Using the Honeywell 32-qubit H2 quantum processor, which holds ytterbium ions in an electromagnetic trap and alters their quantum states using lasers, they created a quasi-one-dimensional chain of interacting trapped-ion qubits.

    Here, the anyons correspond to natural excitations of the ground state of the qubit system – which technically means they are not quasiparticles, since quasiparticles must be excited states. “The Majorana zero modes at the end of superconducting wires in the Microsoft experiment and the lattice defects in the Google experiment are non-Abelian defects,” emphasizes Ashvin Vishwanath of Harvard University, who collaborated with the Quantinuum team. “Unlike our experiment, they are not realized on top of true non-Abelian topological order.”

    Photo of the Quantinuum H2 chip, a rainbow-coloured square with an hourglass-shaped structure in the centre and fine wires connecting to the edges

    Nevertheless, all three results are “neat quantum simulations”, says Chris Monroe, a physicist at Duke University in North Carolina, US and chief scientist of the ion-trap quantum-computing company IonQ. He adds that such experiments could probably have been done on quantum-computing platforms years ago, but they have likely been motivated by recent interest in (and controversy about) MZMs in solid-state systems. “It’s a pretty interesting confluence of science and sociology,” he says.

    Not yet universal

    In Roushan’s view, the two approaches to braiding non-Abelian anyons – using superconducting qubits and trapped ions – are complementary. “There are advantages and disadvantages to each approach, but it is too early to say exactly how they will impact the respective developments,” he says.

    In any event, both approaches are perhaps best regarded as proofs that non-Abelian anyon states – that is, the corresponding wavefunctions – exist.  But Pachos warns that this is just a first step. “It is not clear yet how this can be moved to fault tolerance,” he says.

    What is more, because these anyon states are made from conventional, error-prone qubits, no-one yet knows whether they can be made stable enough to truly act as topologically protected qubits. “To achieve fault tolerance, active error-correcting techniques will be required for our quantum simulation method,” acknowledges Song Chao, a physicist in the Zhejiang team. “A true breakthrough on the technology side will require showing that these protected qubits are more robust than the underlying qubits they are built from,” Vishwanath adds.

    A further caveat is that none of the current experiments makes anyons with the right properties to provide qubits for “universal” quantum computing, which can embody any quantum algorithm. “They do have computational abilities, but none are universal,” Simon explains.

    Still, all of these approaches “give a clear way to move forward”, says Pachos. He adds that “the field is very tense, as the conclusive discovery of [physical] non-Abelian anyons will lead to a Nobel prize”. But we are not there yet.

    • This article was amended on 23/5/2023 to clarify Pedram Roushan’s comment on complementary approaches, and on 30/05/2023 to correct a reference to work done in 2018 by Jiannis Pachos and colleagues.

    The post New type of quasiparticle emerges to tame quantum computing errors appeared first on Physics World.

    ]]>
    Analysis The existence of so-called non-Abelian anyons has been reported by four independent teams, though plenty of challenges remain, writes Philip Ball https://physicsworld.com/wp-content/uploads/2023/05/nonAbelian_braiding.png newsletter
    FLASH radiotherapy creates a stir at ESTRO trade show https://physicsworld.com/a/flash-radiotherapy-creates-a-stir-at-estro-trade-show/ Mon, 22 May 2023 10:00:59 +0000 https://physicsworld.com/?p=107935 The ESTRO 2023 congress saw exhibitors highlight a range of electron beam-based FLASH radiotherapy systems

    The post FLASH radiotherapy creates a stir at ESTRO trade show appeared first on Physics World.

    ]]>
    Valeria Preda and David White

    FLASH radiotherapy, in which radiation is delivered at ultrahigh dose rates (40 Gy/s or above), offers promise to spare healthy tissue while still effectively killing cancer cells. This so-called FLASH effect has been demonstrated extensively in preclinical studies over the past few years, with the first patient treatment taking place in 2019 and results from a first in-human trial reported last year.

    At the recent ESTRO 2023 congress in Vienna, FLASH featured heavily among the scientific presentations. And the technique was also making an impact at the trade show. “At ESTRO, we noticed that interest in FLASH is really high,” said Valeria Preda from Italian radiotherapy specialist SIT. “Ninety percent of the people who have visited us are more interested in FLASH.”

    SIT was showcasing its ElectronFlash system, a dedicated research accelerator for FLASH radiotherapy. Preda noted that the first system was installed at Institut Curie in France (the site at which Vincent Favaudon first reported the FLASH effect back in 2014), with further systems installed at Antwerp University and Pisa University, and another shortly to follow in Madrid (where the first patients will be treated with ElectronFlash as part of a clinical trial).

    Designed for pre-clinical studies on cells, organoids and small animals, the ElectronFlash comes in three versions, with energy ranges of 5–7, 7–9 and 10–12 MeV, and a dose rate adjustable between 0.005 and 10,000 Gy/s. The system allows for modification of the dose-per-pulse, pulse width and pulse repetition frequency, and can be installed in any standard radiotherapy bunker.
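
    For readers unfamiliar with pulsed electron beams, the sketch below shows how dose-per-pulse, pulse width and pulse repetition frequency combine into mean and in-pulse dose rates. The numbers are purely illustrative and are not SIT’s machine specifications.

```python
# How pulse parameters combine into dose rates. Illustrative numbers only.
dose_per_pulse = 1.0           # Gy
pulse_width = 4.0e-6           # s
pulse_repetition_freq = 100.0  # Hz

mean_dose_rate = dose_per_pulse * pulse_repetition_freq   # averaged over the delivery
in_pulse_dose_rate = dose_per_pulse / pulse_width          # within a single pulse

print(f"Mean dose rate: {mean_dose_rate:.0f} Gy/s")
print(f"Instantaneous dose rate within a pulse: {in_pulse_dose_rate:.1e} Gy/s")
```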

    Alongside its pre-clinical offering, SIT is also developing a clinical device – the LIAC FLASH – designed for both clinical and research applications. Preda explained that SIT’s LIAC system was originally developed for intra-operative electron radiotherapy (IOeRT), in which radiation is delivered during a tumour excision operation using a beam of electrons.

    IOeRT works by delivering a single dose of irradiation – or a boost that reduces the number of fractions – to a surgically exposed tumour or tumour bed, while the normal tissues are protected by retraction or a temporarily inserted shield. The FLASH workflow with the newly developed device will be similar to IOeRT, explained SIT/Vertec Scientific’s David White, but much faster, with irradiation times reduced from minutes to milliseconds.

    The team is now working on CE certification for the LIAC FLASH, which will offer both conventional IOeRT and FLASH dose rates, and market launch is predicted for the first half of 2025. The new system will address all the IOeRT indications currently included within ESTRO, ASTRO and NCCN guidelines. “I see FLASH technology being the next big step forward for IOeRT,” White told Physics World.

    Deep treatment

    Also exhibiting its latest FLASH technology offerings was THERYQ, a spin-off of French manufacturer PMB-ALCEN. PMB developed the Oriatron electron linear accelerator, employed for early FLASH studies and used at Lausanne University Hospital (CHUV) for the first FLASH treatment of a patient.

    Ludovic Le Meunier

    Building on this expertise, THERYQ developed FLASHKNiFE, a mobile treatment system that combines an ultrahigh dose rate (up to 350 Gy/s) electron linac with an interactive robot. The system delivers electron beams with energies of 6 to 10 MeV, and can treat at depths of up to 3 cm. The company released the first prototype machine in 2021 and is soon to start clinical trials in four European centres, including CHUV.

    The initial trial is designed to demonstrate the safety of the system when used in external-beam treatments of skin cancer, with CE certification slated for 2025. A second trial next year will study the use of the FLASHKNiFE for intra-operative radiotherapy of head-and-neck and visceral tumours.

    Alongside, THERYQ is developing a second system, FLASHDEEP, that will be able to treat solid tumours anywhere in the body. “We are now working in collaboration with CHUV and CERN to develop a FLASH system capable of targeting any tumour at any depth,” explained the company’s CEO Ludovic Le Meunier.

    The FLASHDEEP accelerator is based on compact linear collider (CLIC) technology developed by CERN, which generates very high-energy electron (VHEE) beams with energies of 100 to 200 MeV and enables treatment of tumours at depths of up to 20 cm. The VHEE beams are distributed into three beamlines, which converge towards the patient isocentre to provide conformal treatment.

    The system will deliver doses of 2 to 30 Gy and employ real-time pulse-to-pulse control to enable treatment times of less than 100 ms. Since the system will not involve a gantry, THERYQ intends to use an upright patient positioning system (from Leo Cancer Care) for treatments.

    The system is being developed at CHUV, with full installation expected in mid-2025. After that, the company hopes to install a second machine in Institut Gustave Roussy in Paris, followed in 2027 by a system at IUCT, the Cancer University Institute of Toulouse, and another at a US site. “If we do what we say we’re going to do, I think there will be a big shift,” Le Meunier told Physics World.

    Adapting an established platform

    Elsewhere on the ESTRO show floor, US electron therapy specialist IntraOp presented the application of its Mobetron electron-beam linear accelerator for preclinical and investigational FLASH radiotherapy studies. The company points out that Mobetron is the first system to provide ultrahigh dose rate electron therapy for FLASH research using an established clinical radiotherapy platform, and the first to be used in human clinical trials of FLASH radiotherapy with electrons, with two clinical protocols (IMPulse and LANCE) already approved.

    Philip von Voigts-Rhetz

    The IntraOp Mobetron is a mobile, self-shielded machine designed to deliver intra-operative radiotherapy (IORT) to cancer patients during surgery. Now in established clinical use at dozens of cancer centres, clinics and teaching hospitals around the world, the system delivers beam energies of 6, 9 and 12 MeV to treat at depths of up to 4 cm.

    Philip von Voigts-Rhetz, clinical application specialist at IntraOp, explained that to move to FLASH irradiation, the company used the identical clinical platform but modified key beam parameters.

    “It took some time to adjust key parameters and get a stable and reproducible beam, with large field sizes that could be used for patient irradiation,” he said. “Then three years ago we delivered the first FLASH system and we now have 10 installations at leading cancer centres and universities worldwide.”

    One major obstacle in the use of FLASH radiotherapy is the difficulty in performing accurate dosimetry at ultrahigh dose rates, where conventional ion chambers exhibit beam perturbation and saturation effects. To overcome this, the Mobetron incorporates two beam-current transformers (BCTs) to provide real-time monitoring of the output and energy of pulsed electron FLASH beams.

    A study performed at the MD Anderson Cancer Center demonstrated that BCTs can accurately monitor the FLASH beams, quantify accelerator performance and capture essential physical beam parameters on a pulse-by-pulse basis. In the future, IntraOp proposes that BCTs could also be used for active control of electron FLASH beams.
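
    To make the pulse-by-pulse bookkeeping concrete, here is a minimal sketch – not IntraOp’s software, and with a purely illustrative calibration factor – of how per-pulse charge readings from a BCT might be converted into dose and accumulated towards a prescription:

        # Illustrative pulse-by-pulse FLASH dose accounting from beam-current
        # transformer (BCT) readings. Names and numbers are hypothetical.
        def pulses_to_dose(pulse_charges_nC, cal_gy_per_nC, prescription_gy):
            """Accumulate dose pulse by pulse; report when the prescription is met."""
            delivered = 0.0
            for i, charge in enumerate(pulse_charges_nC, start=1):
                delivered += charge * cal_gy_per_nC   # dose contributed by this pulse
                if delivered >= prescription_gy:
                    return i, delivered               # pulses needed, total dose
            return len(pulse_charges_nC), delivered

        # Example: 10 Gy prescription, ~0.9 Gy per pulse from an assumed calibration
        pulses = [45.0] * 20                          # nC per pulse, taken as constant
        n, dose = pulses_to_dose(pulses, cal_gy_per_nC=0.02, prescription_gy=10.0)
        print(f"prescription reached after {n} pulses ({dose:.1f} Gy)")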

    Von Voigts-Rhetz noted that the modified Mobetron can operate at both conventional and ultrahigh dose rates. “One system can be used for clinical IORT in the day for patient treatments, and then turn into a FLASH system for research, ensuring the fastest path to clinical implementation of FLASH radiotherapy,” he told Physics World.

    The post FLASH radiotherapy creates a stir at ESTRO trade show appeared first on Physics World.

    ]]>
    Analysis The ESTRO 2023 congress saw exhibitors highlight a range of electron beam-based FLASH radiotherapy systems https://physicsworld.com/wp-content/uploads/2023/05/FLASH-featured.jpg newsletter
    A co-ordinated measurement system is one of humanity’s greatest achievements – we must stick with it https://physicsworld.com/a/a-co-ordinated-measurement-system-is-one-of-humanitys-greatest-achievements-we-must-stick-with-it/ Fri, 19 May 2023 14:00:25 +0000 https://physicsworld.com/?p=107048 Steven Judge argues that it would be wrong for countries to revert to imperial measures

    The post A co-ordinated measurement system is one of humanity’s greatest achievements – we must stick with it appeared first on Physics World.

    ]]>
    When I was at primary school in the 1960s, measurements were performed using traditional, or imperial, units. The ounce, pound, stone, inch, foot and so on were combined in multiples of 3, 4, 12, 14, 16 and…1760 (one mile being 1760 yards) and often had strange definitions. The furlong (220 yards), for example, originated as the length that an ox-drawn plough could cover.

    From 1974 onwards, a welcome change occurred when it became compulsory for UK schools to teach metric units – a measurement system that made sense, based on multiples of 10. The bizarre vocabulary of different units was replaced by prefixes that were the same whether you were measuring length, time, mass or radioactivity. It is a system that is simple and works from the very small to the very large.

    For this concept, we can thank the Northamptonshire-born clergyman and natural philosopher Reverend John Wilkins (1614–1672). One of the greatest thinkers of his generation, in 1668 he proposed a system of measurement based on a universal standard of length and a decimal scheme. It was one of the first concrete proposals for the metric system of measurement.

    His ideas were not adopted straightaway, but Wilkins knew that, in the words of Ecclesiastes, there is a time for every matter under heaven. For the metric system, that time was the French Revolution. Measurements in France in the 18th century had been a mess, with hundreds of local systems leading to countless frauds. Fair weights and measures were one demand of the revolutionaries.

    Indeed, in 1790 Talleyrand, the Bishop of Autun, contacted the British Parliament to propose adopting a unified system of measurements. This putative Anglo-French co-operation was rejected by the British Parliament, but the French pressed ahead anyway. The advantages of the new metric system were clear, and on 20 May 1875 an international treaty – the Metre Convention – was signed to establish the metric system of measurements.

    The convention also established the International Bureau of Weights and Measures (BIPM) to co-ordinate the new scheme. One of its first jobs was to construct a standard kilogram – a metal artefact that would serve as the reference point for the world. Countries would hold a copy of the artefact, allowing industry to compare their weights to the copy. Constructing the standard kilogram proved difficult and, in fact, a British engineering firm, Johnson Matthey, was commissioned to help. The first standard kilogram, known as the International Prototype Kilogram, is at the BIPM to this day.

    Yet the metric system – known as the Système International d’Unités or simply “SI” – had to expand to meet the needs of industry. It was found that only seven base units were needed (mass, length, time, electrical current, temperature, luminous intensity and amount of substance); everything else could be expressed in terms of these units. Methods were developed to realize standard units that relied only on the underlying physics.
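
    As a simple illustration of that reduction to seven base units, the short sketch below writes a few familiar derived units as powers of the base units (the examples are mine, chosen only to make the point):

        # A few derived units expressed as powers of the seven SI base units
        # (kg, m, s, A, K, cd, mol). Examples chosen for illustration only.
        BASE = ("kg", "m", "s", "A", "K", "cd", "mol")

        DERIVED = {
            "newton (force)":       {"kg": 1, "m": 1, "s": -2},
            "joule (energy)":       {"kg": 1, "m": 2, "s": -2},
            "watt (power)":         {"kg": 1, "m": 2, "s": -3},
            "volt (potential)":     {"kg": 1, "m": 2, "s": -3, "A": -1},
            "becquerel (activity)": {"s": -1},
        }

        def as_base_units(exponents):
            """Format a dict of exponents as a product of base units."""
            return " ".join(f"{unit}^{exponents[unit]}" for unit in BASE if unit in exponents)

        for name, exponents in DERIVED.items():
            print(f"{name:22s} = {as_base_units(exponents)}")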

    The UK government should realize that the metric system has strong British roots and should applaud the contribution that British scientists have made, rather than perceiving it as something ‘foreign’ 

    Although the kilogram remained stubbornly difficult to replace, it was a British scientist – Bryan Kibble – who helped find a solution. Based at the National Physical Laboratory in Teddington, UK, he developed an ingenious balance that linked a measurement of mass to the force produced by an electrical current in a coil, and hence to the Planck constant. The standard kilogram could retire gracefully and, from 20 May 2019, all measurements were based on constants that describe the natural world. Wilkins’ vision had come to pass.
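
    In outline – my summary of the principle, not a quotation from the article – the Kibble balance works in two modes. In weighing mode the weight of the mass is balanced by the force on a current-carrying coil in a magnetic field, mg = BLI; in moving mode the same coil is moved through the field at velocity v and the induced voltage U = BLv is measured. Eliminating the hard-to-measure geometric factor BL gives

        m g v = U I ,

    and because the voltage and current are realized electrically via the Josephson and quantum Hall effects – whose defining constants involve the Planck constant h – the kilogram can be traced to h rather than to a metal artefact.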

    ‘Foolishness beyond measure’

    I mention all this because tomorrow, 20 May 2023, marks World Metrology Day. It celebrates the anniversary of the signing of the Metre Convention, which the UK signed in 1884, having legalized the use of the metric system several years earlier. The theme this year is metrology to support the global food system. This includes rapid measurement of mass to ensure pre-packaged foods are labelled correctly, determination of the isotopic composition of high-value foods (such as honey) to confirm their origin, and detection of chemical or biological contamination.

    Despite the success of the metric system, there remain some politicians in the UK who – in this post-Brexit age – are actually considering whether it would be better for shops to revert to imperial units. The government even conducted a survey to gauge public opinion on a return to historical weights and measures, which received over 100,000 responses. However, existing legislation already allows shops to use traditional units, so long as metric units are also displayed.

    Of course, there is nothing wrong with using the old units alongside the SI, and there is nothing in current UK regulations to forbid it. The pound is defined to be exactly 0.45359237 kg and an inch is exactly 2.54 cm, so the two systems are joined. My local pub sells beer in pints, I buy petrol in litres, give my height in feet and inches, and I use centimetres or inches when cutting up wood for DIY projects, whatever is most convenient.
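
    Because those factors are exact, converting between the two systems is nothing more than multiplication. A minimal sketch (my own, for illustration):

        # Exact legal definitions: 1 lb = 0.45359237 kg and 1 in = 2.54 cm.
        LB_TO_KG = 0.45359237
        IN_TO_CM = 2.54

        def stones_pounds_to_kg(stones, pounds):
            """Convert a weight in stones and pounds to kilograms (1 stone = 14 lb)."""
            return (stones * 14 + pounds) * LB_TO_KG

        def feet_inches_to_cm(feet, inches):
            """Convert a height in feet and inches to centimetres (1 ft = 12 in)."""
            return (feet * 12 + inches) * IN_TO_CM

        print(f"11 st 6 lb = {stones_pounds_to_kg(11, 6):.1f} kg")   # 72.6 kg
        print(f"5 ft 10 in = {feet_inches_to_cm(5, 10):.1f} cm")     # 177.8 cm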

    Having the two systems is a compromise that has worked well for decades. Apart from the time and money wasted on the survey, the UK government is simply stirring discontent between those who want to hark back to the “good old days” and a younger generation who want to keep with the times.

    The UK government should realize that the metric system has strong British roots and should applaud the contribution that British scientists have made, rather than perceiving it as something “foreign”. It should be celebrating the science of metrology and looking to the opportunities offered by developing innovative instruments based on quantum mechanics and improving productivity by introducing digitization. A harmonized measurement system is one of humanity’s greatest achievements and for any government to promote a return to the old ways is foolishness beyond measure.

    In the words of the French mathematician and philosopher Marquis de Condorcet in 1791, the metric system is “for all people, for all time”.

    The post A co-ordinated measurement system is one of humanity’s greatest achievements – we must stick with it appeared first on Physics World.

    ]]>
    Opinion and reviews Steven Judge argues that it would be wrong for countries to revert to imperial measures https://physicsworld.com/wp-content/uploads/2023/05/2023-05-Forum-fruit-and-veg-stall-Borough-Market-London-919291084-iStock_Paolo-Paradiso-editorial-use-only.jpg newsletter
    Optical frequency combs in space: ready for take-off https://physicsworld.com/a/optical-frequency-combs-in-space-ready-for-take-off/ Fri, 19 May 2023 12:53:28 +0000 https://physicsworld.com/?p=107522 Available to watch now, Menlo Systems looks at bringing ultimate precision to space by qualifying the optical frequency comb for in-orbit operation

    The post Optical frequency combs in space: ready for take-off appeared first on Physics World.

    ]]>
    Optical frequency comb technology, with its capability to directly measure and convert optical frequencies, has revolutionized the field of high-precision metrology. While it is enabling novel technologies such as optical clocks and quantum applications, the demand is growing to exploit these technologies in space missions. To meet the requirements set by the harsh space environment, Menlo Systems has developed the Space Comb, which combines low size, weight and power (SWaP) with high robustness against shock and vibration, radiation-tolerant components, standardized interfaces and autonomous operation.

    In this webinar, we will illuminate the technology of optical frequency combs and the path to a space-qualified product, including precursor missions on sounding rockets. We will walk you through the characterization of the system and highlight the crucial aspects of its development for space-readiness. The designated application of the Space Comb in the COMPASSO project of the German Aerospace Center (DLR) will be presented, with the aim of enhancing the precision performance of global navigation satellite systems (GNSS). Finally, we will give an overview of the potential landscape of future space applications for optical frequency combs.

    Frederik Böhle joined Menlo Systems in 2017, working as a project manager on the development of space-qualified optical frequency combs.

    Matthias Lezius joined Menlo Systems in November 2010, and was senior scientist from 2010–2019. Since 2019 he has been group manager development and custom projects (solutions for space).

    Benjamin Sprenger joined Menlo Systems in June 2015, and was sales engineer for frequency combs and optical reference systems from 2015–2019. Since 2020 he has been regional manager in Berlin and quantum technology and metrology expert.

    The post Optical frequency combs in space: ready for take-off appeared first on Physics World.

    ]]>
    Webinar Available to watch now, Menlo Systems looks at bringing ultimate precision to space by qualifying the optical frequency comb for in-orbit operation https://physicsworld.com/wp-content/uploads/2023/05/2023-07-06-webinar-image.jpg
    Multilegged robots crawl over rough terrain, building houses with used diapers https://physicsworld.com/a/multilegged-robots-crawl-over-rough-terrain-building-houses-with-used-diapers/ Fri, 19 May 2023 11:25:49 +0000 https://physicsworld.com/?p=107932 Excerpts from the Red Folder

    The post Multilegged robots crawl over rough terrain, building houses with used diapers appeared first on Physics World.

    ]]>

    First up in this week’s Red Folder are robots that have been inspired by the humble centipede and created by researchers at Georgia Tech in the US. You might be wondering why they have gone to the bother of creating a robot with lots of legs, if many animals are happy having just four – or even two in the case of us humans.

    To explore the benefits of many legs, the team developed a new model of multilegged locomotion, which suggested that multilegged robots would be very good at travelling over uneven surfaces. This was predicted to occur even if some of the legs were “redundant” in the sense that they did not have any independent sensing or control capabilities.

    They confirmed this by building robots with redundant legs and you can watch one in action in the above video.

    Georgia Tech’s Baxi Chong explains, “With an advanced bipedal robot, many sensors are typically required to control it in real time. But in applications such as search and rescue, exploring Mars, or even micro robots, there is a need to drive a robot with limited sensing. There are many reasons for such a sensor-free initiative. The sensors can be expensive and fragile, or the environments can change so fast that it doesn’t allow enough sensor-controller response time.”

    Their research is described in Science.

    Nappy construction

    According to a study led by Siswanti Zuraida at Japan’s University of Kitakyushu, up to 8% of the sand in the concrete and mortar used to build a house could be replaced with used disposable diapers that have been cleaned and shredded.

    They made their concrete and mortar samples by combining washed, dried and shredded disposable diaper waste with cement, sand, gravel and water. The samples were then cured for one month. The samples contained different proportions of diaper waste, which allowed the researchers to find an optimal blend of materials.

    They measured the pressure that the samples could withstand without breaking. This allowed them to calculate the maximum proportion of sand that could be replaced with disposable diapers in a range of building materials that would be needed to build a house.

    Today, most disposable diapers are put in landfill or incinerated so this seems like a sustainable use of the material. But as the parent of three (now grown) children, I certainly wouldn’t want to be responsible for washing the diapers.

    The team reports its results in Scientific Reports.

    The post Multilegged robots crawl over rough terrain, building houses with used diapers appeared first on Physics World.

    ]]>
    Blog Excerpts from the Red Folder https://physicsworld.com/wp-content/uploads/2023/05/Centipede-robot.jpg
    Commercializing quantum technologies: the risks and opportunities https://physicsworld.com/a/commercializing-quantum-technologies-the-risks-and-opportunities/ Fri, 19 May 2023 10:30:57 +0000 https://physicsworld.com/?p=107929 Conference in the City of London reveals a growing interest in all things quantum among businesses

    The post Commercializing quantum technologies: the risks and opportunities appeared first on Physics World.

    ]]>
    This week, the Economist hosted the “Commercialising Quantum Global” conference in the UK and I was very pleased to attend in person on Wednesday. The meeting was held in the heart of the City of London, one of the world’s great financial centres. This was no coincidence, because this was not a conference primarily about science, or even technology – business was at the centre of most discussions.

    The conference centre was in a part of the City called Houndsditch, which is just outside of what had been London’s medieval wall. I’m probably making too much of the symbolism of this location, but it seemed appropriate for the upstart quantum industry to be camped just outside of a citadel of commerce, plotting its entry.

    After the first few talks at the conference, it became clear to me that most people there believed that quantum computing and other quantum technologies could bring great business opportunities as well as threats. As I scanned the speaker list for the day, I decided that one way of getting a broad understanding of how quantum could affect business was to attend two talks by people in the insurance industry.

    Optimizing reinsurance

    Those two speakers were Roland Scharrer, who is group chief data and emerging technology officer at AXA, and Andreas Nawroth, who is leading expert for artificial intelligence at Munich Re.

    Scharrer says that AXA started exploring quantum technologies in 2020. Indeed, many of the speakers at the conference said that their companies have been investigating quantum computing for about two to three years. And like many other companies, one of AXA’s main interests in quantum computing is using it for optimization.

    For Scharrer a primary interest is using quantum algorithms to minimize the risk, and maximize the profit, associated with AXA’s use of reinsurance. Reinsurance is a product that one insurance company buys from another insurance company to cover losses in certain circumstances. This allows an insurance company to share risk with others and it is often used to cover so-called “black swan” events. These are very rare events that are extremely difficult to predict and can be very costly for insurers.

    Heuristic approach

    Striking a balance between using reinsurance and insuring risk internally is a classic optimization problem that is very important for an insurance company to get right. Getting things wrong, even by a tiny bit, can be very costly. Scharrer explains that optimization is currently done using a heuristic approach that relies on human expertise.

    While reinsurance optimization could, in principle, be done more thoroughly on a conventional computer, Scharrer says that the calculations would take decades. And that is where a quantum computer could come in handy – because some quantum computers are predicted to be very good at solving certain optimization problems that could be relevant to reinsurance. But like a lot of the technology being discussed at the conference, such a quantum computer does not yet exist.
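
    As a toy illustration of the retention-versus-reinsurance trade-off Scharrer describes – entirely schematic, and in no way AXA’s model – one can scan candidate retention levels and pick the cheapest, balancing the loaded premium for the ceded layer against the expected retained losses and the capital that must be held against rare large ones:

        # Toy reinsurance optimization (illustrative only). The insurer keeps losses
        # up to a retention R and buys reinsurance for the excess; a low R means a
        # high loaded premium, a high R means more capital held against rare losses.
        import random

        random.seed(1)
        LOSSES = sorted(random.expovariate(1 / 20.0) for _ in range(100_000))  # simulated annual losses

        def total_cost(retention, load=1.3, capital_charge=0.06):
            retained = [min(loss, retention) for loss in LOSSES]
            ceded_mean = sum(max(loss - retention, 0.0) for loss in LOSSES) / len(LOSSES)
            premium = load * ceded_mean                    # loaded reinsurance premium
            var99 = retained[int(0.99 * len(retained))]    # 99th-percentile retained loss
            return premium + sum(retained) / len(retained) + capital_charge * var99

        best = min(range(0, 101), key=total_cost)
        print(f"cheapest retention ~ {best}, expected annual cost ~ {total_cost(best):.1f}")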

    In his talk, Munich Re’s Nawroth talked about how insurers could use quantum computers to do simulations that could help them better understand a wide range of phenomena that affect risk. These include climate change, green technologies, financial markets, pandemics, cyber security and so on.

    Insuring for quantum effects

    But for me, the most interesting thing that he spoke about was the need for insurers to understand the risks associated with the peculiar nature of quantum computing itself. This is because their customers will want to insure against these risks. One of these risks is associated with the no-cloning theorem of quantum mechanics, which states that it is impossible to create an exact copy of a quantum state. This, says Nawroth, would make it difficult for a quantum information system to recover after a cyber attack.
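
    For readers who want the underlying statement, the no-cloning argument can be summarized in a couple of lines (standard textbook material, not something specific to Nawroth’s talk). If a single unitary U could copy every state,

        U \big( |\psi\rangle \otimes |0\rangle \big) = |\psi\rangle \otimes |\psi\rangle \quad \text{for all } |\psi\rangle ,

    then taking the inner product of this relation for two different states |\psi\rangle and |\phi\rangle gives \langle\phi|\psi\rangle = \langle\phi|\psi\rangle^{2}, which forces \langle\phi|\psi\rangle to be 0 or 1 – so no device can copy arbitrary, unknown quantum states, and lost quantum data cannot simply be restored from a backup.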

    Another risk is that quantum algorithms are currently poorly understood, so it is difficult to insure against risks associated with their use. Finally, Nawroth pointed out that a move to quantum computing would mark a shift from deterministic to probabilistic algorithms – which again pose new challenges when it comes to insurance.

    The simple fact that I was able to attend two talks on insurance and quantum computing makes it clear that discussions around quantum technology have “moved beyond physics”. Indeed, I would say that this was an overriding theme of the conference. While I understand why progressing from basic science is a milestone in the commercialization of any technology, I’m not convinced that quantum computing is quite there yet.

    Artificial comparison

    For example, several speakers compared quantum computing to artificial intelligence (AI) in terms of its potential disruptive effects on business and on society. While it’s tempting to draw parallels between the two, I think it’s important to keep in mind that AI is a fully fledged technology that is already seeing widespread commercial use. And, in the case of ChatGPT, AI can be accessed from any smartphone. In contrast, quantum computing is a much more nascent technology that is only now seeing a few green shoots of commercial application.

    Jay Gambetta, who leads IBM’s quantum computing initiative, is one who embraced this idea of moving on. He said that we are beyond the “quantum is cool” phase and have moved into the “utility” phase in the development of quantum computers. IBM’s 2023 generation of quantum processors will have 100–1000 quantum bits or qubits and the company intends to scale this up to 100,000 qubits in the next decade – creating machines that could address a range of practical computing problems. While much of this effort will be focused on engineering, I’m sure physicists will play important roles in making this happen – so perhaps it is a bit early to say that the industry has moved away from physics and into a truly commercial world.

    The post Commercializing quantum technologies: the risks and opportunities appeared first on Physics World.

    ]]>
    Blog Conference in the City of London reveals a growing interest in all things quantum among businesses https://physicsworld.com/wp-content/uploads/2023/05/Quantum-insurance.jpg newsletter
    Ultrafast imaging sheds light on the earliest stages of vision https://physicsworld.com/a/ultrafast-imaging-sheds-light-on-the-earliest-stages-of-vision/ Fri, 19 May 2023 08:45:37 +0000 https://physicsworld.com/?p=107883 Researchers use ultrafast time-resolved crystallography to determine the molecular events that first occur in the retina of the eye once a photon is absorbed

    The post Ultrafast imaging sheds light on the earliest stages of vision appeared first on Physics World.

    ]]>
    Rhodopsin, the protein that enables humans and other vertebrates to sense light, is a light-sensitive member of the family of G protein-coupled receptors (GPCRs) and sits at the very start of the signal transduction pathway for vision. Once it absorbs a photon, an almost immediate (within 200 fs) conformational change occurs in the retinal, a chromophore located inside rhodopsin. This early structural change initiates the cellular signal transduction processes that underpin the earliest stages of vision. However, details of the real-time intramolecular events through which the photoactivated retinal triggers the activation events inside rhodopsin remain unclear.

    To fill this knowledge gap, researchers at the Paul Scherrer Institute (PSI) in Switzerland used ultrafast time-resolved crystallography to study conformational changes in rhodopsin after it absorbs a photon. Their findings, reported in Nature, explain how the retinal only absorbs part of the photon energy, storing the remaining energy to fuel the conformational changes associated with the formation of the G protein-binding signalling state.

    To record and analyse the activation mechanism of the retinal chromophore at the atomic scale, and with ultrafast (picosecond) temporal resolution, the team used time-resolved serial femtosecond crystallography (TR-SFX) at room temperature.

    For their experiments, the researchers first grew high-quality rhodopsin microcrystals, and then used TR-SFX to generate a series of diffraction-pattern images of the crystals. More precisely, they used an optical laser pulse to photoactivate the protein molecules in the crystal and then – after a specified time delay – probed the structure with an X-ray pulse from an X-ray free electron laser (XFEL). Recording with the XFEL, effectively a very high-speed camera, the researchers collected serial frames from tens of thousands of randomly oriented crystals.

    PSI researcher Valérie Panneels

    The analyses performed by the team included modelling the electron-density changes in the rhodopsin structure, together with structural refinement against the crystallographic observations. This revealed that the light-induced isomerization (in which a molecule switches between two distinct conformations) leaves a bend in the retinal chromophore that persists for 1 ps, with the first metastable intermediate of rhodopsin appearing just 200 fs after photoactivation. Then, 100 ps later, the rhodopsin structure adopts a more relaxed conformation. The results therefore suggest that the protein uses active (or functional) zones of the GPCR structural pathways for energy dissipation.

    One highlight of this new study is that the room-temperature structure reveals electron density for all previously described functional and structural water molecules, including those that have a role later in the photoactivation process. The researchers note that previous structures resolved under cryogenic conditions failed to achieve this. Consequently, the new high-resolution SFX structure of rhodopsin showcases the entirety of the water-mediated hydrogen bond network within the protein.

    The investigation sheds light on the earliest stages of vision, revealing that ultrafast energy dissipation in rhodopsin occurs through conserved residues of GPCR activation pathways, paving the way to study the early activation events in this largest family of GPCRs (class A).


    The post Ultrafast imaging sheds light on the earliest stages of vision appeared first on Physics World.

    ]]>
    Research update Researchers use ultrafast time-resolved crystallography to determine the molecular events that first occur in the retina of the eye once a photon is absorbed https://physicsworld.com/wp-content/uploads/2023/05/psi_Gebhard-Schertler.jpg newsletter1
    Cerca Magnetics bags qBIG Prize for quantum innovation https://physicsworld.com/a/cerca-magnetics-bags-qbig-prize-for-quantum-innovation/ Thu, 18 May 2023 15:51:16 +0000 https://physicsworld.com/?p=107925 Aquark Technologies and Quantopticon are runners up for inaugural award from IOP

    The post Cerca Magnetics bags qBIG Prize for quantum innovation appeared first on Physics World.

    ]]>
    Earlier this week I had the pleasure of attending the first day of the Economist’s Commercialising Quantum Global conference in London. It was a thoroughly enjoyable experience to be out and about again, rubbing shoulders with people interested in all things quantum.

    The conference was also a milestone for my colleagues at the Institute of Physics (IOP), which publishes Physics World, because it was there that they announced the winner of the first IOP qBIG Prize for quantum innovation – which is Nottingham-based Cerca Magnetics.

    The inaugural winner bagged the £10,000 prize for its development of the first wearable magnetoencephalography scanner, which measures human brain function in health and disease. The prize recognizes small and medium-sized companies that are taking quantum technology products or solutions to market. It is sponsored by Quantum Exponential, which is the UK’s first enterprise venture capital fund focused on quantum technology. The prize also includes access to Quantum Exponential’s business network as well as support from the IOP’s quantum-industry networks and access to its Accelerator workspace in central London.

    The IOP also announced two runners-up – Aquark Technologies and Quantopticon – which will likewise benefit from greater support from the IOP.

    There is much more about Cerca Magnetics in this Physics World feature article by medical-imaging researcher Hannah Coleman and Matthew Brookes, who is chairman of the company.

    The post Cerca Magnetics bags qBIG Prize for quantum innovation appeared first on Physics World.

    ]]>
    Blog Aquark Technologies and Quantopticon are runners up for inaugural award from IOP https://physicsworld.com/wp-content/uploads/2023/05/Cerca-Meg.jpg
    Charting the evolution of scientific measurement over the past century and looking to the future https://physicsworld.com/a/charting-the-evolution-of-scientific-measurement-over-the-past-century-and-looking-to-the-future/ Thu, 18 May 2023 15:30:04 +0000 https://physicsworld.com/?p=107920 We celebrate 100 years of Measurement Science and Technology

    The post Charting the evolution of scientific measurement over the past century and looking to the future appeared first on Physics World.

    ]]>
    In this episode of the Physics World Weekly podcast we celebrate the 100th anniversary of Measurement Science and Technology, which is the world’s first scientific instrumentation and measurement journal. I am joined by the journal’s editor-in-chief Andrew Yacoot to chat about a century of metrology and look forward to the future of the discipline.

    Yacoot is principal scientist at the UK’s National Physical Laboratory, where he leads the lab’s dimensional nanotechnology programme. He also talks about his research efforts and about recent changes to the definitions of SI units.

    We are running this podcast this week because Saturday 20 May is World Metrology Day, marking the 148th anniversary of the Metre Convention, which began the international standardization of the metre and the kilogram.

    The post Charting the evolution of scientific measurement over the past century and looking to the future appeared first on Physics World.

    ]]>
    Podcast We celebrate 100 years of Measurement Science and Technology https://physicsworld.com/wp-content/uploads/2023/05/Andrew-Yacoot-list.jpg newsletter
    Concentrated solar reactor generates unprecedented amounts of hydrogen https://physicsworld.com/a/concentrated-solar-reactor-generates-unprecedented-amounts-of-hydrogen/ Thu, 18 May 2023 09:30:16 +0000 https://physicsworld.com/?p=107881 Photoelectrochemical device also produces usable heat and oxygen and could be commercialized in the near future

    The post Concentrated solar reactor generates unprecedented amounts of hydrogen appeared first on Physics World.

    ]]>
    A new solar-radiation-concentrating device produces “green” hydrogen with an output power of more than 2 kilowatts while maintaining efficiencies above 20%. The pilot-scale device, which is already operational under real sunlight conditions, also produces usable heat and oxygen, and its developers at the École polytechnique fédérale de Lausanne (EPFL) in Switzerland say it could be commercialized in the near future.

    The new system sits on a concrete foundation on the EPFL campus and consists of a parabolic dish seven metres in diameter. This dish collects sunlight over a total area of 38.5 m², concentrates it by a factor of about 1000 and directs it onto a reactor that comprises both photovoltaic and electrolysis components. Energy from the concentrated sunlight generates electron-hole pairs in the photovoltaic material, which the system then separates and transports to the integrated electrolysis system. Here, the energy is used to “split” water that is pumped through the system at an optimal rate, producing both oxygen and hydrogen.

    Putting it together at scale

    Each of these processes has, of course, been demonstrated before. Indeed, the new EPFL system, which is described in Nature Energy, builds on previous research from 2019, when the EPFL team demonstrated the same concept at laboratory scale using a high-flux solar simulator. However, the new reactor’s solar-to-hydrogen efficiency and hydrogen production rate of around 0.5 kg per day are unprecedented in large-scale devices. The reactor also produces usable heat at a temperature of 70°C.
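
    A rough back-of-envelope check (my own arithmetic, assuming a lower heating value of about 120 MJ/kg for hydrogen and roughly eight hours of good sun per day) shows that a yield of 0.5 kg per day is consistent with the kilowatt-scale output power quoted above:

        # Back-of-envelope: chemical energy in the daily hydrogen yield and the
        # average power it represents over the sunlit hours. The heating value and
        # the eight-hour figure are assumptions made for illustration.
        H2_LHV_MJ_PER_KG = 120.0
        daily_yield_kg = 0.5
        sun_hours = 8.0

        energy_MJ = daily_yield_kg * H2_LHV_MJ_PER_KG   # ~60 MJ of hydrogen per day
        energy_kWh = energy_MJ / 3.6                     # ~16.7 kWh per day
        avg_power_kW = energy_kWh / sun_hours            # ~2.1 kW while the sun shines

        print(f"{energy_MJ:.0f} MJ/day = {energy_kWh:.1f} kWh/day -> ~{avg_power_kW:.1f} kW")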

    The versatility of the new system forms a big part of its commercial appeal, says Sophia Haussener, who leads the EPFL’s Laboratory of Renewable Energy Science and Engineering (LRESE). “This co-generation system could be used in industrial applications such as metal processing and fertilizer manufacturing,” Haussener tells Physics World. “It could also be used to produce oxygen for use in hospitals and hydrogen for fuel cells in electric vehicles, as well as heat in residential settings for heating water. The hydrogen produced could also be converted to electricity after being stored between days or even inter-seasonally.”

    Haussener and colleagues are now busy scaling up their system further in an environment where individual reactors are deployed in a modular fashion, like trees in an artificial garden. A LRESE spin-off, SoHHytec SA, is deploying and commercializing the technology, and is working with a Switzerland-based metal production facility to build a demonstration plant on the multi-100-kilowatt scale.

    Another future direction for the team could be to develop a similar system to convert CO2 into CO, ethylene or other products plus oxygen. “This would allow us to valorize CO2 and produce other precursors for industrial processes,” Haussener explains. “For example, ethylene could be used in green plastic production, and CO together with hydrogen for liquid fuel production.”

    The post Concentrated solar reactor generates unprecedented amounts of hydrogen appeared first on Physics World.

    ]]>
    Research update Photoelectrochemical device also produces usable heat and oxygen and could be commercialized in the near future https://physicsworld.com/wp-content/uploads/2023/05/Low-Res_Reactor-029_LRESE_crop.jpg
    Mechanical nanosurgery attacks aggressive brain cancer https://physicsworld.com/a/mechanical-nanosurgery-attacks-aggressive-brain-cancer/ Wed, 17 May 2023 15:00:43 +0000 https://physicsworld.com/?p=107831 New magnetic nanomaterial-based technique could treat tumours that resist existing therapies

    The post Mechanical nanosurgery attacks aggressive brain cancer appeared first on Physics World.

    ]]>
    A new nanosurgery technique could help treat glioblastoma, one of the most common and aggressive of all primary brain cancers. The technique, which relies on injecting nanotubes containing iron particles into a tumour site, could be used against cancers that are resistant to existing therapies and those located at vital and currently inoperable regions of the central nervous system.

    Glioblastoma is among the most dangerous types of brain cancer. Although it is currently uncommon, affecting between 0.59 and 5 people per 100 000, its incidence is increasing around the world.

    Standard techniques for treating glioblastoma are based on removing the tumour surgically, followed by radiotherapy and chemotherapy using drugs such as temozolomide. The problem is that glioblastoma develops resistance to this and other therapeutics that target the tumour’s biomolecule signalling pathways, leading to treatment failure, relapse and – all too often – death for the patient.

    A new “Trojan horse” approach

    Researchers at the University of Toronto and The Hospital for Sick Children (SickKids) recently made an intriguing discovery: glioblastoma cells respond to external mechanical forces. Led by Yu Sun and Xi Huang, the researchers have now used this insight to develop a new “Trojan horse” approach for treating glioblastoma using magnetic carbon nanotubes (mCNTs). These nanotubes are rolled-up sheets of carbon filled with iron nanoparticles that can be magnetized by applying an external magnetic field.

    Sun, Huang and colleagues coated the mCNTs with an antibody that recognizes a specific protein (CD44) on glioblastoma tumour cells. When they inject these coated mCNTs into glioblastoma tumours in mice, the nanostructures “seek out” these proteins and attach to the cells. At this point, the researchers apply a rotating magnetic field that precisely targets the tumour region. This magnetic field mobilizes mCNTs to damage the internal structures of glioblastoma cells and destroy them.
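
    The mechanical effect being exploited here is the familiar torque on a magnetic moment in a field (a general relation, not a detail taken from the paper): a nanotube carrying a moment m in a field B experiences

        \boldsymbol{\tau} = \mathbf{m} \times \mathbf{B}, \qquad |\boldsymbol{\tau}| = mB\sin\theta ,

    so continuously rotating the field keeps the angle between moment and field – and hence the torque transmitted to the surrounding cellular structures – from relaxing to zero.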

    “Our nanomaterials function as swarms of ‘nano-scalpels’ to physically treat tumours by applying mechanical torque and force to the structures of cancer cells,” says study lead author Xian Wang. “These nano-scalpels are precisely controlled to mobilize through the application of a tumour-targeting rotating magnetic field.”

    This “mechanical nanosurgery” technique, as the researchers call it, is completely different from conventional approaches. Because it uses brute mechanical force to disrupt tumour cellular structures rather than targeting specific bio-signalling pathways, it could help overcome therapy resistance of this biologically plastic disease, the researchers write in Science Advances.

    According to the team, the technique could be adapted for treating brain tumours not usually accessible to resection. “Such tumours not only include primary glioblastoma,” explains Wang, “but also recurrent glioblastoma, multifocal brain tumours, and tumours situated at vital and inoperable central nervous regions – for example, diffuse intrinsic pontine glioma (DIPG) in the brainstem.”

    In the present work, the researchers employed mCNTs with iron oxide particles inside the tubes. Their next aim is to tune the percentage of iron in the nanotubes and optimize their protocol to improve treatment efficacy. “Another advantage of mechanically mobilizing mCNTs is that besides physically disrupting cellular structures, they can modulate specific biochemical pathways, based on which we are developing combination therapy to tackle untreatable brain tumours,” Wang concludes.

    The post Mechanical nanosurgery attacks aggressive brain cancer appeared first on Physics World.

    ]]>
    Research update New magnetic nanomaterial-based technique could treat tumours that resist existing therapies https://physicsworld.com/wp-content/uploads/2023/05/mechanical-nanosurgery.jpg newsletter1
    Radiotherapy innovation on show at ESTRO https://physicsworld.com/a/radiotherapy-innovation-on-show-at-estro/ Wed, 17 May 2023 11:10:33 +0000 https://physicsworld.com/?p=107874 The ESTRO 2023 congress saw over 8000 delegates head to Vienna to check out the latest product developments and research breakthroughs

    The post Radiotherapy innovation on show at ESTRO appeared first on Physics World.

    ]]>
    ESTRO 2023, the annual meeting of the European Society for Radiotherapy and Oncology, saw over 8000 delegates head to Vienna last week. And judging by the crowds on the exhibit floor, all were keen to check out the latest product developments, hear research updates at the exhibitors’ booths, or just join the long queue for GE Healthcare’s ice cream. Here are just a few of the products that caught my eye at this year’s trade show.

    Heating up treatments

    Hyperthermia, heating tumours to around 41–43°C, can enhance the effects of both chemo- and radiotherapy. Italian manufacturer Med-Logix has developed a dedicated deep hyperthermia system – the ALBA 4D – that uses a multibeam phased array of four waveguide antennas to precisely focus radiofrequency fields onto the tumour to raise its temperature.

    The ALBA 4D can focus energy onto targets at any depth and location in the pelvis, abdomen and extremities. The system, which includes a robotic gantry for precise patient positioning, automatically tracks the temperature and location of the focal zone, heating the target in 5–10 min while avoiding overheating of healthy tissues.

    Hyperthermia impacts radiotherapy via three mechanisms: inhibition of DNA damage repair; reoxygenation via increased tissue perfusion; and direct cell killing. These effects can be used to create a higher equivalent radiation dose, or alternatively, to deliver a lower dose while maintaining the same tumour-killing effect.

    For use with radiotherapy, Med-Logix’s Sara Baghaei explained, the hyperthermia should be delivered within one hour of the radiation treatment. The ALBA system can also be employed to enhance chemotherapy, where it can be used simultaneously with drug delivery.

    “The technology allows users to perform fast hyperthermia with high temperatures,” said Baghaei. “It’s like we are giving a higher dose of radiation without any extra toxicity.”

    MRI coils made for radiotherapy

    With researchers investigating the feasibility of MRI-only radiotherapy planning, GE Healthcare highlighted its AIR Coils – the first MRI coils designed specifically for radiation oncology patient scans. Available for brain, head-and-neck or body imaging, the AIR Coils are light, flexible and comfortable for the patient.

    Michael Mian demonstrates GE's AIR Coils

    According to GE HealthCare’s global product manager Michael Mian, one big advantage of the AIR Coils is that they do not require supports, which are usually placed between rigid MRI coils and the patient to avoid anatomy distortion during imaging.

    “This opens up space for patient immobilization devices,” Mian explained. It also means that the coils can be placed closer to the patient, thereby improving the quality of the image. “For the first time, you can use the coil suite to deliver diagnostic image quality with the patient in the immobilization device,” he said.

    The AIR Coils are also easy to use, an important factor when bringing together radiology and radiation oncology departments that may be used to different patient set-up procedures. “The coil design is so simple, you don’t require extensive training to learn how to use it,” said Mian.

    Enhancing target visibility

    Danish medical device company Nanovi showcased BioXmark, its unique liquid fiducial marker. Fiducial markers, used as target reference points to guide radiation therapy and increase treatment accuracy, usually consist of small metal implants. BioXmark is different, consisting of a biocompatible long-chain carbohydrate containing iodine for contrast.

    To create the fiducial marker, a small volume (around 80 µl) of BioXmark is injected into the body, where its viscosity increases from that of a liquid to a consistency similar to chewing gum. This soft marker can then be visualized on X-ray, CT or MRI scans for treatment planning, radiotherapy guidance or follow-up.

    Once in place, the fiducial is highly stable, with studies revealing that it is still visible up to 69 months after implantation. “We have not seen it disappear yet,” said Nanovi’s Dan Calatayud. He noted that the liquid fiducial is easier to implant than metal markers, and that the team had demonstrated “significantly shorter implantation times”.

    BioXmark liquid fiducial marker

    The non-metallic composition of BioXmark leads to a low level of artefacts on X-ray based imaging; it also offers low dose perturbation when used with proton therapy. But BioXmark’s main advantage, said Calatayud, is that it can be used within thin-walled, hollow organs – such as the oesophagus, stomach and bladder, for example – where it is extremely difficult to implant a piece of metal. Currently, fiducial markers are only established in prostate and breast treatments.

    “We want to open up the possibility of taking this precision into new indications,” he explained.

    Surface-guided radiation therapy

    Brainlab unveiled its ExacTrac Dynamic Surface system for radiotherapy patient positioning and monitoring. Surface-guided radiotherapy enables precise tracking of the patient’s surface and breathing motion. This in turn allows gating of the radiation beam so that the tumour is only irradiated when in the planned position.

    The system is based around two 3D cameras housed in a centralized camera unit, which emit a structured blue light pattern onto the patient surface. The cameras project 300,000 points onto the patient; these are then matched to a heat signal obtained by a thermal camera within the same unit.

    “The thermal signal gives an additional fourth dimension for extra precision,” explained Brainlab’s Carsten Sommerfeldt. He noted that with a three-camera positioning system, a moving gantry can often block one of the cameras. The single unit, however, is always in the line-of-sight to the patient and can constantly track the patient’s surface during beam delivery.
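
    To give a flavour of what matching a measured point cloud to a reference surface involves – a generic rigid-registration sketch of my own, not Brainlab’s algorithm – the best-fit rotation and translation between two sets of corresponding points can be found with the Kabsch (SVD) method:

        # Generic rigid registration of corresponding 3D point sets (Kabsch/SVD).
        # Illustrative only; real surface-guidance systems do considerably more.
        import numpy as np

        def rigid_fit(reference, measured):
            """Return rotation R and translation t mapping measured points onto reference."""
            ref_c, mea_c = reference.mean(axis=0), measured.mean(axis=0)
            H = (measured - mea_c).T @ (reference - ref_c)        # 3x3 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
            R = Vt.T @ D @ U.T
            return R, ref_c - R @ mea_c

        # Example: recover a 2-degree rotation and a 5 mm shift applied to a cloud
        rng = np.random.default_rng(0)
        reference = rng.normal(scale=50.0, size=(1000, 3))        # surface points, in mm
        angle = np.deg2rad(2.0)
        R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                           [np.sin(angle),  np.cos(angle), 0.0],
                           [0.0,            0.0,           1.0]])
        measured = reference @ R_true.T + np.array([0.0, 0.0, 5.0])
        R, t = rigid_fit(reference, measured)
        print(np.allclose(measured @ R.T + t, reference))          # True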

    ExacTrac Dynamic Surface is designed to operate with Varian’s Edge and TrueBeam radiotherapy systems, as well as Elekta’s Versa HD.

    Surface guided radiotherapy

    The post Radiotherapy innovation on show at ESTRO appeared first on Physics World.

    ]]>
    Blog The ESTRO 2023 congress saw over 8000 delegates head to Vienna to check out the latest product developments and research breakthroughs https://physicsworld.com/wp-content/uploads/2023/05/ESTRO-ALBA.jpg newsletter
    Climate-change ‘fingerprint’ is identified in the upper atmosphere https://physicsworld.com/a/climate-change-fingerprint-is-identified-in-the-upper-atmosphere/ Wed, 17 May 2023 07:17:18 +0000 https://physicsworld.com/?p=107869 Observations agree with computer simulations of global warming

    The post Climate-change ‘fingerprint’ is identified in the upper atmosphere appeared first on Physics World.

    ]]>
    A new study has confirmed that the predictions of climate-change models agree with observations of the atmosphere made at altitudes up to 50 km above Earth’s surface. The research focusses on an important “fingerprint” of human-driven climate change, whereby the lower part of the atmosphere gets hotter as carbon dioxide levels rise, whereas the upper part gets colder. The findings do not surprise experts in the field, but they provide further confirmation that climate change is caused by humans, as well as detailed information that can be used to refine future models.

    Earth could not support life if atmospheric gases such as carbon dioxide and water vapour did not raise its surface temperature above its black-body value by trapping infrared radiation, behaving much like greenhouse glass. The first concerns that fossil fuel-derived carbon dioxide might enhance this greenhouse effect came in 1896 from Svante Arrhenius (who would later win the 1903 Nobel Prize for Chemistry for unrelated work), but these were largely speculative.

    In 1967, however, Syukuro Manabe at the Geophysical Fluid Dynamics Laboratory in Washington DC – who shared the 2021 Nobel Prize in Physics for his work on modelling global warming – used an early computer model to make concrete predictions about the effect of rising carbon dioxide levels.

    Famous paper

    “In a famous paper that was specifically called out by the Nobel Prize committee, he raised carbon dioxide levels from 150 to 300 to 600 parts per million, and he saw this very curious phenomenon in which the lower atmosphere – the troposphere – warmed, whereas the upper atmosphere – the stratosphere – cooled,” explains Benjamin Santer of Woods Hole Oceanographic Institution and University of California, Los Angeles. This is principally because most of the carbon dioxide remains in the troposphere and – wrapped in a thicker blanket – Earth radiates less heat into the stratosphere.

    Data from weather balloons and, more recently, satellites showed warming in the troposphere and limited cooling in the lower stratosphere (above around 16 km). Most weather balloons burst above 25 km, however, and early satellite datasets diverged markedly. This made it difficult to compare models and observations above 25 km, where Manabe had predicted that cooling would be strongest. Now, however, agreement is better.

    In the new work, Santer and colleagues around the world compared satellite observations from three groups made between 1986 and 2022 with state-of-the-art computational climate models, before using a “vertical fingerprinting” technique developed by Klaus Hasselmann – the founding director of the Max Planck Institute for Meteorology in Germany and one of Manabe’s co-recipients of the 2021 Physics Nobel Prize – to determine whether the data showed clear evidence of anthropogenic carbon dioxide emissions or whether they were consistent with other explanations.

    Stronger signal

    The higher-altitude data assisted the researchers not just because the signal is stronger at higher altitudes, but also because the noise from other sources such as sulfate emissions from coal burning and the dramatic effect of the eruption of Mount Pinatubo in 1991 becomes weaker. Moreover, ozone is a greenhouse gas, which has been drastically depleted in the lower stratosphere by CFCs. These were phased out by the Montreal protocol in 1987 and the ozone layer is now recovering. This can confound measurements in the lower stratosphere. “Above 25 km, though, you’re looking predominantly at human-caused carbon dioxide changes,” says Santer.

    Including the higher-altitude data increased the signal-to-noise ratio by a factor of around five relative to previous studies, providing incontrovertible evidence of anthropogenic climate change. The effects appear smaller than current computer models predict, but even if the average warming trend were subtracted, a statistically significant signal could be detected from the difference between the temperatures of the two atmospheric layers. The research is described in Proceedings of the National Academy of Sciences.
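
    The fingerprinting logic itself is straightforward to sketch – what follows is a schematic of the general pattern-projection method with made-up numbers, not the authors’ code. The fingerprint is the model-predicted profile of temperature trends with altitude (warming below, cooling above); the observed profile and a set of unforced control runs are projected onto it, and the resulting signal is compared with the spread of the noise:

        # Schematic pattern-based fingerprint detection (illustrative numbers only).
        import numpy as np

        rng = np.random.default_rng(42)

        fingerprint = np.array([0.2, 0.15, -0.3, -0.6, -0.9])   # K/decade, troposphere -> upper stratosphere
        fingerprint /= np.linalg.norm(fingerprint)

        observed = np.array([0.18, 0.12, -0.25, -0.55, -0.8])   # hypothetical observed trends
        controls = rng.normal(0.0, 0.05, size=(200, 5))         # unforced internal variability

        signal = observed @ fingerprint
        noise = controls @ fingerprint
        snr = (signal - noise.mean()) / noise.std()
        print(f"signal-to-noise ratio ~ {snr:.1f}")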

    “It’s a very nice paper, but I wasn’t surprised by the results,” says Keith Shine of the University of Reading in the UK. “If you go back 15 or 20 years you could separate models into those that simulated the troposphere well and those that simulated the stratosphere well. More recently, models – particularly the ones used in this study – are more unified. It just reinforces what was already in the literature.” He suggests that future work could focus on separating out the contributions of different greenhouse gases, which are not treated separately in all the models available to the researchers.

    Consilience of evidence

    “In theory, someone could have done this detection and attribution study for at least the past decade, but this is the first study that’s really tried to look at it,” agrees Peter Thorne of Maynooth University in Ireland. “The more pieces of evidence you bring in, the more and more damning the evidence becomes. There have been detection and attribution studies on deep ocean heat content, on humidity, on a whole host of variables. So it’s really the consilience of evidence of who the perpetrator of the crime is. This is just one more indelible fingerprint that leaves you in no doubt whatsoever that humans are responsible.”

    The post Climate-change ‘fingerprint’ is identified in the upper atmosphere appeared first on Physics World.

    ]]>
    Research update Observations agree with computer simulations of global warming https://physicsworld.com/wp-content/uploads/2023/05/blue-sky-5418354-iStock_magann.jpg
    Mystery of bright-white shrimp solved https://physicsworld.com/a/mystery-of-bright-white-shrimp-solved/ Tue, 16 May 2023 13:00:34 +0000 https://physicsworld.com/?p=107854 Optical nanostructure in the antennae, cuticle, tail, and jaw of Pacific cleaner shrimp explains the animal’s striking pigmentation

    The post Mystery of bright-white shrimp solved appeared first on Physics World.

    ]]>
    Researchers in Israel have uncovered the unique optical nanostructure that gives an ocean-going scavenger its brilliant white colouring. Using a range of imaging techniques, a team led by Benjamin Palmer at Ben-Gurion University of the Negev, Israel, showed that spherical particles in Pacific cleaner shrimp scatter incoming light in all directions, while avoiding any overlap in the scattering patterns they produce. The discovery could lead to new bio-inspired white pigments.

    Many organisms have evolved the ability to manipulate light in unique and fascinating ways. Mimicking these mechanisms has led researchers to new designs for several optical devices, including lenses and mirrors. Structures such as butterfly wings and bird feathers have likewise inspired new coatings that produce vivid colours through the light scattered by their nanostructures.

    So far, however, one colour has proven particularly challenging to produce via these structural means – that is, without relying on chemical pigments. “One of the most intriguing problems is the search for alternatives to the inorganic materials that give white paints and food colourings their whitish hues,” explains team member Dan Oron of the Weizmann Institute of Science. “This is because the inorganic material most commonly used in these products – nanocrystalline titania – is suspected as harmful.”

    Overcoming optical crowding

    The nub of the problem is that to generate white hues, photons of all optical wavelengths need to be scattered multiple times, such that they lose their directional information completely. For this to happen, the nanostructures responsible for scattering need to be packed very tightly. Such tight packing, however, creates the problem of “optical crowding”, where scattering patterns overlap – reducing the scattering structure’s overall reflectance.

    Despite these challenges, one animal has proven that the complexities of optical crowding are not insurmountable. Inhabiting coral reefs across the tropics, the Pacific cleaner shrimp is easily recognized by the striking white colouring of its antennae, cuticle, tail, and jaw, which reflect up to 80% of incoming light.

    Advanced imaging and simulation

    In their study, Palmer and colleagues focused on nanostructures in the cleaner shrimp’s chromatophore cells, which are known to be responsible for their brilliant white hue. Using a combination of cryo-electron microscopy and optical imaging, they characterized the structure, organization and optical properties of the spherically-shaped particles that form the scattering layer within the cells. They also used numerical simulations of electromagnetic field propagation to understand the optical response of the scattering medium as a whole.

    The team’s analysis revealed that these particles scatter light in many directions thanks to the unique structure and arrangement of the flat molecules that constitute their building-blocks. “The particles are liquid crystalline arrangements of these planar molecules,” Oron explains. “All these molecules are arranged such that their flat side is perpendicular to the sphere’s radius.”

    Altogether, this structure significantly reduces the amount of material needed to make the shrimp’s antennae and bands appear white. This enables the cleaner shrimp’s chromatophore cells to eliminate the effects of optical crowding, while also scrambling the polarization of incident photons as they scatter from the particles – destroying their directional information. “In a sense, this optical anisotropy makes the ensemble of spheres scatter light as if they were made of a material with a higher refractive index than they really have,” Oron explains.

    Safer white paints and food colourings

    The results are a good example of how evolutionary solutions of organisms like the cleaner shrimp can inspire optimized technologies. Palmer’s team hopes that, by mimicking the shrimp’s mechanism for optical anisotropy, researchers in future studies could design advanced, ultra-white organic nanostructures that are safe for use in products like paint and food colouring.

    “More generally, the findings point at the role that strong optical anisotropy can take as a design parameter in the construction of artificial optical devices, provided we can master the growth of similar crystalline arrangements of the right organic molecules,” Oron concludes.

    The research is described in Nature Photonics.

    The post Mystery of bright-white shrimp solved appeared first on Physics World.

    ]]>
    Research update Optical nanostructure in the antennae, cuticle, tail, and jaw of Pacific cleaner shrimp explains the animal’s striking pigmentation https://physicsworld.com/wp-content/uploads/2023/05/Lysmata_en_Nephtygorgia_web.jpg newsletter1
    Freeman Dyson: the visionary thinker and maverick scientist who challenged authority https://physicsworld.com/a/freeman-dyson-the-visionary-thinker-and-maverick-scientist-who-challenged-authority/ Tue, 16 May 2023 10:00:52 +0000 https://physicsworld.com/?p=107388 Hamish Johnston looks into the radical thoughts and wide-ranging work of mathematical physicist Freeman Dyson

    The post Freeman Dyson: the visionary thinker and maverick scientist who challenged authority appeared first on Physics World.

    ]]>
    Freeman and Imme Dyson at the Baikonur Cosmodrome

In the pantheon of famous physicists, the late Freeman Dyson holds a special place. Often described as a maverick, radical and pioneering scientist, Dyson made significant contributions to the foundations of modern physics. He spent vast swathes of his career on highly speculative projects across a wide range of fields, from space exploration to the origins of life. Despite having a lifelong disdain for authority, Dyson found a place in the US military–industrial complex. He also wrote numerous popular books on science and left a noteworthy scientific legacy.

    Dyson died on 28 February 2020 at the age of 96, and shortly afterwards the physicist, author and historian David Kaiser at the Massachusetts Institute of Technology was approached to write a book that would explore his extraordinary life. As it happens, Kaiser had already written about Dyson in his 2009 book Drawing Theories Apart. He had interviewed him in depth for the book and was also given access to letters that Dyson had written to his parents early on in his career – letters that provide marvellous insights into Dyson from a young age.

    But given the multifaceted nature of Dyson’s life, Kaiser quickly realized just how challenging it would be for him to write such a biography on his own. Instead Kaiser gathered a team of 10 contributors and the result is the thoroughly entertaining and fascinating collection of essays titled “Well, Doc, You’re In”: Freeman Dyson’s Journey through the Universe. I cracked open the book just as I was jetting off on holiday and was instantly hooked. It is a genuine vacation read for those interested in the history of science and influential scientists, as Dyson worked with some of the best – and each contributing author paints a vivid picture of an aspect of Dyson’s life.

    Young rebel

    Dyson was born on 15 December 1923 in the southern English village of Crowthorne, Berkshire. His was a comfortable childhood – at least in terms of material needs. His mother had a law degree and worked as a social worker after Dyson was born. His father was a composer of some note and taught at the Royal College of Music and at Winchester College. Founded in 1382, Winchester is one of the country’s most prestigious private schools and Dyson himself would later be a pupil there.

    A mathematical prodigy from an early age, Dyson once joked that he worked out the concept of the infinite mathematical series while lying in his crib. He was also a voracious reader, who developed a keen interest in science at a very young age. The science writer Amanda Gefter, who has contributed a chapter to “Well, Doc, You’re In” on Dyson’s formative years, says this love for science was fortified by the healthy disdain for authority that he developed early in life.

The problem for the young Dyson was that science was not taught at the preparatory school he attended before going to Winchester. In Britain, such prep schools are generally private institutions designed to prepare children for entry to elite secondary schools. But when Dyson was a child, a good education still focussed on classics with some mathematics, and science was seen as being too practical to be of use to the next generation of gentlemen. Undeterred, Dyson and several of his classmates created a science society – a group that he later referred to as a persecuted minority. Club members read books on science and explained concepts to each other – lessons that Dyson felt could not be learned in the classroom.

    He describes his time at prep school as the worst of his life – the regime was brutal and, to add insult to injury, the school where he boarded was only a short stroll from the family home. But the solace he found in science lit the spark of a remarkable career. “Science is a conspiracy of brains against ignorance, that science is a revenge against oppressors, that science is a territory of freedom and friendship in the midst of tyranny and hatred,” he later wrote.

    Once Dyson arrived at Winchester in 1936, science was on the curriculum. But it was not taught well so Dyson was able to maintain his status as an outsider, despite being a star pupil. In 1941 he went on to study mathematics at the University of Cambridge, only to find the university emptied by the Second World War. He ended up graduating in just two years and would spend many nights clandestinely climbing the exteriors of the university’s ancient buildings with friends.

The war dominated the next phase of Dyson’s life, which is described in the chapter “Calculation and reckoning: navigating science, war, and guilt” by William Thomas, a science policy analyst from the American Institute of Physics. After leaving Cambridge, Dyson did operational research for the Royal Air Force Bomber Command, which Kaiser describes as “refined statistical analyses” looking for patterns that military commanders may have overlooked. Among other things, he calculated the probability with which bombers would collide with each other when flying in a tight formation – a configuration that was known to make missions less susceptible to a successful enemy attack.

    Given his disdain for authority, it is no surprise that Dyson decried the “muddle and mendacity” of Bomber Command. It is likely that Dyson was extremely frustrated that many of his findings were not acted upon, leading him to feel guilty that lives were lost despite his best efforts. Thomas suggests that this incompetence was a symptom of an important problem within the British establishment at the time – that it did not seem to value the nation’s scientists.

    America bound

    This disregard for science appears to be at the heart of Dyson’s decision to settle in the US – a country with a then booming economy that was embracing science and technology as engines of growth. This move is described in the chapter by Kaiser entitled “First apprentice”, which opens in post-war England with Dyson’s decision to switch from mathematics to physics.

    According to Kaiser, Dyson had long been torn between mathematics and physics, and while still at Bomber Command had set himself the challenge of proving a conjecture of number theory. If he succeeded, he told himself he would become a mathematician; if he failed he would pursue a career in physics. Dyson failed and in 1946 returned to Cambridge to become a physicist.

    It was the lack of a suitable doctoral adviser at Cambridge that also drove Dyson to the US, where he arrived in 1947 as a Commonwealth Fellow to do a PhD at Cornell University. His supervisor was the theoretical physicist Hans Bethe, who had left Germany in the mid-1930s because of Nazi persecution and worked on the development of nuclear weapons on the Manhattan Project.

     

    Freeman Dyson relaxing amid diapers

    Dyson spent a year at Cornell, and another at the Institute for Advanced Study (IAS) in Princeton, where he worked with its director Robert Oppenheimer on quantum electrodynamics (QED). He also collaborated closely with Richard Feynman, who was at Cornell at the time, and Dyson was an early user of Feynman’s famous diagrams. Indeed, Kaiser describes Feynman as Dyson’s “private tutor”. Dyson was assigned to improve on a “rough and ready” calculation that Bethe had published in 1947 concerning QED. He got stuck in and made short work of the pesky divergences that had plagued Bethe’s calculation. Dyson breathed new life into the field, which Kaiser says “had ground to a halt before Dyson arrived”.

    Kaiser found that Dyson’s letters provide a crucial understanding of the thought processes that led to his epiphany in QED, which famously happened as he was on a long bus journey. He recounts how Dyson had a “flash of illumination on the Greyhound bus”, as he came up with the equivalence of the two competing formulations of QED. “On the third day of the journey a remarkable thing happened; going into a sort of semi stupor as one does after 48 hours of bus riding, I began to think very hard about physics, and particularly about the rival radiation theories of [Julian] Schwinger and Feynman,” Dyson wrote. “Gradually my thoughts grew more coherent, and before I knew where I was, I had solved the problem that had been in the back of my mind all this year, which was to prove the equivalence of the two theories. Moreover, since each of the two theories is superior in certain features, the proof of equivalence furnished a new form of the Schwinger theory that combines the advantages of both.” Despite claiming that this work was “neither difficult nor particularly clever”, Dyson says he “became quite excited over it when I reached Chicago and sent off a letter to Bethe announcing the triumph”.

    After two years in the US, Dyson’s Commonwealth Scholarship required that he return to the UK, so he moved to the University of Birmingham in 1949. However, he did not last long there. Kaiser says that Dyson had found Cornell “alive with ideas” and that he travelled extensively while he was there – describing his journeys in “almost anthropological detail” in his letters to his family. It is perhaps no surprise that Dyson quickly found his way back to the US.

    Lifelong professorship

    By 1951 Feynman had moved from Cornell to the California Institute of Technology (Caltech) – and, according to Kaiser, Bethe convinced Cornell that Dyson was the only person who could replace Feynman. So Dyson was granted a Cornell professorship, despite having not finished his doctorate – something that Dyson relished for the rest of his life.

    In 1952 Dyson moved again, accepting a lifelong professorship at the IAS, where he remained until his death nearly 70 years later. That long tenure is described in the chapter “A frog among birds” by Robbert Dijkgraaf, a mathematical physicist who ran the IAS from 2012 until he stepped down last year to become Minister of Education, Culture and Science of the Netherlands.

    Dijkgraaf writes that Dyson’s arrival at the IAS corresponded to a growing rift between mathematics and theoretical physics. Theoretical physics was becoming increasingly messy as researchers pushed theories to breaking point in order to describe nature, while mathematics was becoming more abstract and rigorous. Dijkgraaf suggests that Dyson was happy to be part of both worlds. Dyson had said that “some mathematicians are birds and some are frogs”, meaning that some fly high and have an overview of their field while others are deep in the mire of a particular problem, solving it before moving on to another. Dyson saw himself as a frog, hopping from one intellectual pond to another.

    Safe reactors and spaceships

    Perhaps the most fascinating pond that Dyson swam in was that of the US’s burgeoning military–industrial complex. In the 1950s he joined the newly formed General Atomics and would spend his summers on leave from IAS working for the firm in California. According to Kaiser, General Atomics was formed to develop non-military uses of nuclear technologies, with Dyson “thrilled” to be able to apply his mathematical prowess to solve engineering problems.

    A scan of the TRIGA patent made directly from Freeman Dyson's copy

    In a chapter titled “Single stage to Saturn”, Dyson’s son George describes how his father’s first contribution was to help design a small, intrinsically safe fission reactor that would shut down quickly, without human or mechanical intervention. This became the Training, Research, Isotopes, General Atomics (TRIGA) reactor, which was an astonishing success – 66 were built around the world and some of them are still running today.

    However, Dyson’s most intriguing project at General Atomics never got off the ground. This was Project Orion, which aimed to build a spaceship powered by successive nuclear explosions. According to George Dyson, Project Orion began in late 1957 as a response to the Soviet Union’s successful launch of the Sputnik satellite in October that year. His father took a year’s leave of absence from IAS to work on Project Orion, in part because he saw nuclear pulse propulsion as a viable way of exploring the solar system. Kaiser suggests that Dyson, like many of his generation, had childhood fantasies of space travel that were inspired by authors like Jules Verne.

    The original plan was for a 4000 tonne spaceship – powered by 2600 nuclear bombs – that could deliver a 1600 tonne payload to Earth orbit. The idea was that a bomb would be detonated under the spaceship, sending it upwards. Before the spaceship had a chance to fall back, another bomb would be dropped from the spaceship and detonated – and so on. While such a scheme sounds astonishing today, Kaiser says that Dyson produced a large number of technical reports that evaluated the plan in terms of “real physics and real engineering”.
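    As a rough back-of-the-envelope illustration (the figures below are my own, not numbers from the General Atomics reports), reaching low Earth orbit requires a velocity change of roughly 9 km/s once gravity and drag losses are included. Spread over some 2600 pulses, and ignoring the vehicle’s changing mass, each detonation only needs to nudge the ship by a few metres per second:

    \[
    \Delta v_{\rm pulse} \sim \frac{9\ {\rm km\,s^{-1}}}{2600} \approx 3.5\ {\rm m\,s^{-1}},
    \qquad
    \Delta p_{\rm pulse} \sim 4\times10^{6}\ {\rm kg} \times 3.5\ {\rm m\,s^{-1}} \approx 1.4\times10^{7}\ {\rm kg\,m\,s^{-1}} .
    \]

    On this crude accounting, the total impulse is not the obstacle; the hard engineering problem is surviving thousands of nearby nuclear detonations.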

    Pencil drawing of physicists looking into the cosmos

The main question that Dyson and colleagues had to answer was whether such a spaceship could operate without blowing itself to smithereens. And if the structure of the ship could survive, how could the crew be protected from repeated blasts of radiation? That is where Dyson’s calculations came in. Dyson and his colleagues designed a system whereby a detonating bomb would vaporize a propellant material, blasting it upwards towards the spaceship in a relatively tight jet. When the material in the jet struck the bottom of the spaceship, it would create a plasma reaching temperatures hotter than the surface of the Sun.

    A crucial design consideration was the plate on the spacecraft that absorbs kinetic energy from the bombs. Would it withstand repeated assaults by the plasma? Another important question was how radiation in the jet would behave when it hit the plate – would it travel straight through, or be absorbed, or reflected? On both these matters, Dyson and colleagues were able to show that their design was sound. General Atomics even built a one-metre-diameter prototype that was tested using a conventional explosion in 1959.

    Preventing stupid decisions

Despite their work, Project Orion was ultimately terminated in 1965, as two things conspired against it. One was the ascendancy of the National Aeronautics and Space Administration (NASA), which was not interested in nuclear-powered spaceflight. The other was the 1963 nuclear test ban treaty, which made testing Project Orion impossible. As the science writer Ann Finkbeiner points out in her chapter “Dyson, warfare and the Jasons”, Dyson was initially against a test ban – probably because it would mean the end of progress on Project Orion. However, he changed his mind by 1963 and supported the ban because he realized that the rising number of tests being done was unsustainable.

The JASON defence-advisory panel is a group of scientists who were assembled in about 1960 to provide scientific and technical advice to the US Department of Defense. Dyson joined at the very beginning and remained a member until his death. Initially, I found this surprising given Dyson’s damning verdict on how the RAF’s Bomber Command responded to the scientific advice it was given. However, Finkbeiner, whom Kaiser describes as “the world expert” on the JASON panel, points out that Dyson’s Bomber Command experience galvanized him to make “a lifelong commitment” to help prevent military commanders from making stupid decisions with lethal consequences.

    Dyson watching preparations for a tethered test of a flying model propelled by high explosives at General Atomic’s Point Loma test site

Dyson worked on more than 200 studies during his six decades as an adviser on the panel. While most of his work remains classified, Finkbeiner says that a lot of it was related to test bans, missile defence and submarine warfare. One task that she says he revelled in was “lemon detection” – spotting bad ideas and stopping them from being acted on. A famous example is the “Neutrino Detection Primer”, a panel report that was handed to anyone who suggested that a nuclear-powered submarine could be detected by the copious neutrinos that its reactor emitted. Indeed, Dyson reckoned that the advisory group saved the US government hundreds of billions of dollars by helping it avoid such dud projects.

    Dyson also had a great interest in the origins of life, as the chemist and science writer Ashutosh Jogalekar explains in his chapter “A warm little pond”, which discusses the “metabolism-first” hypothesis. Unlike the more familiar replicator-first hypotheses, which focus on understanding how molecules can create copies of themselves, metabolism-first looks at how networks of chemical reactions (such as those essential for life) can emerge and increase in complexity over time.

Like a true physicist, Dyson looked at the emergence of life as a phase transition between thermodynamic states – in this case a state he dubbed “Garden of Eden” and another that he called “hot sulphide soup”. According to Jogalekar, Dyson was an advocate of metabolism-first because it did not require the accuracy that self-replication would need. This is ironic for a scientist famous for the mathematical accuracy of his work – perhaps Dyson realized that nature could never be as accurate as he was.

    Extraterrestrial energy

    No account of Dyson’s life would be complete without a chapter on what is perhaps his most famous notion – the Dyson sphere. This is described in a chapter called “Cosmic seer”, by astrobiologist and writer Caleb Scharf.

    Dyson developed the idea of his sphere in 1960, as he pondered the evolution of a technologically advanced society. He reckoned that such a civilization’s energy consumption would grow until it outstripped the total stellar irradiance received by its planet. He therefore concluded that such a civilization would satisfy its appetite for energy by surrounding its star with a megastructure he dubbed a Dyson sphere. Dyson first wrote about the sphere in Science magazine in 1960, describing a hollow shell surrounding a star that would capture all of the star’s energy.

    While the sphere was originally inspired by a 1937 science-fiction story that Dyson had read, the idea is taken very seriously by astronomers involved in the Search for Extraterrestrial Intelligence (SETI). As Dyson pointed out, the presence of a sphere would have a significant impact on the light we observe coming from a star – shifting its output into the infrared, which is something that could be observed from Earth.
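    A hedged, textbook-style estimate (not Dyson’s own published figures) shows why. A shell of radius R that intercepts the entire solar luminosity must, in equilibrium, re-radiate that power thermally from its outer surface:

    \[
    L_{\odot} = 4\pi R^{2}\sigma T^{4}
    \;\;\Rightarrow\;\;
    T = \left(\frac{L_{\odot}}{4\pi R^{2}\sigma}\right)^{1/4} \approx 390\ {\rm K}
    \quad \text{for } R \approx 1\ {\rm au},
    \]

    which by Wien’s law corresponds to thermal emission peaking near 7–8 μm – squarely in the mid-infrared, where such “waste heat” would stand out against an ordinary stellar spectrum.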

    Freeman Dyson surrounded by his six children and 16 grandchildren at his 90th birthday celebration

For many people who look up to Dyson as a hero of science, there is one aspect of his life that is puzzling – his divergence from the scientific consensus on climate change. In 2006 Dyson published The Scientist as Rebel, in which he set out views on how humans should respond to global warming that were at odds with that consensus. Kaiser addresses this thorny issue head on in his introduction, and he told me that Dyson had engaged with the topic for 50 years and that his position changed significantly over that time.

    Dyson began work on climate change in the early 1970s, when he identified potential impacts of rising carbon dioxide levels and suggested solutions including tree planting and a modest carbon tax. He continued to engage with the topic until the 1990s, when he began to disagree with the growing emphasis on computer simulations and top-down government policies for reducing greenhouse-gas emissions. Fast forward to the 2000s and Kaiser says that Dyson had disengaged from doing climate science and was mainly commenting from the side-lines. This was when he started to say that concerns about global warming were “grossly exaggerated” and that the world’s citizenry have been deluded by climate-model experts.

    Kaiser suggests that Dyson’s hostility stemmed from his view that our current response to climate change is nature-first rather than human-first. “He was to the end a techno-optimist who thought that human ingenuity will get us out of this,” Kaiser told me. He also says that Dyson did not correct the record when “flat out climate-change deniers” misrepresented his views.

    While Dyson’s latter views on climate change seem unfortunate to people who otherwise have a great respect for him, Kaiser points out that Dyson was doing what he did best: challenging authority and championing a contrary view.

    The post Freeman Dyson: the visionary thinker and maverick scientist who challenged authority appeared first on Physics World.

    ]]>
    Feature Hamish Johnston looks into the radical thoughts and wide-ranging work of mathematical physicist Freeman Dyson https://physicsworld.com/wp-content/uploads/2023/05/2023-05-feat-Dyson_birthday.jpg newsletter
    A transistor made from wood https://physicsworld.com/a/a-transistor-made-from-wood/ Tue, 16 May 2023 08:30:13 +0000 https://physicsworld.com/?p=107737 Delignified piece of balsa wood incorporates a conductive polymer to modulate electrical current

    The post A transistor made from wood appeared first on Physics World.

    ]]>
    Researchers in Sweden have built a transistor out of a plank of wood by incorporating electrically conducting polymers throughout the material in a way that retains space for an ionically conductive electrolyte. The new technique makes it possible, in principle, to use wood as a template for numerous electronic components, though the Linköping University team acknowledge that wood-based devices cannot compete with traditional circuitry on speed or size.

Led by Isak Engquist of Linköping’s Laboratory for Organic Electronics, the researchers began by removing the lignin from a plank of balsa wood (chosen because it is grainless and evenly structured) using a combination of NaClO2 (sodium chlorite) and heat treatment. Since lignin typically constitutes 25% of wood, removing it creates considerable scope for incorporating new materials into the structure that remains.

    The researchers then placed the delignified wood in a water-based dispersion of an electrically conducting polymer called poly(3,4-ethylenedioxythiophene)–polystyrene sulfonate, or PEDOT:PSS. Once this polymer diffuses into the wood, the previously insulating material becomes a conductor with an electrical conductivity of up to 69 siemens per metre – a phenomenon the researchers attribute to the formation of PEDOT:PSS microstructures inside the 3D wooden “scaffold”.

    Next, Engquist and colleagues constructed a transistor using one piece of this treated balsa wood as a channel and additional pieces on either side to form a double transistor gate. They also soaked the interface between the gates and channel in an ion-conducting gel. In this arrangement, known as an organic electrochemical transistor (OECT), applying a voltage to the gate(s) triggers an electrochemical reaction in the channel that makes the PEDOT molecules non-conducting, and therefore switches the transistor off.
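    Schematically, the switching relies on the generic OECT de-doping reaction (written here in simplified form, not as a scheme quoted by the Linköping team): cations M+ driven from the electrolyte into the channel compensate the sulfonate groups of PSS, converting conducting PEDOT+ into its poorly conducting neutral form,

    \[
    \mathrm{PEDOT^{+}\!:\!PSS^{-}} \;+\; \mathrm{M^{+}} \;+\; e^{-} \;\longrightarrow\; \mathrm{PEDOT^{0}} \;+\; \mathrm{M^{+}\!:\!PSS^{-}} ,
    \]

    and removing or reversing the gate voltage restores the conducting state.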

    Transistor performance

    Writing in PNAS, the researchers report that the new wooden transistor modulates electrical current in a 1-mm-thick transistor channel with an on/off ratio of 50. Compared to typical modern transistors, it operates with a considerable delay: switching the power on takes about five seconds, while switching off takes one second.

    “Our wood transistor operates according to a different principle to conventional silicon transistors that switch using an electric field,” Engquist explains.  “Compared to these transistors, it is really slow and bulky and we don’t expect it to ever compete with traditional microprocessors and circuits.”

    The new device does respond well to gate voltage modulation, performing on a par with other OECTs in this respect. However, the researchers stress that they didn’t develop the wood transistor with any specific applications in mind. “We did it because we could,” Engquist says.

    Things to do with a wooden transistor

    When pressed, Engquist suggests that possible applications could include regulating electronic plants and any devices in which, for some reason, electrical functionality is needed inside wood.

    “Since the channel of our transistor is so big, it could possibly tolerate higher currents than regular organic transistors,” he tells Physics World. “We could imagine, for example, regulating the current to/from future sensors, solar cells, displays or batteries incorporated into wood.”

    The researchers are now exploring ways to improve the electric properties of their conductive wood. “We also hope to be able to create new devices together with our colleagues at the Laboratory of Organic Electronics, who are among the pioneers in the area of electronic plants.”

    The post A transistor made from wood appeared first on Physics World.

    ]]>
    Research update Delignified piece of balsa wood incorporates a conductive polymer to modulate electrical current https://physicsworld.com/wp-content/uploads/2023/05/utf-8BTG93LVJlc19UcmHMiC10cmFuc2lzdG9yLTIwMjMtMDQtMjYtVEItX0RTQzE2OTQuanBnLnBuZw.jpg newsletter1
    Strange-matter observation points to existence of diquarks in baryons https://physicsworld.com/a/strange-matter-observation-points-to-existence-of-diquarks-in-baryons/ Mon, 15 May 2023 16:25:47 +0000 https://physicsworld.com/?p=107848 Lambda baryon production is seen in data gathered almost 20 years ago

    The post Strange-matter observation points to existence of diquarks in baryons appeared first on Physics World.

    ]]>
    Extensive analysis of data gathered almost 20 years ago has led to a surprising discovery: that strange matter can be formed when a single photon is absorbed simultaneously by two quarks. The research was led by Lamiaa El Fassi at Mississippi State University and poses fundamental questions about the nature of the strong nuclear force.

    Strange-matter particles called lambda baryons contain one each of an up, down, and strange quark. Their quark composition means that these particles are an especially appealing target for physicists studying the strong interaction – the fundamental force that binds quarks together.

Yet due to their fleeting lifetimes, lambda baryons cannot be observed directly. Instead, researchers identify them by detecting their decay products: a pion, together with either a proton (Λ → p π−) or a neutron (Λ → n π0).

    Exotic baryons

    In 2004, experiments at the Continuous Electron Beam Accelerator Facility (CEBAF), part of Jefferson Lab in Virginia, aimed to gain a better understanding of these elusive particles. The accelerator produces a steady stream of energetic electrons, making it ideal for studying exotic baryons formed through a process called semi-inclusive deep-inelastic electron-nucleon scattering (SIDIS).

    In this particular process, CEBAF’s electrons were scattered by protons and neutrons in targets made from deuterium, carbon, iron, and lead. “Because the proton or neutron is totally broken apart, there is little doubt that the electron interacts with the quark inside,” El Fassi explains.

    Following this disintegration, the affected up or down quark – which interacts with a beam electron via a virtual photon – moves around briefly as a free particle, before binding together with other quarks it encounters to form a new hadron. In some exceptional cases, it may bind together with another up or down quark and a strange quark – forming a lambda baryon.

    Decay products

    In the CEBAF experiment, these particles could only be identified by a combination of their decay products and the scattered electrons. The challenges presented by such an indirect measurement have meant that conclusive results have been a long time coming. Yet after over a decade of thorough analysis, beginning when El Fassi was a postdoctoral researcher, she and her team have finally been able to observe lambda baryons in the collisions.

    “These studies help build a story, analogous to a motion picture, of how the struck quark turns into hadrons,” El Fassi explains. “In a new paper [in Physical Review Letters], we report the first-ever observations of such a study for the lambda baryon in the forward and backward fragmentation regions.” These regions refer to the direction of motion of the detected proton or neutron following the lambda’s decay, relative to the incoming electron beam.

The team’s analysis unveiled an especially surprising outcome. Unlike when SIDIS produces lighter, longer-lived particles, CEBAF’s electrons did not seem to interact with single quarks in this case, but with a pair of quarks (called a diquark), which then binds with a strange quark.

    Different mechanism

“This quark pairing suggests a different mechanism of production and interaction than the case of the single quark interaction,” says team member Hafidi.

    Indeed, the implications of this discovery could be particularly striking for quantum chromodynamics (QCD), which is the theoretical framework describing the strong nuclear force.

    “There is an unknown ingredient that we don’t understand,” says team member William Brooks at Federico Santa María Technical University in Chile. “This is extremely surprising, since the existing theory can describe essentially all other observations, but not this one. That means there is something new to learn, and at the moment, we have no clue what it could be.”

    In the future, the team hopes that upcoming improvements to CEBAF and its detectors could bring them a step closer to answering these fundamental questions. As El Fassi explains, “any new measurement that will give novel information toward understanding the dynamics of strong interactions is very important”.

    The post Strange-matter observation points to existence of diquarks in baryons appeared first on Physics World.

    ]]>
    Research update Lambda baryon production is seen in data gathered almost 20 years ago https://physicsworld.com/wp-content/uploads/2023/05/CEBAF.jpg
    Patient QA for SBRT and SRS treatment with IBA SRS detector https://physicsworld.com/a/patient-qa-for-sbrt-and-srs-treatment-with-iba-srs-detector/ Mon, 15 May 2023 14:19:02 +0000 https://physicsworld.com/?p=107654 Available to watch now, IBA Dosimetry explores the benefits associated with using myQA® SRS for patient QA

    The post Patient QA for SBRT and SRS treatment with IBA SRS detector appeared first on Physics World.

    ]]>

    myQA® SRS is a unique solution providing film-class digital resolution for SRS/SBRT patient QA. myQA SRS combines the best of both worlds: unrivalled accuracy and film-class resolution of film QA, with the proven efficiency of the digital detector array workflow. Stephan Dröge, a chief medical physicist from DGD Lungenklinik Hemer, will share his clinical experience with myQA SRS and the benefits associated with its use for patient QA.

    Benefits of attending:

    • Gain an overview of stereotactic patients treatment deliveries and QA methodologies
    • Learn about clinical SRS/SBRT cases
    • Explore CMOS technology
    • Understand the importance of the QA equipment specifications for the SRS/SBRT treatments

    Stephan Dröge MSc, is chief medical physicist at DGD Lungenklinik Hemer, Germany, where he has been involved in the implementation of SBRT and SRS since 2001 and is a member of the German Working Group for SBRT and SRS Treatments. Stephan is co-author of the 2022 article, Planning Benchmark Study for Stereotactic Body Radiation Therapy of Liver Metastases that was published in International Journal of Radiation Oncology, Biology, Physics.

    The post Patient QA for SBRT and SRS treatment with IBA SRS detector appeared first on Physics World.

    ]]>
    Webinar Available to watch now, IBA Dosimetry explores the benefits associated with using myQA® SRS for patient QA https://physicsworld.com/wp-content/uploads/2023/05/2023-06-06-webinar-image2.jpg
    Australia sets out A$1bn national quantum strategy https://physicsworld.com/a/australia-sets-out-a1bn-national-quantum-strategy/ Mon, 15 May 2023 11:29:55 +0000 https://physicsworld.com/?p=107830 The country aims to become a global player in quantum technologies by the end of the decade

    The post Australia sets out A$1bn national quantum strategy appeared first on Physics World.

    ]]>
    Australia has launched its first national quantum strategy with the aim of becoming a global player in quantum technologies by the end of the decade. Released by the Department of Industry, Science and Resources, the A$1bn initiative aims to boost Australia’s economy, protect the country’s national security and prevent a brain drain of top people heading abroad.

The strategy has five central “themes” to boost quantum technologies, including investing in research, securing access to infrastructure, and growing a skilled workforce. It also focuses on three main categories of quantum technology, namely computing, communication and sensing. Quantum sensors could, for example, be used by Australia’s mining industry to locate mineral deposits.

    The quantum strategy also aims to ensure the country does not lose out in the talent race. Australia already has a thriving quantum community, including four nation-wide quantum-focused research centres of excellence. Companies such as Microsoft have also poured millions of dollars into quantum engineering research at the University of Sydney, while several quantum startups have been founded, the oldest of which is cybersecurity firm QuintessenceLabs.

    Australia now joins other leaders in quantum technology, including China, the EU, the UK and the US, in having its own formal quantum strategy. Australia’s Commonwealth Scientific and Industrial Research Organisation projects that the country’s quantum industry could be worth A$4.6bn by the end of the decade and may employ as many people by 2045 as the oil and gas sector does today.

“We are in the top handful of countries embarking on a quantum ambition,” says Australia’s chief scientist, the physicist Cathy Foley. “But we have to act now, as there is intense global attention on the promise of quantum.” Foley believes the strategy will let Australia grow “a thriving deep‑tech industry, built out of co-ordinated, long‑term government investment and a critical mass of world‑class Australian‑trained quantum specialists”.

    The post Australia sets out A$1bn national quantum strategy appeared first on Physics World.

    ]]>
    News The country aims to become a global player in quantum technologies by the end of the decade https://physicsworld.com/wp-content/uploads/2023/05/quantum-circuit-concept-1206098096-iStock_Quardia.jpg newsletter
    Celebrating 10 years of IOP ebooks https://physicsworld.com/a/celebrating-10-years-of-iop-ebooks/ Mon, 15 May 2023 10:04:27 +0000 https://physicsworld.com/?p=107635 Rumours of the death of academic books have been greatly exaggerated

    The post Celebrating 10 years of IOP ebooks appeared first on Physics World.

    ]]>

    In our frenetic world of 24/7 news and algorithm-powered content jostling for our attention, there is something reassuring about the continuing popularity of books. Especially within the academic community, the book format continues to be deeply valued as a means of cutting through the noise and summarizing the latest thinking on a diverse range of topics.

    Indeed, the IOP ebooks programme turns 10 this year and shows no sign of diminishing, having already surpassed 800 titles and more than 16 million chapter downloads. “The demise of the scholarly book is something that’s been predicted for several decades now because there are so many other competing sources of information out there,” says David McDade, head of IOP ebooks. “And yet, here we are in 2023, hundreds of authors want to write books for us, and those books are downloaded hundreds of thousands of times a year.”

McDade was speaking in a recent episode of the Physics World Weekly podcast, as IOP ebooks and Physics World are both produced by IOP Publishing. He praised the tenacity of his colleagues in building the ebooks programme from the ground up during the last decade. Looking to the future, McDade would like to see ebooks incorporate more interactive features, while being careful not to lose the essence of what makes a book unique. He also considered how the open access movement is starting to shake up academic book publishing models.

    Available in multiple digital formats and full colour print, IOP’s ebooks are primarily aimed at researchers and students in postgraduate courses. To date, more than 1500 authors have contributed to books, spanning 17 different subject areas – from quantum science to environment and energy, and even venturing into culture, history and society. Within the catalogue, the three main categories of book are: research and reference texts; course texts; and broad interest titles.

    Evolving formats, human support

    “The IOP’s profit model goes right back into science. So as a scientist, I think you can feel good about a book – which may or may not be a bestseller – but will be used for the right things,” says Lincoln Carr, a theoretical physicist at the Colorado School of Mines, US. Carr is an editor in a current IOP series in quantum technology and has been involved in the IOP ebooks programme since its inception.

Carr, who appears in the video at the top of this article (filmed at the APS March Meeting 2023), predicts there will always be a place for traditional printed books, but within academic publishing the ebook format will eventually take over completely. “It’s not going to be about having a beautiful book from the 1880s, or even 1960s. It’s going to be about having books that incorporate digital content, that one day work with VR and AR,” he says.

    The film also includes a testimonial from José María De Teresa, editor of the 2020 IOP ebook Nanofabrication: Nanolithography techniques and their applications. “[Seeing the book published] was a great joy that I shared with my friends, my colleagues and my family because I thought that I was making an impact in the field,” says De Teresa, based at the Institute of Nanoscience and Materials of Aragon, Spain. “It was a pleasant collaboration, there was a fluent communication between the IOP office and myself.”

    Visit the IOP ebooks website to learn about the process for becoming an author or for accessing the titles – as an individual or as an institution.

     

    The post Celebrating 10 years of IOP ebooks appeared first on Physics World.

    ]]>
    Blog Rumours of the death of academic books have been greatly exaggerated https://physicsworld.com/wp-content/uploads/2023/05/IOP-ebooks-at-10-scaled.jpg
    Photonic time crystal amplifies microwaves https://physicsworld.com/a/photonic-time-crystal-amplifies-microwaves/ Sat, 13 May 2023 14:03:23 +0000 https://physicsworld.com/?p=107809 New 2D metamaterial could boost 6G telecoms

    The post Photonic time crystal amplifies microwaves appeared first on Physics World.

    ]]>
    A major barrier to creating photonic time crystals in the lab has been overcome by a team of researchers in Finland, Germany and the US. Sergei Tretyakov at Aalto University and colleagues have shown how the time varying properties of these exotic materials can be realized far more easily in 2D than in 3D.

    First proposed by Nobel laureate Frank Wilczek in 2012, time crystals are a unique and diverse family of artificial materials. You can read more about them and their broader implications for physics in this Physics World article by Philip Ball – but in a nutshell, they possess properties that vary periodically in time. This is unlike conventional crystals, which have properties that vary periodically in space.

In photonic time crystals (PhTCs), the varying properties are related to how the material interacts with incident electromagnetic waves. “The unique characteristic of these materials is their ability to amplify incoming waves due to the non-conservation of wave energy within the photonic time crystals,” Tretyakov explains.

    Momentum bandgaps

This property is a result of “momentum bandgaps” in PhTCs, in which photons within specific ranges of momenta are forbidden from propagating. Owing to the unique properties of PhTCs, the amplitudes of electromagnetic waves within these bandgaps grow exponentially over time. In contrast, the analogous frequency bandgaps that form in regular, spatial photonic crystals cause waves to attenuate over time.
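    As a minimal, textbook-style sketch (not the specific model used in this work), consider a medium whose permittivity is modulated uniformly in time,

    \[
    \varepsilon(t) = \bar{\varepsilon}\,\bigl[\,1 + 2\alpha\cos(\Omega t)\,\bigr], \qquad \alpha \ll 1 .
    \]

    Because space remains homogeneous, the wavevector k is conserved while frequency is not. A Floquet analysis then shows that modes with k close to Ω/(2v), where v = c/√ε̄ is the wave speed, acquire complex frequencies ω ≈ Ω/2 ± iγ, with γ proportional to αΩ, so their amplitudes grow as e^{γt} – this is the momentum-bandgap amplification described above.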

    PhTCs are now a popular subject of theoretical study. So far, calculations suggest that these time crystals possess a unique set of properties. These include exotic topological structures, and an ability to amplify radiation from free electrons and atoms.

In real experiments, however, it has proven very difficult to modulate the photonic properties of 3D PhTCs throughout their volume. Among the challenges is the need for overly complex pumping networks, which themselves create parasitic interference with the electromagnetic waves propagating through the material.

    Reduced dimensionality

    In their study, Tretyakov’s team discovered a simple fix to this problem. “We have reduced the dimensionality of photonic time crystals from 3D to 2D, because it is much easier to construct 2D structures compared to 3D structures,” he explains.

    Key to the success of the team’s approach lies within the unique physics of metasurfaces, which are materials made from 2D arrays of sub-wavelength sized structures. These structures can be tailored in size, shape, and arrangement in order to manipulate properties of incoming electromagnetic waves in highly-specific and useful ways.

    After fabricating their new microwave metasurface design, the team showed that its momentum bandgap amplified microwaves exponentially.

    These experiments clearly demonstrated that time-varying metasurfaces can preserve the key physical properties of 3D PhTCs, with one key additional benefit. “Our 2D version of photonic time crystals can provide amplification for both free-space waves and surface waves, while their 3D counterparts cannot amplify surface waves,” Tretyakov explains.

    Technological applications

Given their design’s host of advantages over 3D time crystals, the researchers envisage a wide array of potential technological applications.

    “In the future, our 2D photonic time crystals could be integrated into reconfigurable intelligent surfaces at microwave and millimetre wave frequencies, such as those in the upcoming 6G band,” Tretyakov says. “This could enhance wireless communication efficiency.”

    While their metamaterial is designed specifically for manipulating microwaves, the researchers hope that further adjustments to their metasurface could extend its use to visible light. This would pave the way for the development of new advanced optical materials.

    Looking further into the future, Tretyakov and colleagues suggest that 2D PhTCs could provide a convenient platform for creating the even more esoteric “space–time crystals”. These are hypothetical materials that would exhibit repeating patterns in time and space simultaneously.

    The research is described in Science Advances.

    The post Photonic time crystal amplifies microwaves appeared first on Physics World.

    ]]>
    Research update New 2D metamaterial could boost 6G telecoms https://physicsworld.com/wp-content/uploads/2023/05/Photonic-time-crystal.jpg
    The physics of espresso coffee, build a LEGO quantum computer https://physicsworld.com/a/the-physics-of-espresso-coffee-build-a-lego-quantum-computer/ Fri, 12 May 2023 16:31:38 +0000 https://physicsworld.com/?p=107827 Excerpts from the Red Folder

    The post The physics of espresso coffee, build a LEGO quantum computer appeared first on Physics World.

    ]]>
    OK, I know that asking how to make a better cup of coffee will often result in a tedious argument about the relative merits of various appliances, beans and grinds. Now, researchers at the University of Huddersfield in the UK have weighed in with a study of the physics of coffee making. In particular they looked at a curious feature of espresso makers – which force hot water through a cylindrical filter containing finely ground coffee.

    In 2020, researchers discovered that using a finely ground coffee can sometimes produce a weaker tasting cup than using coffee ground to a larger particle size. This seems odd because the surface-to-volume ratio of a finer grind is greater than that of a coarser grind, so I would have thought that more flavour would be extracted from the finer grind.

William Lee and his Huddersfield colleagues reckoned that the effect is caused by the uneven extraction of coffee from different regions within an espresso filter. To investigate this hypothesis, the team ran computer simulations of a simplified system comprising two coffee-filled regions through which water can flow. The coffee was packed at a different density in each region to simulate the variations that would occur in a real-life filter.

    Dynamic extraction

    They found that the difference in density along with the dynamic extraction of coffee led to different flow rates in each region.

    “Our model shows that flow and extraction widened the initial disparity in flow between the two regions due to a positive feedback loop, in which more flow leads to more extraction, which in turn reduces resistance and leads to more flow,” explains Lee.

    One consequence of this phenomenon is that coffee is not fully extracted from one region before all the water has flowed through it. And, the amount of this unextracted coffee increases with decreasing particle size.
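    The snippet below is a deliberately minimal toy model written in Python – my own sketch of the feedback mechanism described above, not the Huddersfield group’s simulation code or parameters. Two parallel regions share a fixed pressure drop; extracting coffee lowers a region’s hydraulic resistance, so the better-flowing region captures an ever larger share of the water:

```python
# Toy two-region espresso model: positive feedback between flow and extraction.
# All parameter values are illustrative only.
import numpy as np

def brew(r0=(1.0, 1.2), soluble0=1.0, k_extract=0.8, k_resist=0.5,
         total_water=3.0, dt=1e-3):
    """Return the soluble coffee remaining in each of two parallel regions."""
    resistance = np.array(r0, dtype=float)   # denser packing -> higher resistance
    soluble = np.full(2, soluble0)           # soluble coffee left in each region
    water_used = 0.0
    while water_used < total_water:
        flow = 1.0 / resistance                        # fixed pressure drop, Darcy-like
        extracted = k_extract * flow * soluble * dt    # faster flow extracts more coffee
        soluble -= extracted
        # extraction opens up the packed bed, lowering resistance: positive feedback
        resistance = np.maximum(resistance - k_resist * extracted, 0.1)
        water_used += flow.sum() * dt
    return soluble

print("unextracted soluble coffee per region:", brew())
```

    In this sketch the more densely packed region ends up retaining appreciably more unextracted coffee than the looser one, echoing the uneven extraction that the full simulations identify.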

    Active effect

    “This effect appears to always be active, and it isn’t until one of the regions has all of its soluble coffee extracted that we see the experimentally observed decrease in extraction with decreasing grind size.”

    The researchers believe that gaining a better understanding of this effect could lead to a better cup of coffee – as well as reducing waste. This is because there is an optimal way to extract coffee from grounds. If the grounds are exposed to too little water, the taste of the coffee is what experts call “underdeveloped”. However, if the grounds are exposed to too much hot water, the taste becomes overly bitter. So even if the overall level of coffee extraction seems fine, the resulting beverage could be a combination of these two less desirable fluids.

    You can read more in Physics of Fluids.

    Quantum LEGO

    There has already been a LEGO Large Hadron Collider, a LEGO James Webb Space Telescope and even a LEGO Kibble balance, but now a LEGO quantum computer can be added to the list. LEGO user SupersonicEmmet098 has created a 403-piece “IBM Q Quantum Computer System” set and uploaded the design onto the LEGO ideas website where it already has over 150 supporters.

At a scale of 1:18, the design features a light-blue IBM server cabinet of microwave electronics and a Bluefors cryostat support frame suspending a golden dilution refrigerator, with an IBM 433-qubit Osprey quantum processor at the bottom.

    “Kids and adults alike can use this LEGO set to discover and learn about the composition of a quantum computer system while recreating a slice of a real-life quantum computer data centre,” notes SupersonicEmmet098, who will now be hoping to hit the next supporter milestone of 1000 votes.

    The post The physics of espresso coffee, build a LEGO quantum computer appeared first on Physics World.

    ]]>
    Blog Excerpts from the Red Folder https://physicsworld.com/wp-content/uploads/2023/05/Espresso-machine.jpg
    New horizons beckon for UK quantum computing https://physicsworld.com/a/new-horizons-beckon-for-uk-quantum-computing/ Fri, 12 May 2023 15:10:42 +0000 https://physicsworld.com/?p=107790 University of Edinburgh lab will validate real-world use cases for quantum computing

    The post New horizons beckon for UK quantum computing appeared first on Physics World.

    ]]>
    As leading figures in the UK’s quantum community gathered in Edinburgh to mark the launch of the  first research centre in the country to be devoted to quantum software, there was a palpable sense that the development of quantum computing in the UK is entering a new and expansive phase. Held in April, the Edinburgh event came just a month after the release of the UK’s National Quantum Strategy, which commits £2.5bn of new funds to the development of quantum technologies over the 10 years from 2024.

    That additional investment more than doubles the UK’s ongoing support for quantum research and innovation, with the current National Quantum Technologies Programme (NQTP) already delivering government funding of about £1bn since 2014. The new strategy also aims to capitalize on the rapid progress that has been made over the last 10 years – both in terms of technical achievements and the emergence of a vibrant and collaborative quantum ecosystem – by placing greater emphasis on translating breakthrough science into practical quantum computers that deliver real value for society and the economy.

    “We have some important questions to answer,” said Sir Peter Knight of Imperial College London, a leading architect of the new strategic framework as well as the NQTP. “What is a quantum computer good for? How do we benchmark and validate performance? Where should we focus our efforts for fast and valuable outcomes?”

    The Quantum Software Lab (QSL) aims to address some of those questions, with a key focus on investigating practical ways to exploit quantum computing for solving problems that are beyond the reach of classical machines. The lab, which is being hosted by the University of Edinburgh’s School of Informatics, has been established in a collaboration with the National Quantum Computing Centre (NQCC), and aims to accelerate the development and adoption of quantum computing by working with industry partners to translate their most vexing computational challenges into use cases that can be addressed through quantum computing.


    “We want to understand the pain points in different industries,” said Elham Kashefi, the director of the QSL. Kashefi is a professor of quantum computing at the University of Edinburgh and a CNRS director of research at the Sorbonne University in Paris, and was appointed chief scientist of the NQCC in November 2022. “That will allow us to develop use cases and applications for quantum computing that solve real problems.”

    Those ambitions align with the NQCC’s user engagement programme, called SparQ, that aims to explore practical uses of quantum computing by providing access to the technology, alongside training and networking opportunities. The QSL team will work closely with the NQCC’s innovation specialists and applications engineers to identify and develop use cases where quantum computing can deliver a demonstrable benefit over classical solutions. “This joint endeavour will create a core research capability to address some of the key challenges in developing quantum software, paving the way towards practical applications of quantum computing that can have a real impact on the industry,” commented Michael Cuthbert, director of the NQCC. “The expertise within QSL will help to drive user adoption and provide a pathway to demonstrating quantum advantage.”

    By creating a focal point for quantum software development in the UK, the QSL aims to attract new research talent, provide education and training for the next generation of quantum developers, and provide a source of scientific expertise for the wider quantum community. In some ways it fills a gap in the UK quantum landscape, with the early years of the NQTP focusing largely on demonstrating novel qubit architectures and developing quantum algorithms for performing specific computational tasks. Now that the emphasis has shifted to building practical quantum computers, there is a greater need for software to control the core quantum processors, characterize and mitigate for the errors caused by noise, and provide the critical connections between the quantum hardware and classical computing infrastructure.

    “Quantum software is the glue that brings together all the different elements of a quantum computer,” commented Matthias Christandl of the University of Copenhagen, a prime mover in the new European Quantum Software Institute, speaking at the launch event. “It requires the ingenuity of quantum software scientists to harness the remarkable power of quantum hardware, while co-development of hardware and software will also be crucial as different qubit architectures continue to evolve.”

    While the QSL will work in collaboration with the NQCC to explore specific use cases across different industry sectors, it will also develop generalized theoretical and mathematical approaches that can be applied across different hardware platforms and applications. Research at the lab will provide the foundational knowledge for follow-up phases of the NQCC programme in quantum software and applications, and will also pave the way for the UK’s first secure and verifiable distributed cloud platform for quantum computing. “We need to keep an open mind and allow blue sky research,” said Kashefi. “Advances in the science may enable new applications, while new applications may inspire new research directions.”


    One key goal for the lab’s research programme is to develop the tools needed to prove whether a quantum-enabled solution achieves a genuine performance advantage over a traditional supercomputer. “We need formal methods to test whether an approach addresses the problem and delivers quantum advantage,” said Kashefi. “We want to explore the universe of possible applications, and find out where quantum advantage can be achieved and where it cannot.”

    Kashefi believes that the outcomes from the QSL’s discovery science will help to guide the development of novel software solutions that can be used to solve real-world problems. “We want to be the engine that brings everything together,” she said. Within the lab’s overall framework there is a clear focus on translating specific use cases into practical solutions, with senior researchers in the team responsible for establishing initial use cases, translating the requirements into a research problem, developing and optimizing appropriate quantum algorithms, and then benchmarking the solution to make sure it meets requirements of the application. “Our aim is to create a start-up culture within an academic environment,” added Kashefi.

    Located in the University of Edinburgh’s School of Informatics – by some margin the largest of its kind in the UK – the QSL will have access to valuable expertise in all areas of computer science. Around 30 researchers are already involved with the lab, while the team also has a direct link with EPCC, the university’s centre of excellence in supercomputing and data science. “To get the best out of current quantum computers they need to operate within a classical computing environment,” said Kashefi. “We need expertise in high-performance computing to help optimize system architectures and control systems, and to create distributed platforms that combine quantum hardware with classical computing resources.”

    The QSL team is also in a perfect position to engage with scientists and engineers at the university who are working on research problems that can be addressed with quantum computing, such as molecular simulations in chemistry or many-body problems in physics. More generally, the aim is to create an open environment that fosters collaboration with both academic groups and industry partners. “We want our research to have the widest possible impact,” said Craig Skeldon, the QSL’s business development manager. “Our aim is to connect with end users in different industries to develop practical solutions, and to work with hardware and software providers who are developing innovative products.”

    The lab’s strategic partnership with the NQCC will also help to create a community of quantum software specialists who can work with other stakeholders, including hardware developers and end users across government, academia and industry, to drive the development and adoption of practical quantum computers. “It takes a whole ecosystem to develop a useful quantum computer,” concluded Kashefi at the end of the launch event. “The NQCC is the dream partner for the QSL, and we are ready for the challenge.”

    The post New horizons beckon for UK quantum computing appeared first on Physics World.

    ]]>
    Analysis University of Edinburgh lab will validate real-world use cases for quantum computing https://physicsworld.com/wp-content/uploads/2023/05/Image-1-NQCC.jpg newsletter
    Entangled ions set long-distance record https://physicsworld.com/a/entangled-ions-set-long-distance-record/ Fri, 12 May 2023 12:00:42 +0000 https://physicsworld.com/?p=107772 Two ions entangled over a distance of 230 m make a solid foundation for quantum networking

    The post Entangled ions set long-distance record appeared first on Physics World.

    ]]>
    Using light and optical fibres to send information from point A to B is today a standard practice, but what if we could skip the “sending and carrying” steps entirely and simply read information instantaneously? Thanks to quantum entanglement, this idea is no longer a work of fiction, but a subject of ongoing research. By entangling two quantum particles such as ions, scientists can put them into a fragile joint state where measuring one particle gives information about the other in ways that would be impossible classically.

    Researchers from the University of Innsbruck, Austria, have now performed this tricky entanglement process on two calcium ions trapped in optical cavities 230 m apart – equivalent to around two football pitches – and connected via a 520 m long optical fibre. This separation is a record for trapped ions and sets a milestone in quantum communication and computation systems based on these quantum particles.

    Towards a quantum network

    Quantum networks are the backbone of quantum communication systems. Among their attractions is that they could link the world with unprecedented computing power and security while enhancing precision sensing and time measurement for applications ranging from metrology to navigation. Such quantum networks would consist of quantum computers – the nodes – connected through the exchange of photons. This exchange can be done in free space, similarly to how light travels through space from the Sun to our eyes. Alternatively, the photons can be sent through optical fibres similar to those used to transmit data for Internet, television and phone services.

    Quantum computers based on trapped ions offer a promising platform for quantum networks and quantum communication for two reasons. One is that their quantum states are relatively easy to control. The other is that these states are robust against external perturbations that can disrupt the information carried between and at the nodes.

    Trapped calcium ions

    In the latest work, research teams led by Tracy Northup and Ben Lanyon at Innsbruck trapped calcium ions in Paul traps – an electric field configuration that produces a force on the ion, confining it in the centre of the trap. Calcium ions are appealing because they have a simple electronic structure and are robust against noise. “They are compatible with technology needed for quantum networks; and they are also easily trapped and cooled, therefore suited for scalable quantum networks,” explains Maria Galli, a PhD student at Innsbruck who was involved in the work, which is described in Physical Review Letters.

    The researchers began by placing a single trapped ion inside each of two separate optical cavities. These cavities are spaces between pairs of mirrors that allow precise control and tuning of the frequency of light that bounces between them (see image above). This tight control is crucial for linking, or entangling, the information of the ion to that of the photon.

    After entangling the ion-photon system at each of the two cavities – the nodes of the network – the researchers performed a measurement to characterize the entangled system. Because the measurement destroys the entanglement, the researchers had to repeat the process multiple times to optimize this step. The photons, each entangled with one of the calcium ions, are then transmitted through the optical fibre that connects the two nodes, which are located in separate buildings.

    Members of the Innsbruck team form a human chain, holding hands, between Tracy Northup (holding a Universitat Innsbruck sign) and Ben Lanyon (holding an IQOQI sign)

    Exchanging information

    While the researchers could have transferred the photons in free space, doing so would have risked disrupting the ion-photon entanglement due to several noise sources. Optical fibres, in contrast, are low loss, and they also shield the photons and preserve their polarization, allowing longer separation between the nodes. However, they are not ideal. “We did observe some drifts in the polarization. For this reason, every 20 minutes we would characterize the polarization rotation of the fibre and correct for it,” says Galli.

    The two photons exchange the information of their respective ion-photon systems through a process known as a photon Bell-state measurement (PBSM). In this state-selective detection technique, the photons’ wavefunctions are overlapped, creating an interference pattern that can be measured with four photodetectors.

    By reading the measured signals on the photodetectors, the researchers can tell whether the information carried by the photons – their polarization state – is identical or not. Matching pairs of outcomes (either horizontal or vertical polarization states) consequently herald the generation of entanglement between the remote ions.
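
    The matching-outcome rule described above can be sketched in a few lines of Python. This is only an illustration of the heralding logic as quoted in the article, not the Innsbruck team’s analysis code, and the detector outcomes shown are invented:

        # Toy heralding check for the photon Bell-state measurement (PBSM).
        # Each attempt yields one polarization outcome per node: 'H', 'V' or None
        # (no photon detected). Matching outcomes herald ion-ion entanglement.

        def is_heralded(outcome_a, outcome_b):
            """Return True if both photons were detected with matching polarization."""
            return outcome_a is not None and outcome_a == outcome_b

        attempts = [("H", "H"), ("H", "V"), (None, "V"), ("V", "V")]
        heralds = sum(is_heralded(a, b) for a, b in attempts)
        print(f"{heralds} heralding events out of {len(attempts)} attempts")  # 2 of 4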

    Trade-offs for successful entanglement

    The researchers had to balance several factors to generate entanglement between the ions. One is the time window in which they do the final joint measurement of the photons. The longer this time window is, the more chance the researchers have of detecting photons – but the trade-off is that the ions are less entangled. This is because they aim to catch photons that arrive at the same time, and allowing a longer time window could lead them to detect photons that actually arrived at different times.

    The researchers therefore needed to carefully check how much entanglement they managed to achieve for a given time window. Over a time window of 1 microsecond, they repeated the experiment more than 13 million times, producing 555 detection events. They then measured the state of the ions at each node independently to check the correlation, which was 88%. “Our final measurement step is in fact to measure the state of both ions to verify that the expected state correlation is there,” Galli says. “This confirms that we have succeeded in creating entanglement between the two ions.”
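
    A rough back-of-the-envelope check, using only the figures quoted above, shows how rare these heralding events are on a per-attempt basis (the repetition rate itself is not given in the article, so no absolute rate is computed here):

        # Figures quoted above for the 1-microsecond detection window.
        attempts = 13_000_000      # experimental repetitions
        heralds = 555              # coincidence events heralding ion-ion entanglement
        correlation = 0.88         # measured correlation between the two ion states

        print(f"Heralding probability per attempt: {heralds / attempts:.1e}")  # ~4.3e-05
        print(f"State correlation of heralded pairs: {correlation:.0%}")       # 88%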

    From a sprint to a marathon

    Two football pitches may seem like a large distance over which to create a precarious quantum entangled state, but the Innsbruck team has bigger plans. By making changes such as increasing the wavelength of photons used to transmit information between the ions, the researchers hope to cover a much greater distance of 50 km – longer than a marathon.

    While other research groups have previously demonstrated entanglement over even longer distances using neutral atoms, ion-based platforms have certain advantages. Galli notes that the fidelities of quantum gates performed with trapped ions are better than those of quantum gates performed on atoms, mainly because interactions between ions are stronger and more stable than interactions between atoms and the coherence time of ions is much longer.

    The post Entangled ions set long-distance record appeared first on Physics World.

    ]]>
    Research update Two ions entangled over a distance of 230 m make a solid foundation for quantum networking https://physicsworld.com/wp-content/uploads/2023/05/Cavity_UIBK_2.jpg newsletter1
    Ask me anything: Masako Yamada – ‘With quantum, the challenge is not so much solving the problem but defining the problem’ https://physicsworld.com/a/ask-me-anything-masako-yamada-with-quantum-the-challenge-is-not-so-much-solving-the-problem-but-defining-the-problem/ Fri, 12 May 2023 10:00:22 +0000 https://physicsworld.com/?p=107373 Masako Yamada is the director of applications at quantum-computing company, IonQ

    The post Ask me anything: Masako Yamada – ‘With quantum, the challenge is not so much solving the problem but defining the problem’ appeared first on Physics World.

    ]]>
    Masako Yamada

    What skills do you use every day in your job?

    The quantum-computing industry is a dynamic, fast-moving space, and I find myself using a mix of different skills to manage the personnel and technical aspects of my job.

    As part of my daily role at IonQ, I manage a team of applications researchers – both scientists and engineers – and ensure that everyone’s work contributes to our goal of developing impactful quantum applications for quantum computers. Our work is truly multidisciplinary, and involves sales, products, marketing, operations, and even legal and finance. Our success depends on me clearly communicating with team members, clients and leadership; fostering a culture of collaboration across functions; and making time for one-to-one sessions to brainstorm ideas, offer feedback or even exchange a joke or two.

    I’m also responsible for bringing to the table novel technical problems that either our company or our customers are looking to solve. With quantum, the challenge is not so much solving the problem but defining the problem. To do this I have to think outside the box and lean on a creative and diverse team.

    What do you like best and least about your job?

    The things I like most about my job are how truly disruptive quantum systems are becoming, and the wealth of talented people I get to meet and work with.

    I came to IonQ from GE Research, where for more than two decades I led teams focused on fields like experimental optics, high-performance computing and industrial AI. Growing up within a company like GE was amazing, but I felt that at IonQ I could make a greater proportional impact. IonQ is a growing company in an exploding field, and rubbing shoulders with some of the pioneers in the space probably feels a bit like working with Thomas Edison when he was founding GE. I interviewed a candidate last week for a position at IonQ and in his follow-up email, he wrote, “I can tell you love what you do.” That was the best compliment.

    As for the most challenging thing about my job, I’d have to say it’s the speed of innovation. Whereas other technologies have had decades to establish their foothold upon which future developments could be built, quantum computing is still fairly new – we’re basically trying to crawl, run and fly at the same time. It can be difficult work, but I’m excited to be part of a company like IonQ that has steadily kept to product roadmaps and business objectives, heading towards mass-producing scalable, commercial quantum systems.

    What do you know today that you wish you knew when you were starting out in your career?

    There’s a common misconception that once you get a PhD in something, that’s it – you’ll be defined by that one topic. I wish I had known back when I was first starting my career that this is simply not the case. People can always reinvent themselves as they enter a new domain, as long as they are in a culture that embraces a growth mindset, and individuals take every opportunity to learn.

    My PhD was in high-performance computing and materials modelling, but when I joined GE Research after graduate school, I set aside those interests to work in experimental optics, as that’s where the need was at the time. My boss trusted that I would learn quickly, and I did. It wasn’t until a decade later, when I moved to the advanced computing group, that I was able to revisit those early interests. I even got to run simulations on the world’s most powerful computer systems at Oak Ridge National Laboratory. Fast forward to where I am today, my role at IonQ would never have existed had I and everyone at the company not believed in our capability to reinvent ourselves to pursue this burgeoning field of quantum computing.

    The post Ask me anything: Masako Yamada – ‘With quantum, the challenge is not so much solving the problem but defining the problem’ appeared first on Physics World.

    ]]>
    Careers Masako Yamada is the director of applications at quantum-computing company, IonQ https://physicsworld.com/wp-content/uploads/2023/04/2023-05-AMA-masako_feature.jpg
    Cosmic generosity: a selfless investment in the future of physics https://physicsworld.com/a/cosmic-generosity-a-selfless-investment-in-the-future-of-physics/ Fri, 12 May 2023 09:47:04 +0000 https://physicsworld.com/?p=107793 Dame Jocelyn Bell Burnell won a $3m prize and is giving it all to physics PhD students from under-represented groups

    The post Cosmic generosity: a selfless investment in the future of physics appeared first on Physics World.

    ]]>
    If you were awarded $3m prize money for your scientific excellence and hard graft, would you give it all away to strangers? That’s what the Northern Irish astrophysicist Dame Jocelyn Bell Burnell did in 2018 after winning the Special Breakthrough Prize in Fundamental Physics for her 1967 discovery of pulsars and her inspiring scientific leadership. She used the cash – topped up with more personal money from a separate prize – to launch the Bell Burnell Graduate Scholarship Fund, which supports PhD students in the UK and Ireland from groups under-represented in physics.

    In this episode of the Physics World Stories podcast, we look at the impacts the award is already having on the lives of early-career physicists. Our first guest is Helen Gleeson, a liquid crystals and soft matter researcher at the University of Leeds, who is chair of the selection panel for the fund. She talks about the importance of providing opportunities for physics students from non-traditional backgrounds, who may face multiple barriers – both personal and structural within the physics community.

    Later in the episode, we also hear from a fund awardee. Joanna Sakowska, a PhD student at the University of Surrey, is studying the formation and evolution of the Magellanic Clouds galaxies, while searching for neighbouring ultra-faint dwarf galaxies believed to contain large quantities of dark matter. Sakowska offers inspiring, practical advice to anyone interested in a career in physics, emphasizing the importance of reflecting on your personal achievements, even if self-promotion does not come naturally!

    Want to know more about the Bell Burnell Graduate Scholarship Fund and how to apply? Listen to the episode or read this recent Physics World article by Helen Gleeson.

     

    The post Cosmic generosity: a selfless investment in the future of physics appeared first on Physics World.

    ]]>
    Dame Jocelyn Bell Burnell won a $3m prize and is giving it all to physics PhD students from under-represented groups Physics World Cosmic generosity: a selfless investment in the future of physics 38:24 Podcast https://physicsworld.com/wp-content/uploads/2023/05/Bell-Burnell-Students_home.jpg newsletter
    Pseudorandomness enhances X-ray microscopy https://physicsworld.com/a/pseudorandomness-enhances-x-ray-microscopy/ Fri, 12 May 2023 08:45:35 +0000 https://physicsworld.com/?p=107612 New technology could help overcome spatial resolution limitations of existing X-ray imaging

    The post Pseudorandomness enhances X-ray microscopy appeared first on Physics World.

    ]]>
    A new X-ray microscopy technique could make it possible to image objects in finer detail, overcoming the spatial resolution limits of today’s X-ray imaging technologies. Developed by researchers in Korea, the technique relies on an X-ray diffuser made from a metal film speckled with tiny holes, and its single-shot spatial resolution of 14 nm is already smaller than the size of the holes. According to the researchers, the resolution could be improved still further by using next-generation X-ray light sources and high-performance X-ray detectors.

    Because X-rays can penetrate most objects, they are a popular tool for characterizing materials as well as imaging bones and other biological structures. The resolution of X-ray imaging is, however, limited by the difficulty of constructing optics for very short wavelengths of light.

    Unlike optical microscopy, which uses refractive lenses to focus and manipulate visible light, X-ray microscopy typically relies on circular gratings known as zone plates. The quality of the nanostructures in these plates determines the spatial resolution of the image, but manufacturing such structures to the desired tolerances is challenging, and once built, they are prone to collapse because of their thin, comb-like nature. The result is that the resolution of X-ray microscopy has never approached its theoretical (diffraction) limit and is instead restricted by practicalities.

    A new X-ray “lens”

    Physicists KyeoReh Lee and YongKeun Park of the Korea Advanced Institute of Science and Technology (KAIST) have now overcome this limitation by cleverly exploiting the random nature of diffraction. Working with colleagues at the Pohang Accelerator Laboratory (PAL), they constructed their X-ray diffuser by punching numerous holes in a thin tungsten film. When this diffuser is placed behind the sample being imaged, it diffracts the light, generating a pattern of speckles. At first glance, this speckle pattern may appear unrelated to the incident light, but Lee explains that a high-resolution sample image can nevertheless be retrieved from it by exploiting the mathematical properties of random diffraction.

    Randomness-based X-ray imaging

    The team first demonstrated this randomness-based imaging technique using visible light in 2016, and the contrast with traditional X-ray microscopy methods is striking, Lee says. “In conventional zone-plate-based X-ray microscopy, the finer outermost zone width is used to collect higher-angle diffracted photons that contain higher-resolution features,” he tells Physics World. “On the contrary, in this work, we do not collect the photons at all. Instead, we measure the phase of high-angle diffracted photons by exploiting the pseudorandomness, and reconstruct the high-resolution image computationally.”
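
    The principle of imaging through a known random diffuser can be pictured with a toy one-dimensional model: treat the calibrated speckle response as a random transfer matrix and recover the object from a single speckle measurement by solving a linear inverse problem. This is only a schematic sketch of the idea, not the KAIST group’s actual reconstruction algorithm:

        import numpy as np

        rng = np.random.default_rng(0)
        n_obj, n_det = 64, 256                    # object pixels, detector pixels

        A = rng.random((n_det, n_obj))            # calibrated speckle transfer matrix (assumed known)
        x_true = np.zeros(n_obj)
        x_true[[10, 30, 31, 50]] = 1.0            # toy object with a few bright features

        y = A @ x_true                            # single-shot speckle measurement (noise-free)
        x_rec, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares reconstruction

        print("max reconstruction error:", np.abs(x_rec - x_true).max())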

    Lee and colleagues say their new technique could substantially improve the resolution of X-ray microscopy, especially at very high energies (the so-called hard X-ray regime) where making high-resolution zone plates is more difficult. Possible applications could include non-invasive observations of the fine structures present in nanoscale samples of battery materials, ceramics, semiconductors and more.

    While the technique shows promise, the researchers acknowledge that its current resolution of 14 nm is “not very impressive” compared with alternatives. Lee, however, argues that using next-generation X-ray light sources such as diffraction-limited storage rings along with high-performance X-ray detectors could pave the way for much higher resolutions. The ultimate goal, he adds, is to reach sub-nanometre image resolution using X-rays.

    “In principle, increasing the size of the diffuser and detector in the demonstrated setup could potentially attain the desired resolution, but for non-crystalline samples, it may be difficult to obtain sufficient diffracted photons for detection,” Lee concludes.

    The present work is detailed in Light: Science and Applications.

    The post Pseudorandomness enhances X-ray microscopy appeared first on Physics World.

    ]]>
    Research update New technology could help overcome spatial resolution limitations of existing X-ray imaging https://physicsworld.com/wp-content/uploads/2023/05/X-ray-microscopy-figure-featured.jpg
    Machine-learning innovation in RayStation: prioritizing speed, automation, efficiency https://physicsworld.com/a/machine-learning-innovation-in-raystation-prioritizing-speed-automation-efficiency/ Thu, 11 May 2023 16:10:39 +0000 https://physicsworld.com/?p=107727 RaySearch Laboratories and its clinical customers are leveraging advances in machine learning to reimagine the radiotherapy workflow

    The post Machine-learning innovation in RayStation: prioritizing speed, automation, efficiency appeared first on Physics World.

    ]]>
    Machine-learning technologies are unleashing a wave of data-driven innovation and transformation in radiation oncology, yielding step-function improvements in automation, workflow efficiency and consistency of treatment – both for individual clinics and across multicentre healthcare systems. Writ large, the end-game of data-driven oncology represents a compelling narrative – one that will be elaborated in detail for visitors to the booth of RaySearch Laboratories, the Stockholm-based oncology software company, at the annual congress of the European Society for Radiotherapy and Oncology (ESTRO) in Vienna, Austria, later this week.

    “Clinical collaboration and model validation are essential for the successful deployment of machine learning in the planning, delivery and management of radiotherapy treatment programmes,” explains Fredrik Löfman, director of machine learning at RaySearch. What Löfman is alluding to, specifically, is the at-scale collection and aggregation of data for model development from RaySearch’s international user base, while partnering closely with clinical experts on tasks like data enrichment and data curation to ensure robust validation of machine-learning models for the vendor’s flagship RayStation treatment planning system (TPS).

    Front-and-centre on the RayStation innovation roadmap are automated deep-learning segmentation (DLS) and deep-learning-enabled automation in treatment planning. “The priority is to work with medical physicists and radiation oncologists to improve, optimize and generalize the machine-learning models in RayStation over their life-cycle,” adds Löfman. “After all, it’s the clinics that provide the real-world evaluation and validation of machine learning measured in terms of treatment quality and patient outcomes.”

    Streamlined segmentation

    That process of clinical validation is already well under way – and accelerating. Consider the commercial roll-out and clinical trajectory of DLS, with a growing number of treatment centres fast-tracking the clinical adoption of RayStation’s catalogue of DLS models for automated segmentation of diverse disease indications spanning head-and-neck/brain, thorax and breast, abdomen and pelvis – in some cases, reducing the time spent on patient contouring by as much as 75% versus manual or semi-automatic methods.

    Put simply, RayStation’s DLS functionality – trained and validated on large-scale patient data sets – automatically creates contours of critical structures in the tumour near-environment. Clinical teams are then able to review and fine-tune the segmentation in order to optimize tumour control and reduce radiation toxicity.

    Fredrik Löfman

    Last year, a case study in this regard saw the training, validation and clinical implementation of RayStation DLS models for radiotherapy of loco-regional breast cancer – a collaboration between RaySearch, St Olavs Hospital (Trondheim, Norway) and Ålesund Hospital (Ålesund, Norway). The joint team trained DLS models for 18 structures (including breast lymph nodes) on 170 left-sided breast-cancer cases; another 30 patient cases were used for validation. Based on the first two months of clinical experience, the treatment centres reduced total delineation time from roughly one hour to 15 minutes per patient, while the DLS models also out-performed manual segmentation methods in terms of the consistency and standardization of contouring.
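
    Contour quality in validations like this is often quantified with an overlap metric such as the Dice coefficient. The sketch below is a generic illustration of that comparison; it is not RayStation code, and the article does not specify which metrics were used:

        import numpy as np

        def dice(auto_mask, manual_mask):
            """Dice similarity coefficient between two binary segmentation masks."""
            overlap = np.logical_and(auto_mask, manual_mask).sum()
            total = auto_mask.sum() + manual_mask.sum()
            return 2.0 * overlap / total if total else 1.0

        # Toy 2D masks standing in for one CT slice of an organ contour.
        auto = np.zeros((64, 64), dtype=bool)
        manual = np.zeros((64, 64), dtype=bool)
        auto[20:40, 20:40] = True
        manual[22:42, 20:40] = True
        print(f"Dice = {dice(auto, manual):.2f}")   # 0.90 for this example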

    “The DLS methodology, algorithms and ‘infrastructure’ are an integral part of RayStation,” explains Löfman. “As such, DLS is a natural extension of the TPS and modelling of patients, with patient data always remaining within RayStation and no need for users to export image data and import results.” What’s more, the DLS catalogue is growing with every model release and will ultimately cover all of the main disease sites for radiotherapy treatment. An enhanced prostate model will go live in June, for example, while models for head-and-neck lymph nodes are another development priority this year.

    Planning horizon

    Downstream from DLS in the RayStation workflow, Löfman and his cross-disciplinary team – 20 scientists and engineers split across planning, imaging and analytics subgroups – are also pressing ahead with the clinical roll-out of deep-learning-enabled automation in treatment planning. Here, RayStation’s machine-learning models are used to predict and optimize 3D spatial dose, with in-built strategies to automatically generate a set of deliverable treatment plans across key modalities, including intensity-modulated radiotherapy (IMRT), volumetric modulated arc therapy (VMAT), helical tomotherapy and pencil-beam-scanning treatment systems.

    Operationally, fast-track comparison of those candidate plans is followed by selection of the optimal plan for each patient in terms of tumour coverage, conformality and tissue-sparing. In this way, the radiation oncology team can quickly review the plans for each patient, pick the most suitable, and then fine-tune (automatically, semi-automatically or manually) if needed.
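
    That selection step can be pictured as a simple weighted ranking of candidate plans. The metrics, weights and numbers below are hypothetical, intended only to illustrate the comparison, and are not RayStation’s actual scoring scheme:

        # Hypothetical plan-quality metrics for three auto-generated candidate plans.
        candidates = {
            "plan_A": {"target_coverage": 0.97, "conformity": 0.90, "oar_sparing": 0.80},
            "plan_B": {"target_coverage": 0.95, "conformity": 0.94, "oar_sparing": 0.88},
            "plan_C": {"target_coverage": 0.98, "conformity": 0.85, "oar_sparing": 0.75},
        }
        weights = {"target_coverage": 0.5, "conformity": 0.3, "oar_sparing": 0.2}

        def score(metrics):
            """Weighted score balancing coverage, conformity and organ-at-risk sparing."""
            return sum(weights[k] * metrics[k] for k in weights)

        best = max(candidates, key=lambda name: score(candidates[name]))
        print(best, f"score = {score(candidates[best]):.3f}")   # plan_B in this toy example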

    “We now have over a dozen centres using RayStation’s deep-learning-enabled treatment planning clinically on a regular basis – saving lots of time and effort in the process,” explains Löfman. “Working with our customers, we have proved that deep-learning technology delivers robust, high-quality plans automatically – for both photon and proton treatment systems and a range of disease sites spanning prostate, lung, breast, head-and-neck and rectum.”

    Individualized treatment planning

    RaySearch, for its part, works closely with end-users to configure deep-learning planning models to local treatment protocols and clinical preferences, while deployment into the radiotherapy workflow is a multistep process designed to streamline the path to clinical translation. In the first instance, RaySearch engineers will validate the model prior to release (a mix of quantitative and qualitative assessment), with the model scope and limitations subsequently shared with the customers, after which the clinic will commission the model (evaluating its performance on local data) ahead of approval and live implementation in the radiotherapy treatment chain.

    “It is essential to consider the full life-cycle of the clinically deployed deep-learning models,” notes Löfman. “Right now, for example, we are collaborating with key clinical partners to initiate a systematic programme of evaluation looking at model performance over time.”

    Meanwhile, delegates attending the ESTRO exhibition will be able to see the latest RayStation innovations up close – with one eye-catching product demonstration highlighting the operational upside of integrating DLS and deep-learning-enabled planning within a unified TPS environment. Starting with the CT image of a prostate case, the demonstration will show how DLS can fast-track segmentation of all critical structures and the prostate to automatically generate the target volumes. The DLS output then feeds seamlessly into the VMAT plan set-up, using a deep-learning model to automatically generate a deliverable, high-quality plan. “This is a game-changer,” claims Löfman. “The DLS and treatment planning take approximately 2 minutes end-to-end with only a single user-click to initiate the process.”

    Clinical intelligence

    Notwithstanding the headline focus on machine learning, Löfman is also pushing the importance of “big data” as an enabler of clinical best practice in radiation oncology – and specifically the availability, accessibility and standardization of patient and workflow data to support optimized treatments and enhanced patient outcomes. At the heart of that collective conversation is RayIntelligence, the vendor’s cloud-based oncology analytics system, which combines consolidated data warehousing as well as structuring, transformation and dashboarding of the resulting centralized data repository for easier consumption and analysis.

    “RayIntelligence is all about helping clinics to become more data-driven,” notes Löfman. “In other words: using data collected during the ‘patient journey’ to deliver personalized care that’s grounded in real-world evidence.” There is a gap, he argues, for this sort of data warehousing capability, such that users will be able to visualize and drill down into their patient and workflow data in near-real-time to facilitate benchmarking, outlier detection and continuous process improvement.
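
    A minimal sketch of that kind of benchmarking and outlier detection, run on an invented table of plan records (the column names and z-score threshold are assumptions, not the RayIntelligence schema):

        import pandas as pd

        # Hypothetical per-plan records pooled from several treatment centres.
        df = pd.DataFrame({
            "centre":            ["A", "A", "A", "B", "B", "B"],
            "planning_time_min": [42, 38, 120, 35, 40, 37],
            "target_coverage":   [0.97, 0.96, 0.95, 0.98, 0.97, 0.96],
        })

        # Benchmark: mean planning time per centre.
        print(df.groupby("centre")["planning_time_min"].mean())

        # Outlier detection: flag plans more than 2 standard deviations from the overall mean.
        z = (df["planning_time_min"] - df["planning_time_min"].mean()) / df["planning_time_min"].std()
        print(df[z.abs() > 2])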

    Long term, Löfman also sees opportunities for RayIntelligence to provide the infrastructure and tools needed for evaluation of machine-learning models on relevant patient cohorts. He concludes: “Innovation in machine learning requires large-scale data sets that researchers, clinics and industry can access in an unbiased and representative way. RayIntelligence provides the building blocks needed to centralize – and allow models to learn from – the vast amounts of data generated by multicentre clinical trials.”

    The post Machine-learning innovation in RayStation: prioritizing speed, automation, efficiency appeared first on Physics World.

    ]]>
    Analysis RaySearch Laboratories and its clinical customers are leveraging advances in machine learning to reimagine the radiotherapy workflow https://physicsworld.com/wp-content/uploads/2023/05/RayStation-Deep-Learning-segmentation-models.jpg
    Portable imaging system targets eye diseases, pondering the mysteries of dark matter https://physicsworld.com/a/portable-imaging-system-targets-eye-diseases-pondering-the-mysteries-of-dark-matter/ Thu, 11 May 2023 15:50:09 +0000 https://physicsworld.com/?p=107780 Meet the new president of the Australian Institute of Physics and the CEO of a medical start-up

    The post Portable imaging system targets eye diseases, pondering the mysteries of dark matter appeared first on Physics World.

    ]]>
    This episode of the Physics World Weekly podcast features interviews with the chief executive of a UK-based medical start-up and the new president of the Australian Institute of Physics.

    First up is Alasdair Price of the medical-imaging company Siloton, which is using photonic integrated circuits to develop a portable imaging system that can monitor the progression of eye diseases such as age-related macular degeneration.

    He is followed by the theoretical physicist Nicole Bell of the University of Melbourne who talks about her research into dark matter and other aspects of her work at the intersection of particle physics, astrophysics and cosmology. She also chats about her recent appointment as president of the Australian Institute of Physics and her vision for that organization.

    The post Portable imaging system targets eye diseases, pondering the mysteries of dark matter appeared first on Physics World.

    ]]>
    Podcast Meet the new president of the Australian Institute of Physics and the CEO of a medical start-up https://physicsworld.com/wp-content/uploads/2023/05/Siloton-list.jpg newsletter
    Environmental groups sue US aviation watchdog following the failed launch of SpaceX’s Starship craft https://physicsworld.com/a/environmental-groups-sue-us-aviation-watchdog-following-the-failed-launch-of-spacexs-starship-craft/ Thu, 11 May 2023 14:12:02 +0000 https://physicsworld.com/?p=107744 The launch on 20 April caused significant damage to the launchpad and the surrounding area

    The post Environmental groups sue US aviation watchdog following the failed launch of SpaceX’s Starship craft appeared first on Physics World.

    ]]>
    Five environmental and cultural-heritage groups are suing the US Federal Aviation Administration (FAA) following the maiden launch of SpaceX’s Starship. The launch, which took place on 20 April in Boca Chica, Texas, caused significant damage to the launchpad and the surrounding area. The groups say that by permitting take-off without a comprehensive environmental review, the FAA violated the US National Environmental Policy Act.

    The maiden launch of SpaceX’s Starship atop the Super Heavy Rocket lasted for barely four minutes before it exploded. While SpaceX initially called the flight’s end “a rapid unscheduled disassembly”, it transpired that the launch team had sent a self-destruct command to the rocket as it started to lose altitude and tumble. SpaceX boss Elon Musk, however, declared the launch a success as the rocket reached an altitude of about 39 km.

    Telemetry data indicated that six or seven of the rocket’s 33 engines were damaged, possibly by material torn from the pad during the launch. “[The launch] was 70% success, 30% failure,” says systems engineer Olivier de Weck from the Massachusetts Institute of Technology. “This was a very first test flight and a big success from a rocket development perspective – it reached maximum pressure and sent back a lot of telemetry.”

    Damage to the launchpad was hardly unexpected. Three months before the launch, Musk noted that SpaceX had begun work on a water-cooled steel plate that would be placed beneath the concrete pad to help it deal with the heat and force of the engines’ firing. As SpaceX thought that the pad would survive the launch, based on an earlier test, it went ahead despite the plate not being ready.

    The damage, however, was far worse than expected. According to Philip Metzger from the University of Central Florida, the concrete in the pad cracked and gases from the engines forced their way into the cracks, splitting the concrete further. Pieces of concrete were catapulted across the launch site, and huge amounts of dust were ejected over several square kilometres.

    “Launch and landing pads are touchy,” Metzger noted on Twitter. “Any little thing that goes wrong can cause a zipper effect that creates a giant problem.”

    Other damage included the destruction of multiple cameras set up to snap the lift-off as well as a 14,000 m² fire in a state park near the launch pad. The five groups suing the FAA say that “catastrophic damage” was caused on the ground nearby while the US Fish and Wildlife Service found debris scattered over 1.5 km² of SpaceX property and the state park.

    ‘Agile development’

    The organizations suing the FAA say it should have carried out an in-depth environmental impact statement before approving the launch. They argue that the agency used “a considerably less thorough analysis” than originally planned “based on SpaceX’s preference”. The analysis did not, for example, consider the possible closure of the road leading to the launch site and the public beach next to the site.

    The impact of the test launch also indicates the difference in approach between SpaceX and NASA – and between governmental and commercial space programmes generally.

    “In systems engineering we talk about the waterfall process, carried out in a measured, rather slow, tedious step by step way,” de Weck told Physics World. “With SpaceX, it’s now agile development – a fast, test-driven process that is used in software coding.” In other words, SpaceX is more prepared to lose rockets and crewless spacecraft to perfect the technology as quickly as possible.

    The next launch of the Starship and the Super Heavy Rocket now awaits installation of the cooled steel plate on the launchpad as well as the completion of an FAA investigation into the mishap. “Success comes from what we learn,” SpaceX states on its website. “We learned a tremendous amount about the vehicle and ground systems…that will help us improve on future flights of Starship.”

    The post Environmental groups sue US aviation watchdog following the failed launch of SpaceX’s Starship craft appeared first on Physics World.

    ]]>
    News The launch on 20 April caused significant damage to the launchpad and the surrounding area https://physicsworld.com/wp-content/uploads/2023/05/Ft6b_vUaQAEwYG4-small.jpg newsletter
    Two-in-one gel suppresses aggressive brain tumours https://physicsworld.com/a/two-in-one-gel-supresses-aggressive-brain-tumours/ Thu, 11 May 2023 08:45:24 +0000 https://physicsworld.com/?p=107709 A novel hybrid chemo- and immunotherapy technique could help treat glioblastoma

    The post Two-in-one gel suppresses aggressive brain tumours appeared first on Physics World.

    ]]>
    the mixture self-assembles into a gel when injected into a saline solution

    A new gel made by combining molecules routinely employed in chemotherapy and immunotherapy could help treat aggressive brain tumours known as glioblastomas, according to new work by researchers at Johns Hopkins University in the US. The gel can reach areas that surgery might miss and it also appears to trigger an immune response that could help suppress the formation of a future tumour.

    Glioblastomas are the most common, and most dangerous, type of brain tumour. Conventional treatment typically involves a combination of surgery, radiation therapy and chemotherapy, but patient outcomes are generally poor.

    In the new work, the researchers, led by bioengineer Honggang Cui, made their gel by converting the small-molecule, water-insoluble anticancer drug paclitaxel into a molecular hydrogelator. They then added aCD47, a hydrophilic macromolecular antibody, in solution to this hydrogelator. To be able to do this, the researchers first used a special chemical design to assemble the paclitaxel into filamentous nanostructures.

    When loaded into the resection cavity left behind after a tumour has been surgically removed, the mixture spontaneously forms into a gel and seamlessly fills the minuscule grooves in the cavity, covering its entire uneven surface. The gel can reach areas that may have been missed during surgery and that current anticancer drugs struggle to reach. The result: lingering cancer cells are killed and tumour growth suppressed, say the researchers. They describe their technique, which they tested in mice, in PNAS.

    The gel releases the paclitaxel over a period of several weeks. During this time, the gel remains close to the injection site, reducing any “off-target” side effects. It also appears to trigger a macrophage-mediated immune response: the aCD47 blocks the “don’t eat me” signal that tumour cells display, which in turn promotes tumour cell phagocytosis (one of the main methods by which cells, particularly white blood cells, defend our body from external invaders) by immunity-promoting macrophages and also triggers an antitumour T cell response. In this way, the aCD47/paclitaxel filament hydrogel effectively suppresses future recurrence of the brain tumour.

    In tests on mice with brain tumours, the gel raised the overall survival rate of animals that hadn’t undergone tumour surgery to 50%. This figure increased to a striking 100% survival in mice that also had surgical removal of the tumour.

    “The gel could supplement the current and only FDA-approved local treatment for brain tumours, the Gliadel wafer,” Cui tells Physics World. “The current formulation also has the potential to treat other types of human cancer.”

    The Johns Hopkins team now plans to test its gel in other animals to further confirm its therapeutic efficacy. “We also plan to undertake more studies to assess its potential toxicity and determine dose regimens,” says Cui.

    The post Two-in-one gel suppresses aggressive brain tumours appeared first on Physics World.

    ]]>
    Research update A novel hybrid chemo- and immunotherapy technique could help treat glioblastoma https://physicsworld.com/wp-content/uploads/2023/05/hydrogel-featured.jpg newsletter1
    Exchange bias in a single-layer film is created using ion implantation https://physicsworld.com/a/exchange-bias-in-a-single-layer-film-is-created-using-ion-implantation/ Wed, 10 May 2023 15:34:56 +0000 https://physicsworld.com/?p=107667 New approach could be used to create antiferromagnetic spintronics devices

    The post Exchange bias in a single-layer film is created using ion implantation appeared first on Physics World.

    ]]>
    A new method for creating and controlling exchange bias in an antiferromagnetic single-layer film has been developed by researchers in the US and Switzerland. The method involves low-energy ion implantation into the film, and it has the potential to advance the development of antiferromagnetic spintronics devices.

    Exchange bias refers to a shift in the position of the hysteresis loop of a ferromagnet along the horizontal (applied magnetic field) axis. This can occur when an antiferromagnetic layer and a ferromagnetic layer are brought close together, pinning the ferromagnet’s magnetization. Exchange bias materials play important roles in spintronic technologies such as the read head sensors in hard disk drives; spin valves; and the magnetic tunnel junctions used in digital memories.
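
    In practice the exchange bias field is read straight off the shifted hysteresis loop: if the magnetization switches at fields H_c1 and H_c2 on the descending and ascending branches, the loop’s horizontal offset gives the bias and its half-width gives the coercivity. A minimal worked example with illustrative numbers (not values taken from the paper):

        # Switching fields (in oersted) read from the two branches of a shifted loop.
        H_c1, H_c2 = -120.0, 38.0          # illustrative values only

        H_eb = (H_c1 + H_c2) / 2.0         # exchange bias field: offset of the loop centre
        H_c = abs(H_c1 - H_c2) / 2.0       # coercivity: half the loop width

        print(f"Exchange bias field: {H_eb:.0f} Oe")   # -41 Oe
        print(f"Coercivity: {H_c:.0f} Oe")             # 79 Oe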

    Generating a strong exchange bias field between a ferromagnet and an antiferromagnet requires a good quality interface between the two materials. However, achieving this in thin films can be challenging. Moreover, to obtain exchange bias, the system must be heated above the antiferromagnet’s Néel temperature and then cooled in the presence of a magnetic field. This thermal treatment can cause diffusion between the two layers, which significantly reduces the exchange bias. To prevent this, diffusion barriers are generally incorporated into the stack structure. However, this increases the complexity and thickness of the device, making it unsuitable for the miniaturization of spintronic devices.

    Material for the future

    Now Cory Cress and Steven Bennett at the US Naval Research Laboratory, together with colleagues there and at the Paul Scherrer Institute and Oak Ridge National Laboratory, have used low-energy ion implantation to create an appreciable exchange bias in the antiferromagnetic material iron–rhodium (FeRh). And they have done this without the need for a separate ferromagnetic layer.

    Ion implantation is a common technique in materials science and engineering and is used to modify the properties of a material by introducing ions into its surface or bulk. The process involves creating a beam of ions that is then directed at a target material.

    In their experiments, the team first deposited a single-crystal FeRh film onto a magnesium oxide substrate (see figure below). The sample was then heated to 730 °C and cooled in a process called annealing. This put the FeRh film into a face-centred cubic-type crystal structure. This phase exhibits a high-speed meta-magnetic transition from an antiferromagnetic to a ferromagnetic state when the temperature exceeds about 400 K. Additionally, the FeRh film possesses a switchable magnetic moment that can quickly flip direction, making it promising for various spintronic applications.

    Exchange bias image

    Taking a cue from their previous study of tuning a meta-magnetic transition temperature in FeRh through ion implantation, the researchers implanted low-energy helium or iron ions into FeRh films at room temperature. This created ferromagnetic surface layers adjacent to the antiferromagnetic bulk of the film, resulting in a relatively large exchange bias of 41 Oe for the iron-implanted film and 36 Oe for the helium-implanted film.

    According to the team, these findings are promising because the exchange bias effects previously observed in FeRh have been typically very low or only seen below room temperature. As a result, the team believes creating exchange bias by implantation in FeRh could lead to the development of advanced antiferromagnetic devices.

    The origin of exchange bias has remained a long-standing puzzle since its discovery, despite the development of various models that try to explain it. In this study, Cress, Bennett and colleagues aimed to shed further light on the origin of this phenomenon by subjecting ion-implanted FeRh films to one hour of annealing at either 400 °C or 700 °C. After the annealing process, the team noted a substantial decrease in the exchange bias field of both implanted films. This was attributed to the healing of defects formed during implantation and restoration of the film’s antiferromagnetic state near the surface.

    Defects are responsible

    By using annealing to restore the pristine properties of the helium-implanted film, the researchers found compelling evidence that any exchange bias present in the film is due to the defects caused by the ion implantation and not because of any changes in the FeRh composition due to the addition of iron ions. Furthermore, the team used a domain state model to explain these results, proposing that non-magnetic defects within the antiferromagnetic layer play a crucial role in forming and stabilizing the exchange bias field.

    To gain further insight into the exchange bias of FeRh films, the team employed polarized neutron reflectometry. This technique measures magnetization as a function of depth by directing polarized neutrons on to the surface of the film and measuring the intensity of the reflected neutrons.

    Using this technique, the researchers observed the pinned uncompensated magnetic moments within the antiferromagnetic region of the film. They found that these uncompensated moments result from defects, such as vacancies created by ion implantation. When coupled with an adjacent ferromagnetic layer, the vacancies result in the exchange bias phenomenon. According to the researchers, these findings provide a convincing proof for the domain state model of exchange bias.

    The team’s findings are reported in the Journal of Materials Chemistry C and could potentially lead to the application of the domain state model in diverse magnetic spin systems.

    The post Exchange bias in a single-layer film is created using ion implantation appeared first on Physics World.

    ]]>
    Research update New approach could be used to create antiferromagnetic spintronics devices https://physicsworld.com/wp-content/uploads/2023/05/Magnetic-layers.jpg
    Germany reveals €3bn plan to build a quantum computer by 2026 https://physicsworld.com/a/germany-reveals-e3bn-plan-to-build-a-quantum-computer-by-2026/ Wed, 10 May 2023 13:31:19 +0000 https://physicsworld.com/?p=107671 The move is part of an initiative to be competitive with countries that have already taken steps to build a quantum computer

    The post Germany reveals €3bn plan to build a quantum computer by 2026 appeared first on Physics World.

    ]]>
    The German government says it will spend €3bn over the next three years to build a universal quantum computer. The project is part of a new initiative to make Germany competitive with countries that have already built or are taking steps to construct such a device. It is hoped the cash will boost the German economy and place the country at the top of quantum developments in the European Union.

    Set to be built by 2026, Germany’s quantum computer will exploit current quantum technology. It will have a capacity of at least 100 qubits but this could later be expanded to 500 qubits. Funding for the device includes €2.2bn split among several government ministries, including €1.37bn for the research ministry. National research institutes will receive another €800m.

    The initiative also includes the commitment to build a quantum ecosystem and foster a quantum industry. Several major German companies and institutions are already active in quantum technology. The automotive supplier Bosch, for example, is working with IBM to see if quantum computing simulations could help to replace rare-earth metals in electric motors.

    German laser giant Trumpf, meanwhile, is developing quantum computer chips as well as quantum sensors, while semiconductor manufacturer Infineon has developed the first quantum-encrypted computer chips. The German Aerospace Center DLR has also launched its first test satellites for quantum-key distribution.

    According to German education minister Bettina Stark-Watzinger, quantum technology is  crucial for Germany’s technological sovereignty. She expects that by 2026, “at least 60 end users of quantum computing should be active in Germany”, adding that the country should “be among the top three within the EU and at least reach the level of the US or Japan [in quantum computing]”.

    The post Germany reveals €3bn plan to build a quantum computer by 2026 appeared first on Physics World.

    ]]>
    News The move is part of an initiative to be competitive with countries that have already taken steps to build a quantum computer https://physicsworld.com/wp-content/uploads/2023/05/quantum-computer-web-45117792_iStock_Devrimb.jpg newsletter
    Giant tunnelling magnetoresistance appears in an antiferromagnet https://physicsworld.com/a/giant-tunnelling-magnetoresistance-appears-in-an-antiferromagnet/ Wed, 10 May 2023 13:00:52 +0000 https://physicsworld.com/?p=107507 New spin-filter magnetic tunnel junction could make a promising platform for spintronic devices

    The post Giant tunnelling magnetoresistance appears in an antiferromagnet appeared first on Physics World.

    ]]>
    Researchers in China have observed giant tunnelling magnetoresistance (TMR) in a magnetic tunnel junction made from the antiferromagnet CrSBr. When cooled to a temperature of 5 K, the new structure exhibited a magnetoresistance of 47,000% – higher than commercial magnetic tunnel junctions – and it retained 50% of this TMR at 130 K, which is well above the boiling point of liquid nitrogen. According to its developers, the structure can be manufactured in a way that is compatible with the magnetron sputtering process used to make conventional spintronics devices. These qualities, together with the fact that CrSBr is stable in air, make it a promising candidate platform for spintronic devices, they say.

    Standard magnetic tunnel junctions (MTJs) consist of two ferromagnets separated by a non-magnetic barrier material. They are found in a host of spintronics technologies, including magnetic random-access memories, magnetic sensors and logic devices.

    Junctions based on A-type van der Waals (vdW) antiferromagnets such as CrSBr and other chromium halides are an attractive alternative to conventional MTJs thanks to their unusually high tunnelling magnetoresistance. They work thanks to the spin-filter effect, in which the electron spins (or magnetic moments) of the chromium atoms in CrSBr are ferromagnetically coupled to other atoms in their layer and antiferromagnetically coupled to atoms in neighbouring layers. In other words, the spins align parallel to each other in the single layers and antiparallel to each other between neighbouring layers.

    While the high tunnelling resistance of these so-called spin-filter MTJs (sf-MTJs) makes them good candidates for magnetic memories, they do have certain drawbacks. Notably, the materials they are made from tend to be unstable and prone to losing their magnetism at high temperatures. This makes it hard to use them in practical spintronic devices.

    Overcoming fabrication challenges

    In the latest study, researchers led by Guoqiang Yu of the Beijing National Laboratory for Condensed Matter Physics developed a new fabrication technique for these desirable materials. Working with colleagues in Beijing, Dongguan and Wuhan, they began by depositing a bilayer of platinum (Pt) and gold (Au) onto Si/SiO2 wafers using DC magnetron sputtering.

    Next, members of the team mechanically shaved off thin flakes of CrSBr from a sample of the bulk material and placed them onto the Si/SiO2/Pt/Au substrates. This enabled them to obtain relatively thin CrSBr flakes on Pt/Au with clean and fresh surfaces. At this point, the researchers deposited a further layer of platinum onto the CrSBr with an ultralow sputtering power of 3–5 W and a relatively high deposition pressure of around 1 Pa. Finally, they used ultraviolet lithography and Ar ion milling to fabricate several sf-MTJs from the layered structure they created.

    Promising properties

    The new sf-MTJs have many favourable characteristics. “The first is that the route we employed to make them is more compatible with those employed to fabricate conventional spintronics metallic stacks,” Yu explains. “The second is that they retain 50% of their TMR even at a temperature of 130 K, which is so far the record-high working temperature for sf-MTJs.”

    Yu points out that this record-high operating temperature is not far below CrSBr’s so-called Néel temperature, beyond which the material’s thermal energy prevents its spin moments from aligning. This relatively high operating temperature comes with an important practical advantage, Yu adds. “Compared to previous such junctions, our sf-MTJs might work in the liquid nitrogen temperature range and perhaps even at room temperature,” he observes. “And thanks to their stability in air, they are more suited to real-world applications.”

    That is not all. CrSBr is also a semiconductor, and at zero or small magnetic fields its neighbouring layers have opposite magnetic moments, which means the material itself can serve as the barrier layer at low temperatures. “In this configuration, all the electrons, spin-up or spin-down, must encounter a higher barrier height after being polarized in one spin direction or another by passing through the first layer because the next layer has an opposite spin orientation, giving rise to higher tunnelling resistance,” Yu tells Physics World. “When the applied magnetic field is large enough, all the magnetic moments are aligned with this field and, in this case, the electrons with spins parallel to the field direction encounter a lower barrier height, which results in lower tunnelling resistance.”
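
    The headline figure of 47,000% follows from the standard definition of the TMR ratio, which compares the high-resistance (zero-field, antiparallel) and low-resistance (field-aligned) states of the junction. The resistance values below are illustrative, chosen only to reproduce the reported ratio:

        # Standard TMR ratio from the low- and high-resistance states of the junction.
        R_parallel = 1.0          # field-aligned (low-resistance) state, arbitrary units
        R_antiparallel = 471.0    # zero-field antiferromagnetic (high-resistance) state

        tmr = (R_antiparallel - R_parallel) / R_parallel * 100
        print(f"TMR = {tmr:.0f}%")   # 47000%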

    The researchers, who report their work in Chinese Physics Letters, suggest that the new junctions could be used in spintronic devices based on a stack of just a few layers of CrSBr. “Our study has revealed that sf-MTJs based on 2D vdW A-type antiferromagnets have some outstanding properties,” Yu says. “We will now be trying to find a 2D vdW A-type ferromagnet with a higher Néel temperature to further improve the working temperature of the junction we have made so that it is more suited to applications.”

    A further challenge, the researchers say, will be to figure out a way to electrically manipulate the magnetization on the A-type antiferromagnet so they can construct fully functioning spintronic devices.

    The post Giant tunnelling magnetoresistance appears in an antiferromagnet appeared first on Physics World.

    ]]>
    Research update New spin-filter magnetic tunnel junction could make a promising platform for spintronic devices https://physicsworld.com/wp-content/uploads/2023/05/CrSBr.jpg newsletter1
    Sects, drugs and drunken duels: lighter moments from the history of science https://physicsworld.com/a/sects-drugs-and-drunken-duels-lighter-moments-from-the-history-of-science/ Wed, 10 May 2023 10:00:34 +0000 https://physicsworld.com/?p=106879 Kate Gardner reviews The Limits of Genius: the Surprising Stupidity of the World’s Greatest Minds by Katie Spalding

    Ada Lovelace

    René Descartes revolutionized philosophy, science and mathematics, but did you know he moved to Amsterdam in the early 17th century for the same reason young people continue to go there today? Yes, he went to smoke a lot of weed and get away from his father. And then he became a fanatical supporter of a weird religious sect that didn’t actually exist.

    Meanwhile, Ada Lovelace – mathematical prodigy and author of the first computer program in the 1840s – was a compulsive gambler. She lost so big she sold the family jewels, and when her mother-in-law bought them back, Lovelace lost them all over again. In fact, she was still in debt when she died.

    These are just two examples from The Limits of Genius: the Surprising Stupidity of the World’s Greatest Minds, written by science journalist Katie Spalding. In this informative, funny book, Spalding profiles people widely considered to have been geniuses. In each case she briefly sketches their background before digging deep into examples of when they were…not so clever.

    Spalding’s style is chatty and irreverent, with quite a bit of swearing, so that at times this book reads like a series of Twitter threads – admittedly impeccably researched and heavily footnoted Twitter threads. And though you may already be familiar with some stories – such as astronomer Tycho Brahe’s penchant for getting drunk and fighting duels – the mix of people covered means there is bound to be something new to you.

    As Spalding’s introduction admits, however, there aren’t many women in this book. This is largely because the women who have managed to achieve renown tend to be written about so little that their interesting quirks and flaws just weren’t recorded, which means the chapters about women feel lighter on detail.

    “Stupid” is also a subjective label. Spalding is quick to point out the racism, sexism, ableism and other forms of bigotry her subjects suffered from or were guilty of. But she also includes issues that could be seen as out of an individual’s control rather than “stupid”. For example, psychologist Sigmund Freud probably didn’t know cocaine was massively addictive before he started using (and prescribing) it in huge quantities. And civil rights activist and author Maya Angelou certainly couldn’t help having a murderously dangerous mother.

    On the other hand, in some cases the acts of stupidity are intrinsically linked to the scientific research being conducted. Physicist Marie Curie did carry around radioactive materials in her pockets, leading to horrible skin lesions and her early death – but it's also in part how she figured out radioactivity. The meteorologist and aeronaut James Glaisher did nearly kill himself by taking multiple balloon flights so high that he passed out (the exact height is unknown because all his instruments broke). But from these accidents he figured out details of the Earth's atmosphere that revolutionized the nascent field of meteorology.

    So The Limits of Genius might be best described as a highly entertaining whistle-stop tour of lesser-known facts and anecdotes about well-known people. With swears.

    • 2023 Hachette 352pp £22hb
    • Sold in the US with the title Edison’s Ghosts: the Untold Weirdness of History’s Greatest Geniuses (2023 Hachette 352pp $29hb)

    Ultrathin e-tattoo provides continuous heart monitoring https://physicsworld.com/a/ultrathin-e-tattoo-provides-continuous-heart-monitoring/ Wed, 10 May 2023 08:45:26 +0000 https://physicsworld.com/?p=107663 A flexible chest e-tattoo measures electrical and mechanical cardiac signals to detect early signs of heart disease

    Cardiovascular disease is the leading cause of death worldwide. Continuous cardiac monitoring could allow earlier detection of heart disease, enabling timely intervention to prevent serious cardiac complications. Traditional monitoring devices, however, are designed for clinical settings and are too heavy and power-hungry for long-term measurements of people on the move.

    A team headed up at The University of Texas at Austin aims to solve this problem with the creation of an ultrathin (200 µm) and lightweight (2.5 g) device that provides continuous cardiac monitoring outside of the clinic. The stretchable electronic tattoo, or e-tattoo, attaches to the chest via a medical dressing. It boasts ultralow power consumption (less than 3 mW), runs on a small battery with a life of more than 40 h, and wirelessly streams real-time data to a host device such as a mobile phone.
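
    As a rough sanity check on those figures (an illustration, not a calculation from the paper), a device drawing under 3 mW for more than 40 h needs only about 120 mWh of stored energy – roughly 30 mAh at a typical lithium-polymer cell voltage, which is an assumption here.

```python
# Back-of-the-envelope battery sizing implied by the quoted specifications.
power_mw = 3.0        # upper bound on power consumption quoted in the article
runtime_h = 40.0      # battery life quoted in the article
cell_voltage = 3.7    # typical Li-polymer cell voltage (assumption, not from the paper)

energy_mwh = power_mw * runtime_h
capacity_mah = energy_mwh / cell_voltage
print(f"~{energy_mwh:.0f} mWh of stored energy, i.e. roughly {capacity_mah:.0f} mAh at {cell_voltage} V")
```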

    “Most heart conditions are not very obvious. The damage is being done in the background and we don’t even know it,” explains lead author Nanshu Lu in a press statement. “If we can have continuous, mobile monitoring at home, then we can do early diagnosis and treatment, and if that can be done, 80% of heart disease can be prevented.”

    Dual-mode electro-mechanical sensing

    The e-tattoo works by measuring two key cardiac signals: the electrical activity of the heart via electrocardiography (ECG); and mechanical cardiac rhythm (subtle vibrations caused by heart contraction and blood movement) via seismocardiography (SCG). The ECG sensor interfaces with the body using bio-compatible graphite film electrodes, while the SCG is recorded by a high-resolution, low-noise accelerometer.

    Synchronization between the ECG and SCG signals enables the measurement of key cardiac time intervals – the pre-ejection period (PEP) and the left ventricular ejection time (LVET) – with high accuracy. Such time intervals are important indicators of many cardiovascular diseases, but currently can only be measured via invasive means.
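
    The paper's exact signal-processing pipeline is not described here, but one common convention defines the pre-ejection period from the onset of ventricular depolarization in the ECG to the aortic-valve-opening fiducial in the SCG, and LVET from valve opening to valve closing. The sketch below illustrates that arithmetic with invented event times.

```python
# Minimal sketch of deriving cardiac time intervals from synchronized ECG and SCG
# event times. All values are hypothetical and in seconds.
ecg_q_onset = 0.000            # onset of ventricular depolarization from the ECG
scg_aortic_open = 0.085        # aortic-valve opening fiducial from the SCG
scg_aortic_close = 0.385       # aortic-valve closing fiducial from the SCG

pep = scg_aortic_open - ecg_q_onset          # pre-ejection period (PEP)
lvet = scg_aortic_close - scg_aortic_open    # left ventricular ejection time (LVET)

print(f"PEP  = {pep * 1000:.0f} ms")
print(f"LVET = {lvet * 1000:.0f} ms")
```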

    “Those two measurements, electrical and mechanical, together can provide a much more comprehensive and complete picture of what’s happening with the heart,” says Lu. “There are many more heart characteristics that could be extracted out of the two synchronously measured signals in a non-invasive manner.”

    An e-tattoo designed for long-term wear must be comfortable, conform to the contours of the chest and stretch with the skin as the user moves. To achieve this, the team used serpentine interconnects between the sensors and electronic circuits. They found that the e-tattoo could stretch up to 20% without any damage or drop in signal quality.

    Flexible and comfortable: the e-tattoo uses stretchable interconnections to conform to the body. (Courtesy: The University of Texas at Austin)

    Performance comparisons

    To validate the quality of the acquired signals, the researchers compared data from the e-tattoo’s sensors against gold-standard clinical devices, observing that both devices captured equivalent signals and data. Next, they tested the e-tattoo on five healthy volunteers, who wore the e-tattoo while holding static poses and cycling under incremental load with breaks. For comparison, participants also wore a non-invasive cardiac output monitor (NICOM).

    During the static poses, heart rates measured by the e-tattoo and the NICOM agreed well for all participants, with a difference of 0.07±1.21 beats per minute (bpm). As participants transitioned from lying to sitting upright and then to standing, the e-tattoo recorded increased PEP and decreased LVET, as expected. This demonstrates the device’s ability to measure small changes in cardiac time intervals caused by posture variations.

    In the cycling experiment, heart rate measurements from the e-tattoo and the NICOM were again highly correlated, with a difference of 0.02±1.24 bpm. The ECG signal remained pristine during cycling, but the SCG signal was corrupted by motion artefacts. The researchers therefore examined the rest periods between each cycling segment, during which the mean difference in LVET between the e-tattoo and the NICOM was −0.44 ± 8.74 ms. This close agreement suggests that the e-tattoo can provide a viable alternative to bulky and expensive clinical monitors.
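
    Agreement figures such as 0.02±1.24 bpm are simply the mean and standard deviation of the paired differences between the two monitors. A minimal sketch of that calculation, using invented readings rather than the study's data, is shown below.

```python
import statistics

# Hypothetical paired heart-rate readings (bpm) from the e-tattoo and a reference
# monitor; the study reports the mean +/- SD of such paired differences.
e_tattoo = [62.1, 64.0, 70.3, 75.8, 81.2, 90.5]
nicom    = [62.0, 64.2, 70.1, 75.9, 81.0, 90.8]

diffs = [a - b for a, b in zip(e_tattoo, nicom)]
mean_diff = statistics.mean(diffs)
sd_diff = statistics.stdev(diffs)

print(f"mean difference = {mean_diff:+.2f} bpm, SD = {sd_diff:.2f} bpm")
```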

    Finally, the team performed a long-term wearability test with a single subject, who wore the e-tattoo for more than 24 h to demonstrate its use in day-to-day settings. The e-tattoo demonstrated good correlation with a consumer smartwatch in heart rate measurements. Manual inspection of the long-term data showed that during restful segments (such as working at a computer, pausing during a walk or sleeping), the ECG and SCG data were mostly free of motion artefacts and suitable for extracting cardiac time intervals.

    The researchers describe the e-tattoo in Advanced Electronic Materials.

    Iridium Netwerk’s medical physics team sees the ‘big picture’ on transit in vivo dosimetry https://physicsworld.com/a/iridium-netwerks-medical-physics-team-sees-the-big-picture-on-transit-in-vivo-dosimetry/ Tue, 09 May 2023 16:00:40 +0000 https://physicsworld.com/?p=107642 The SunCHECK Quality Management Platform is helping medical physicists to optimize the radiotherapy workflow by automating the analysis of large-scale clinical data sets

    Electronic portal imaging devices (EPIDs) are now widely used as dosimeters by radiation oncology clinics – both for automated pre-treatment verification (without the patient present) and for transit in vivo dosimetry (with the patient in situ on the treatment couch). In the case of the latter, the motivation is to enhance patient safety by detecting errors and deviations in dose delivery (owing to changes in patient anatomy, for example) over the course of radiation treatment, while simultaneously addressing the increased patient QA complexity of advanced modalities such as volumetric modulated arc therapy (VMAT) and stereotactic body radiotherapy (SBRT).

    Beyond the immediate upsides of transit in vivo dosimetry – chiefly, a final safety net for the healthcare team and the patient – there’s also the longer-term roadmap towards at-scale patient-specific quality assurance (PSQA). Put another way: fully automated, EPID-based transit dosimetry opens the way for medical physicists to not only detect divergence of an individual radiotherapy fraction versus the treatment plan, but to use the cumulative (and ever-growing) patient QA data set as a tool to re-evaluate and reimagine best practice within the radiation oncology workflow.

    The engine-room of patient QA

    A pioneer in this regard is Dirk Verellen, director of medical physics at Iridium Netwerk, a multi-site radiation oncology programme in the Greater Antwerp region of Belgium. Earlier this year, Verellen and his team published a granular analysis of a four-year PSQA data set spanning a large cohort of Iridium Netwerk cancer patients with diverse disease indications. Their findings are instructive, demonstrating a systematic link between trends in transit dosimetry measurements over time and adaptations made within the clinical workflow. “Our results suggest EPID in vivo dosimetry is able to assess the impact of some adaptations to the workflow and can therefore assist in continuous quality improvement of patient treatment and outcomes,” explains Verellen.

    Dirk Verellen

    At the heart of Iridium Netwerk’s PSQA work programme is the SunCHECK Quality Management Platform from Sun Nuclear (a Mirion Medical company), the US-based manufacturer of independent QA solutions for radiotherapy facilities and diagnostic imaging providers. Deployed across the Antwerp healthcare system’s four treatment centres through late 2017 and early 2018, SunCHECK comprises a single interface and database offering a unified view of patient and machine QA that’s independent from the treatment system. As such, SunCHECK’s two core software modules – SunCHECK Patient and SunCHECK Machine – are now established as the QA “engine-room” for Iridium Netwerk’s distributed medical physics service.

    That service, staffed by 19 medical physicists and seven physics assistants, is built around a unified suite of Varian treatment systems (currently nine TrueBeam machines and one Clinac iX) delivering leading-edge cancer care to around 6000 patients every year. Two treatment planning systems (TPS) deal with the heavy lifting in advance of treatment delivery: RayStation (from RaySearch Laboratories, Sweden) for stereotactic plans and Varian’s Eclipse TPS for all other types of treatment plan. Meanwhile, SunCHECK Patient encompasses all aspects of Iridium Netwerk’s patient QA, including secondary checks, phantomless pre-treatment QA and automated in vivo monitoring (with the EPID-based measurements managed by SunCHECK’s dedicated PerFRACTION software module).

    “With PerFRACTION, we’ve shown that large-scale clinical implementation of in vivo transit dosimetry is feasible, even for complex techniques,” says Evy Bossuyt, a senior medical physicist in Verellen’s team and project lead for the integration and ongoing development of SunCHECK Patient within the Iridium Netwerk radiotherapy programme. “In this way, PerFRACTION adds an extra dimension to patient QA, revealing a variety of deviations spanning errors in planning, machine problems, patient positioning and changes in patient anatomy such as weight loss, tumour shrinkage or rectal/bladder filling.”

    Alongside the enhanced error detection, the Iridium Netwerk medical physics department has seen significant streamlining over the past five years with regards to the aggregate workload and staff-time allocated to essential patient QA checks. “That’s down to SunCHECK Patient’s high degree of automation plus the in-built accessibility that comes from a web-based software platform,” notes Bossuyt.

    Data-driven insights

    That emphasis on automation and online access is, by extension, fundamental to Iridium Netwerk’s retrospective PSQA study – a longitudinal review that aggregates data from all the group’s radiotherapy patients treated between September 2018 and August 2022. In total, Bossuyt and colleagues analysed 84,100 transit in vivo dosimetry measurements, dividing them into four yearly periods. The team also classified failed measurements by pathology and into four categories of failure: technical, planning and positioning problems as well as anatomical changes in the patient.
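
    That kind of bookkeeping – one record per measurement, tagged by period, pathology and failure category – lends itself to straightforward aggregation. The sketch below uses invented toy data and pandas (chosen here purely for illustration, not necessarily the team's tooling) to show how failure rates per period and category could be tabulated.

```python
import pandas as pd

# Toy stand-in for a per-measurement log: one row per transit in vivo dosimetry
# measurement, with a pass/fail flag and a failure category (None if it passed).
records = pd.DataFrame({
    "period":    ["2018-19", "2018-19", "2019-20", "2020-21", "2021-22", "2021-22"],
    "pathology": ["breast", "prostate", "breast", "head-and-neck", "breast", "colorectal"],
    "failed":    [True, False, True, False, False, True],
    "category":  ["positioning", None, "anatomical", None, None, "anatomical"],
})

# Overall fraction of failed measurements in each yearly period
print(records.groupby("period")["failed"].mean())

# Failed measurements broken down by period and failure category
print(records[records["failed"]].groupby(["period", "category"]).size())
```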

    “We investigated if the observed trends in the in vivo dosimetry results versus time could be a result of adaptations to the clinical workflow,” explains Bossuyt. “Also the other way around: if the impact of adaptations could be monitored via the in vivo dosimetry.”

    Overall, Bossuyt and the project team found that the number of failed measurements linked to patient-related problems gradually decreased from 9.5% to 5.6% over the four-year study period (see “Further reading”). What’s more, a deep-dive into the transit dosimetry data set reveals no shortage of success stories reflecting the impact of targeted workflow changes.

    Failed measurements attributed to positioning problems, for example, decreased from 10.0% to 4.9% in boost breast-cancer patients after the introduction of extra imaging; from 9.1% to 3.9% in head-and-neck patients following education of radiation therapists on positioning of patients’ shoulders; from 6.1% to 2.8% in breast-cancer patients after introduction of ultrahypofractionated breast radiotherapy with daily online pre-treatment imaging; and from 11.2% to 4.3% in extremities following introduction of immobilization with calculated couch parameters and a surface-guided radiation therapy solution. Elsewhere, following targeted patient education from dieticians, failed measurements related to anatomical changes decreased from 10.2% to 4.0% in colorectal patients and from 6.7% to 3.3% in prostate patients.

    Automate and accumulate

    Verellen and Bossuyt, for their part, are already thinking about next steps regarding the clinical exploitation of PerFRACTION and transit dosimetry. One use-case under investigation is the automatic triggering of offline (ultimately online) adaptation for specific disease indications – in head-and-neck patients, for example, where the anatomy adjacent to the tumour often changes only slowly during the course of treatment.

    “We’re evaluating a workflow that’s able to ‘red-flag’ significant changes in patient anatomy based on transit dose measurements over the previous three or four fractions,” says Verellen. “Automation is the key to success here,” he adds. “Using the transit dose data to show that the treatment plan is gradually decreasing in quality, the organs-at-risk are reaching their safe limits, so it might be time to replan the patient. That’s where we want to go next with our patient QA platform.”
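
    A minimal sketch of the kind of rolling “red flag” Verellen describes is shown below, assuming the per-fraction transit result is summarized as a single agreement score such as a gamma pass rate; the metric, window and threshold are illustrative assumptions, not the clinic's actual rule.

```python
def needs_replan_review(pass_rates, window=3, threshold=90.0):
    """Flag a patient for replanning review if the transit-dosimetry pass rate
    has stayed below `threshold` for the last `window` fractions.
    Both the metric and the numbers are illustrative, not the clinic's rule."""
    recent = pass_rates[-window:]
    return len(recent) == window and all(rate < threshold for rate in recent)

# Example: gamma pass rates (%) for successive fractions of one hypothetical patient
history = [98.2, 97.5, 93.1, 89.4, 88.7, 87.9]
print(needs_replan_review(history))   # True: the last three fractions are below 90%
```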

    As a SunCHECK reference site, Iridium Netwerk promotes radiotherapy QA best practice using the SunCHECK Quality Management Platform. The clinical team collaborates with Sun Nuclear on its product development roadmap while serving as a regional resource for the growing European base of SunCHECK users.

    Further reading

    Evy Bossuyt et al. 2023 Assessing the impact of adaptations to the clinical workflow in radiotherapy using transit in vivo dosimetry (phiRO 25 100420)

    • For more information about SunCHECK, visit Sun Nuclear on booth 150 at the ESTRO Annual Congress in Vienna, Austria (12–15 May).

    Threshold for X-ray flashes from lightning is identified by simulations https://physicsworld.com/a/threshold-for-x-ray-flashes-from-lightning-is-identified-by-simulations/ Tue, 09 May 2023 13:26:06 +0000 https://physicsworld.com/?p=107651 Research could lead to new types of X-ray sources  

    New insights into how X-ray flashes are produced during lightning strikes have been gained by researchers in the US, France and the Czech Republic. Using computer simulations, a team led by Victor Pasko at Penn State University showed how the avalanches of electrons responsible for the flashes are triggered only when the electric fields produced by the precursor to lightning exceed a minimum threshold. This discovery could lead to the development of new techniques for producing X-rays in the lab.

    Terrestrial gamma-ray flashes (TGFs) involve the emission of high-energy photons from sources within Earth's atmosphere. Although they are called gamma-ray flashes, most of the photons are created by the acceleration of electrons rather than by nuclear processes, and are therefore X-rays.

    These X-rays are emitted in the megaelectronvolt energy range and their creation is closely associated with lightning. Although TGFs are rare and incredibly brief, they are now regularly observed by instruments that detect gamma rays from space.

    Space telescopes

    “TGFs were discovered in 1994 by NASA’s Compton Gamma Ray Observatory,” Pasko explains. “Since then, many other orbital observatories have captured these high-energy events, including NASA’s Fermi Gamma-ray Space Telescope.”

    Following their initial discovery, the origins of TGFs were linked to electrons that are liberated from air molecules by the intense electric fields of “lightning leaders”. These are channels of ionized air that form between a negatively charged cloud base and the positively charged ground. As the name suggests, the creation of lightning leaders is followed shortly by lightning discharges.

    Once these electrons are liberated in a lightning leader, they are accelerated by the electric field and collide with molecules to liberate more electrons. This process continues, very rapidly creating more and more electrons in what Pasko describes as an “electron avalanche”.

    Ionizing X-rays

    As the electrons collide with molecules, some of the energy lost by the electrons is radiated in the form of X-rays. These X-rays travel in all directions – including back along the path of the electron avalanche. As a result, the X-rays can ionize more molecules upstream from the avalanche, liberating more electrons and making the TGFs even brighter.
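
    To a first approximation, this runaway growth is exponential: the electron population multiplies by a factor of e over each characteristic avalanche length. The sketch below illustrates that scaling with invented numbers; the actual avalanche length depends on the field strength and air density and is not quoted in the article.

```python
import math

# Minimal illustration of exponential avalanche multiplication.
# The seed number and e-folding length are illustrative, not from the simulations.
n0 = 10          # seed electrons
lambda_a = 10.0  # avalanche e-folding length in metres (illustrative)

for x in (10, 50, 100):   # distance travelled along the leader field, in metres
    n = n0 * math.exp(x / lambda_a)
    print(f"after {x:>3} m: ~{n:.2e} electrons")
```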

    After this initial model was conceived in the early 2000s, researchers attempted to recreate the behaviour in computer simulations. So far, however, these simulations have not managed to closely mimic the sizes of TGFs observed in real lightning strikes.

    Pasko and colleagues believe that this lack of success is related to the relatively large size of these simulations, which usually model regions that are several kilometres across. However, this latest work suggests that TGFs typically form in highly compact regions (ranging from 10 to 100 m in size) surrounding the tips of lightning leaders. Until now, the reasons for this compactness have largely remained a mystery.

    Minimum threshold

    In their study, the researchers assumed that TGFs only form when the strength of the lightning leader's electric field exceeds a minimum threshold value. By simulating more compact regions of space, Pasko and colleagues were able to identify this threshold. What is more, the TGFs produced in this way matched real observations far more closely than those from previous simulations.
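
    The article does not quote the threshold value the team identified, but the idea can be illustrated with the runaway-avalanche threshold field commonly cited in the TGF literature – roughly 2.8 × 10^5 V/m at sea-level air density, scaling with the local density. Treat the number in the sketch below as a stand-in, not the paper's result.

```python
# Commonly quoted runaway-avalanche threshold field at sea-level air density.
# This is a literature stand-in, not the threshold identified in the new study.
E_TH_SEA_LEVEL = 2.8e5   # V/m

def above_threshold(e_field_v_per_m, relative_air_density=1.0):
    """Return True if the local field exceeds the density-scaled threshold."""
    return e_field_v_per_m > E_TH_SEA_LEVEL * relative_air_density

# Hypothetical field strengths near a leader tip at altitude (relative density ~0.6)
for field in (1.0e5, 2.0e5, 4.0e5):
    print(f"{field:.1e} V/m above threshold: {above_threshold(field, relative_air_density=0.6)}")
```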

    Pasko and colleagues hope that future simulations could mimic the TGF electron avalanche mechanism far more closely – potentially leading to new techniques for producing X-rays in the lab. “In the presence of electrodes, the same amplification mechanism and X-ray production may involve generation of runaway electrons from the cathode material,” Pasko explains.

    Ultimately, this could provide deeper insights into how X-rays can be produced through controlled electrical discharges in gases, paving the way for compact, highly efficient X-ray sources. Pasko concludes, “We anticipate a lot of new and interesting research to explore different electrode materials, as well as gas pressure regimes and compositions that would lead to enhanced X-ray production from small discharge volumes.”

    The work is described in Geophysical Research Letters.

    New transport measurement system from Oxford Instruments https://physicsworld.com/a/new-transport-measurement-system-from-oxford-instruments/ Tue, 09 May 2023 09:52:58 +0000 https://physicsworld.com/?p=107482 Oxford Instruments outline their new measurement data server

    In this video filmed at the 2023 March Meeting of the American Physical Society in Las Vegas, Matt Martin, managing director of Oxford Instruments Nanoscience, introduces a new transport measurement system, which has been produced to allow complete integration with Lake Shore products. It uses open-source technologies, such as QCoDeS and Jupyter Notebooks, so that it can be easily tailored to meet the needs of the user and can be flexibly integrated with all third-party electronics.

    As Martin explains, the user ends up with a complete data set, which they can then use for the research papers that they want to write. Using Grafana and the Jupyter Notebook tools, graphs can be created and dropped directly into an article.

    Also featured in the video is Abi Graham, a measurement scientist from the company, who talks about the firm’s cryostat control software, which lets users monitor the status of their fridge and also set and change parameters such as field and temperature. This software is linked to the Proteox control unit, with the measurement server hosting a browser-based Jupyter Notebook to give full remote access.
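
    The video does not show the scripting interface itself, but the sort of notebook-driven sweep being described might look something like the hypothetical sketch below; set_field, set_temperature and read_resistance are placeholder functions standing in for whatever drivers the measurement server exposes, not real OI:DECS or QCoDeS calls.

```python
import csv

# Placeholder stand-ins for instrument drivers; these are NOT real API calls.
def set_temperature(kelvin):
    pass

def set_field(tesla):
    pass

def read_resistance():
    return 0.0   # a real driver would return the measured value

# Sweep the magnetic field at fixed temperature and log the results to a file.
with open("sweep.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["B (T)", "T (K)", "R (ohm)"])
    set_temperature(4.2)
    for step in range(11):               # 0 to 1 T in 0.1 T steps
        b_field = step * 0.1
        set_field(b_field)
        writer.writerow([b_field, 4.2, read_resistance()])
```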

    Martin then describes how the integration with Lake Shore products was achieved. He explains that with OI:DECS and an open-framework environment, the company is attempting to create a community using the framework of Oxford Instruments software and databasing.
