Tuesday, July 29, 2025

NASA Expedition 73 Explores Human Health and Robotic Tech in Space



NASA's Expedition 73 has yielded new insights into how microgravity affects human health and demonstrated advanced robotic control systems that could shape the future of planetary exploration.

City lights glitter across the southern United States in this photograph taken from the International Space Station as it orbited into sunrise, 260 miles above Florida. In the right foreground, part of the station’s main solar arrays is visible, alongside a smaller set of roll-out solar arrays that help power the orbital outpost. Image Credit: NASA







Aboard the International Space Station (ISS), the Expedition 73 crew conducted a series of studies examining the physical toll of extended spaceflight and testing robotic systems designed for remote planetary operations. The mission focused on understanding bone and cardiovascular changes in astronauts while evaluating new tools for controlling robotic vehicles in space environments.
Why Long Missions Are So Tough on the Body

Spending months in microgravity isn’t just disorienting; it rewires the body. Bones weaken, muscles shrink, and cardiovascular function shifts. For future missions lasting a year or more, these changes could pose serious risks.


At the same time, astronauts will need to rely more on robotics. Whether it’s navigating a dusty Martian landscape or handling repairs far from home, robots can go where humans can’t—or shouldn’t. Expedition 73 combined both lines of research, using the ISS as a lab for testing countermeasures and new tech side by side.
Inside the Research: Biology Meets Engineering

Commander Takuya Onishi (JAXA) and NASA Flight Engineer Nichole Ayers led the Bone on ISS study, tracking how astronauts’ bones change over time. They collected and processed blood samples during the mission, preserving them for detailed analysis back on Earth.

Meanwhile, NASA’s Anne McClain worked on CIPHER, a suite of 14 experiments that monitor everything from bone and heart health to hormonal changes. By studying blood and urine samples, researchers are building a clearer picture of how the body adjusts to space and what it needs to stay strong.

On the tech front, Flight Engineer Jonny Kim ran a series of robotic control tests from the Columbus lab module. Using laptops, touchscreens, and VR goggles, he simulated what it would be like to control robotic rovers on a planetary surface from orbit—no spacesuit required.
What They Found—and Why It Matters

The biology side of the mission revealed key details about how bones break down in microgravity, including metabolic pathways that could be targeted to slow or stop bone loss. These insights won’t just help astronauts—they could also lead to better treatments for osteoporosis here on Earth.

CIPHER provided a deep, real-time view into how space affects multiple body systems. That kind of data is crucial for designing health plans tailored to long missions and shifting gravity environments.

The robotics tests proved that astronauts can remotely operate vehicles with precision, thanks to haptic feedback and VR tools that improve control and awareness. These systems could one day allow a crew in lunar orbit to run complex tasks on the surface without ever leaving their spacecraft.
Not Just for Space

Much of what was tested during Expedition 73 could have immediate value on Earth. Robotic control systems designed for space might improve search-and-rescue operations, disaster response, or remote industrial work in dangerous environments.

And the health research could lead to smarter monitoring and prevention strategies for people dealing with bone density issues, cardiovascular disease, or long-term immobility.
What’s Next

Expedition 73 made it clear that tackling space’s toughest problems will require both biological know-how and smarter machines. As NASA and its partners prepare for missions that will last longer and travel farther, research like this is laying the groundwork.

Next steps include refining countermeasures for muscle and bone loss, advancing real-time health tracking, and pushing robotic systems to be more responsive, more intuitive, and more capable in complex environments.


Monday, July 28, 2025

NASA Launches Mission to Study Earth’s Magnetic Shield

 




NASA’s TRACERS (Tandem Reconnection and Cusp Electrodynamics Reconnaissance Satellites) mission launched at 2:13 p.m. EDT atop a SpaceX Falcon 9 rocket at Space Launch Complex 4 East at Vandenberg Space Force Base in California.
Credit: SpaceX

NASA’s newest mission, TRACERS, soon will begin studying how Earth’s magnetic shield protects our planet from the effects of space weather. Short for Tandem Reconnection and Cusp Electrodynamics Reconnaissance Satellites, the twin TRACERS spacecraft lifted off at 11:13 a.m. PDT (2:13 p.m. EDT) Wednesday aboard a SpaceX Falcon 9 rocket from Space Launch Complex 4 East at Vandenberg Space Force Base in California.

“NASA is proud to launch TRACERS to demonstrate and expand American preeminence in space science research and technology,” said acting NASA Administrator Sean Duffy. “The TRACERS satellites will move us forward in decoding space weather and further our understanding of the connection between Earth and the Sun. This mission will yield breakthroughs that will advance our pursuit of the Moon, and subsequently, Mars.”

The twin satellites will fly one behind the other — following as closely as 10 seconds apart over the same location — and will take a record-breaking 3,000 measurements in one year to build a step-by-step picture of how magnetic reconnection changes over time.

Riding along with TRACERS aboard the Falcon 9 were NASA’s Athena EPIC (Economical Payload Integration Cost), PExT (Polylingual Experimental Terminal), and REAL (Relativistic Electron Atmospheric Loss) missions — three small satellites to demonstrate new technologies and gather scientific data. These three missions were successfully deployed, and mission controllers will work to contact them over the coming hours and days.

Ground controllers for the TRACERS mission established communications with the second of the two spacecraft at 3:43 p.m. PDT (6:43 p.m. EDT), about 3 hours after it separated from the rocket. During the next four weeks, TRACERS will undergo a commissioning period during which mission controllers will check out their instruments and systems.

Once cleared, the twin satellites will begin their 12-month prime mission to study a process called magnetic reconnection, answering key questions about how it shapes the impacts of the Sun and space weather on our daily lives.

“NASA’s heliophysics fleet helps to safeguard humanity’s home in space and understand the influence of our closest star, the Sun,” said Joe Westlake, heliophysics division director at NASA Headquarters in Washington. “By adding TRACERS to that fleet, we will gain a better understanding of those impacts right here at Earth.”

The two TRACERS spacecraft will orbit through an open region in Earth’s magnetic field near the North Pole, called the polar cusp. Here, TRACERS will investigate explosive magnetic events that happen when the Sun’s magnetic field — carried through space in a stream of solar material called the solar wind — collides with Earth’s magnetic field. This collision creates a buildup of energy that causes magnetic reconnection, when magnetic field lines snap and explosively realign, flinging away nearby particles at high speeds.

Flying through the polar cusp allows the TRACERS satellites to study the results of these magnetic explosions, measuring charged particles that race down into Earth’s atmosphere and collide with atmospheric gases — giving scientists the tools to reconstruct exactly how changes in the incoming solar wind affect how, and how quickly, energy and particles are coupled into near-Earth space.

“The successful launch of TRACERS is a tribute to many years of work by an excellent team,” said David Miles, TRACERS principal investigator at the University of Iowa. “TRACERS is set to transform our understanding of Earth’s magnetosphere. We’re excited to explore the dynamic processes driving space weather.”

Small Satellites Along for Ride

Athena EPIC is a pathfinder mission that will demonstrate NASA’s use of an innovative and configurable commercial SmallSat architecture to improve flexibility of payload designs, reduce launch schedule, and reduce overall costs in future missions, as well as the benefits of working collaboratively with federal partners. In addition to this demonstration for NASA, once the Athena EPIC satellite completes its two-week commissioning period, the mission will spend the next 12 months taking measurements of outgoing longwave radiation from Earth.

The PExT demonstration will test interoperability between commercial and government communication networks for the first time by demonstrating a wideband polylingual terminal in low Earth orbit. This terminal will use software-defined radios to jump between government and commercial networks, similar to cell phones roaming between providers on Earth. These terminals could allow future missions to switch seamlessly between networks and access new commercial services throughout their lifecycles in space.
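The roaming analogy can be sketched as a simple selection policy. The snippet below is a purely illustrative model, not PExT's actual software; the network names are hypothetical placeholders.

```python
# Illustrative sketch of network roaming for a polylingual terminal.
# Hypothetical model only; network names are placeholders, not real services.

def select_network(available, preferences):
    """Pick the highest-preference network currently reachable.

    available:   set of network names the radio can currently hear
    preferences: list of network names, best first
    """
    for network in preferences:
        if network in available:
            return network
    return None  # no link available; keep scanning

# A terminal preferring government relays but roaming onto commercial ones:
prefs = ["gov_relay", "commercial_leo", "ground_direct"]
print(select_network({"commercial_leo", "ground_direct"}, prefs))
```

When the preferred government relay drops out of view, the terminal falls back to the next reachable network, much as a phone roams between carriers.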

The REAL mission is a CubeSat that will investigate how energetic electrons are scattered out of the Van Allen radiation belts and into Earth’s atmosphere. Shaped like concentric rings high above Earth’s equator, the Van Allen belts are composed of a mix of high-energy electrons and protons that are trapped in place by Earth’s magnetic field. Studying electrons and their interactions, REAL aims to improve our understanding of these energetic particles that can damage spacecraft and imperil astronauts who pass through them.

The TRACERS mission is led by David Miles at the University of Iowa with support from the Southwest Research Institute in San Antonio, Texas. NASA’s Heliophysics Explorers Program Office at the agency’s Goddard Space Flight Center in Greenbelt, Maryland, manages the mission for the Heliophysics Division at NASA Headquarters in Washington. The University of Iowa, Southwest Research Institute, University of California, Los Angeles, and the University of California, Berkeley, all lead instruments on TRACERS.

The Athena EPIC mission is led by NASA’s Langley Research Center in Hampton, Virginia, and is a partnership between National Oceanic and Atmospheric Administration, U.S. Space Force, and NovaWurks. Athena EPIC’s launch is supported by launch integrator SEOPS. The PExT demonstration is managed by NASA’s SCaN (Space Communications and Navigation) program in partnership with Johns Hopkins Applied Physics Laboratory, with launch support by York Space Systems. The REAL project is led by Dartmouth College in Hanover, New Hampshire, and is a partnership between Johns Hopkins Applied Physics Laboratory, Montana State University, and Boston University. Sponsored by NASA’s Heliophysics Division and CubeSat Launch Initiative, it was included through launch integrator Maverick Space Systems.

NASA’s Launch Services Program, based at the agency’s Kennedy Space Center in Florida, manages the VADR (Venture-class Acquisition of Dedicated and Rideshare) contract.

To learn more about TRACERS, visit:

https://nasa.gov/tracers

-end-

Abbey Interrante / Karen Fox

Friday, July 25, 2025

Q&A with professor of computer science: What happens when AI faces the human problem of uncertainty?



by Will Kwong, University of Southern California


edited by Sadie Harley, reviewed by Andrew Zinin






Credit: Pixabay/CC0 Public Domain

In a world increasingly shaped by artificial intelligence, the question of how machines make decisions under uncertain conditions grows more urgent every day.
How do we weigh competing values when outcomes are uncertain? What constitutes reasonable choice when perfect information is unavailable? These questions, once confined to academic philosophy, are now front and center as we delegate increasingly complex decisions to AI.

A new large language model (LLM) framework developed by Willie Neiswanger, assistant professor of computer science at the USC Viterbi School of Engineering and the USC School of Advanced Computing, along with students in the computer science department, combines classical decision theory and utility theory principles to significantly enhance AI's ability to face uncertainty and tackle those complex decisions.

Neiswanger's research was spotlighted at 2025's International Conference on Learning Representations and published on the arXiv preprint server. He recently discussed how AI handles uncertainty with USC News.
What are your thoughts on the difference between artificial and human intelligence?

Neiswanger: At present, human intelligence has various strengths relative to machine intelligence. However, machine intelligence also has certain strengths relative to humans, which make it valuable.

Large language models (LLMs)—AI systems trained on vast amounts of text that can understand and generate humanlike responses—for instance, can rapidly ingest and synthesize large amounts of information from reports or other data sources, and can generate at scale by simulating many possible futures or proposing a wide range of forecasted outcomes. In our work, we aim to take advantage of the strengths of LLMs while balancing them against the strengths and judgment of humans.

Monday, July 21, 2025

It's elementary: Problem-solving AI approach tackles inverse problems used in nuclear physics and beyond

 by Matt Cahill, Thomas Jefferson National Accelerator Facility

Solving life's great mysteries often requires detective work, using observed outcomes to determine their cause. For instance, nuclear physicists at the U.S. Department of Energy's Thomas Jefferson National Accelerator Facility analyze the aftermath of particle interactions to understand the structure of the atomic nucleus.

This type of subatomic sleuthing is known as the inverse problem. It is the opposite of a forward problem, where causes are used to calculate the effects. Inverse problems arise in many descriptions of physical phenomena, and often their solution is limited by the experimental data available.
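The distinction can be illustrated with a toy model (an illustrative sketch only, not the QuantOm workflow): the forward problem computes observations from known parameters, while the inverse problem infers the parameters from noisy observations.

```python
import random

random.seed(0)

# Toy forward model: an observable effect produced by a hidden parameter theta.
def forward(theta, x):
    return theta * x ** 2

# Forward problem: known cause (theta) -> predicted effects.
theta_true = 2.5
xs = [0.5, 1.0, 1.5, 2.0]
data = [forward(theta_true, x) + random.gauss(0, 0.01) for x in xs]

# Inverse problem: infer the cause from the noisy observed effects.
# For this model, linear in theta, least squares has a closed form:
#   theta = sum(x^2 * y) / sum(x^4)
def solve_inverse(xs, data):
    num = sum((x ** 2) * y for x, y in zip(xs, data))
    den = sum(x ** 4 for x in xs)
    return num / den

theta_est = solve_inverse(xs, data)
print(abs(theta_est - theta_true) < 0.05)
```

With little noise the inversion is easy; as the text notes, real inverse problems are limited by the available data, and the answer comes back as a probability distribution rather than a single number.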

That's why scientists at Jefferson Lab and DOE's Argonne National Laboratory, as part of the QuantOm Collaboration, have led the development of an artificial intelligence (AI) technique that can reliably solve these types of puzzles on supercomputers at large scales.

"We set out to prove we could use generative AI to better understand the structure of the proton," said Jefferson Lab Data Scientist Daniel Lersch, a lead investigator on the study. "But this framework isn't bound to nuclear physics. Inverse problems can be anything."

The system is called SAGIPS (Scalable Asynchronous Generative Inverse-Problem Solver). It relies on high-performance computing and generative AI models, which can produce new text, images or videos based on data the algorithms are trained on.

SAGIPS was built for QuantOm, whose goal is to better understand fundamental nuclear physics using advanced computational methods. The SAGIPS system was recently featured in the journal Machine Learning: Science and Technology.

The problem

Inverse problems can be found in most areas of science, from astrophysics to chemistry to medical imaging. The process can be likened to reverse engineering, said Nobuo Sato, a Jefferson Lab theoretical physicist and author on the paper.

"Imagine throwing a ball into a dark hole," Sato said. "If the ball bounces back in a particular pattern, you can play around with different directions and in principle infer what kind of surface is inside."

In the SAGIPS study, the ball is an electron. It's part of a "toy" nuclear physics problem based on inclusive deep inelastic scattering, in which an electron is measured after interacting with another particle.

But the math behind inverse problems can be a little fuzzy. Solutions are represented as probabilities instead of concrete answers. Using a problem-solver like SAGIPS can add clarity and definition to those probabilities, reducing uncertainties and bringing scientists closer to an answer.

SAGIPS ran a machine learning (ML) algorithm on the Polaris supercomputer cluster at the Argonne Leadership Computing Facility, a DOE Office of Science user facility in the Advanced Scientific Computing Research (ASCR) portfolio. Using 400 processing cores, SAGIPS solved the toy problem and showed promise for solving larger problems on an even more powerful supercomputer.

"This technique scales linearly with the available computing resources, which means we could process much bigger problems on a much bigger cluster," said Malachi Schram, Jefferson Lab's head of data science and a co-author on the paper. "That's the heart of it."

Quantum Internet Meets Einstein’s Theory of Gravity in This New Ingenious Idea






A new study reveals that quantum networks can do more than secure communication: they can also test how quantum mechanics behaves in the warped spacetime described by Einstein’s theory of gravity. Credit: SciTechDaily.com

Scientists demonstrate that quantum networks of clocks offer a new way to explore the interplay between quantum mechanics and curved space-time.


Quantum networking is advancing rapidly across the globe. As a foundational technology in the emerging field of quantum science, it holds the promise of building a worldwide quantum internet. Such a system would allow for secure communication on a massive scale and make it possible to link quantum computers over vast distances. Efforts to turn this vision into reality are already well underway, both on the ground and in orbit.

In a recent breakthrough, researchers have discovered that quantum networks may have capabilities beyond secure communication. A collaborative study led by Igor Pikovski from Stevens Institute of Technology, along with Jacob Covey at the University of Illinois at Urbana-Champaign and Johannes Borregaard at Harvard University, has opened up a new scientific possibility. Their findings, published in PRX Quantum, reveal that quantum networks can be used to investigate how space-time curvature influences quantum mechanics. This marks the first experimental approach of its kind.

While quantum theory has consistently stood up to experimental testing, its behavior in the presence of gravity remains uncertain. Einstein’s theory of general relativity redefined gravity not as a force, but as a result of the bending of space and time—a concept known as curved space-time. This curvature gives rise to unusual effects, such as the slowing of time near massive objects like planets. These effects have been confirmed with high precision and have also made their way into mainstream culture through science fiction stories, including the film Interstellar.

But how does this changing flow of time affect quantum mechanics? Could quantum theory or general relativity, or both, require modification where they intertwine? While a full theory of quantum gravity remains lacking, there are suggestions that quantum principles might change in the presence of curved spacetime. However, probing this frontier was so far impossible in experiments.
A New Approach Using Quantum Networks

In a previous study published in Physical Review Research, Pikovski and Borregaard showed that the time is ripe for experiments to explore these questions using quantum networks. They showed how two distinct features of quantum theory and gravity come into play simultaneously.

In quantum theory, there exist superpositions: matter can exist not only in specific, definite states, but also in mixtures of them at the same time. Quantum computing exploits this fact to build qubits, superpositions of the bits 0 and 1. Quantum networks can then spread such qubits across large distances. But in the vicinity of Earth, these qubits would also be affected by curved space-time, because the flow of time itself changes. The researchers showed that superpositions of atomic clocks in quantum networks would pick up different time-flows in superposition, and that this opens the door to probing how quantum theory and curved space-time intertwine.
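The size of this effect can be estimated from the standard weak-field time-dilation formula. This is a back-of-the-envelope sketch, not the full protocol of the papers referenced below: two branches of a clock superposition separated by a height $\Delta h$ near Earth's surface tick at fractionally different rates,

```latex
\[
\frac{\Delta\nu}{\nu} \;\approx\; \frac{g\,\Delta h}{c^{2}} \;\approx\; 1.1\times10^{-16}\ \text{per metre},
\]
\[
\Delta\phi \;\approx\; \omega\,\frac{g\,\Delta h}{c^{2}}\,t ,
\]
```

where $\omega$ is the clock's angular frequency, $g$ Earth's surface gravity, and $t$ the elapsed time. For an optical-frequency clock ($\omega \sim 10^{15}$ rad/s) and a one-metre height difference, this works out to a relative phase of a few tenths of a radian per second, which is why networks of atomic clocks are plausible probes of this regime.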


“The interplay between quantum theory and gravity is one of the most challenging problems in physics today, but also fascinating,” says Igor Pikovski, Geoffrey S. Inman Junior Professor at Stevens Institute of Technology, and one of the authors. “Quantum networks will help us test this interplay for the first time in actual experiments.”

Teaming up with Covey’s lab, Pikovski and Borregaard then developed a concrete protocol. The team showed how quantum effects can be distributed across network nodes using so-called entangled W-states, and how interference between these entangled systems is recorded. By exploiting modern quantum capabilities, such as quantum teleportation (transferring the quantum state of a particle to another particle) and entangled Bell-pairs (maximally entangled states of two qubits) in atom arrays, a test of quantum theory on curved space-time can be achieved.
Rethinking the Role of Quantum Networks

“We assume that quantum theory holds everywhere — but we really don’t know if this is true,” says Pikovski. “It might be that gravity changes how quantum mechanics works. In fact, some theories suggest such modifications, and quantum technology will be able to test that.”

The results of Pikovski, Covey, and Borregaard demonstrate that quantum networks are not only a useful practical tool for a future quantum internet, but that they also provide unique opportunities for the study of fundamental physics that cannot be achieved with classical sensing. At the very least, a test of how quantum mechanics behaves on curved space-time is now possible.

References: “Probing curved spacetime with a distributed atomic processor clock” by Jacob P. Covey, Igor Pikovski and Johannes Borregaard, 2025, PRX Quantum.
DOI: 10.48550/arXiv.2502.12954

“Testing quantum theory on curved spacetime with quantum networks” by Johannes Borregaard and Igor Pikovski, 27 May 2025, Physical Review Research.
DOI: 10.1103/PhysRevResearch.7.023192

Wednesday, July 16, 2025

Great British Chemicals to turn industrial waste into world-first green chemistry

A major research centre that is set to position the UK as a global leader in clean technology is being launched by the universities of Sheffield, Newcastle and Nottingham.



A major new research centre that is set to position the UK as a global leader in clean technology by replacing fossil petrochemicals and recycling industrial waste using sustainable chemistry is being launched by researchers at the universities of Sheffield, Newcastle and Nottingham.

As referenced in the UK government’s recent Industrial Strategy, Great British (GB) Chemicals brings together researchers from a total of 10 universities who will work with stakeholders throughout the chemical industry to produce cleaner versions of the chemicals that we depend on in our modern lives, to reduce pollution, ensure resilience, and secure economic sustainability. 

The centre is funded by the Engineering and Physical Sciences Research Council (EPSRC) and the Natural Environment Research Council (NERC).

Led by Professor Peter Styring from the University of Sheffield, and Professors Libby Gibson from Newcastle University and Mike George from the University of Nottingham, GB Chemicals aims to accelerate the deployment of world-leading laboratory research through real-world demonstration and validation, and promote UK investment, job creation and potential export markets for the chemical industry.

Kedar Pandya, Executive Director for Strategy at EPSRC said: "This investment by EPSRC and NERC will drive a sustainable chemical industrial future, shifting the UK away from environmentally harmful processes towards circular alternatives that improve people’s lives and drive economic growth. Working closely with industry partners, this will be a systems approach that optimises the interdependencies between environmental net gain, decarbonisation, and resource efficiency.

“By embedding environmental science within manufacturing solutions, we're enabling an environmentally sound net zero transition that has a positive impact on biodiversity, ecosystems, and natural resources - aligning with priorities in the clean energy industries sector plan.”

Rupert Lewis, Deputy Executive Chair at NERC said: “I am delighted that EPSRC and NERC have been able to partner to deliver sustainability improvements in the UK’s important chemical sector. This investment highlights the innovation opportunities in securing improved competitiveness and growth for UK businesses in the net zero transition.”

The UK’s chemical sector is a major contributor to the UK’s economy with an annual turnover of £65.5 billion - the same as the aerospace, automotive and life sciences industries combined. It is also a critical part of UK industry more generally as it produces chemicals that underpin products, goods and services that people rely on every day, such as materials, pharmaceuticals and cleaning products. 

With the sector’s economic importance, the industry has the opportunity to create new revenue streams by turning waste into products, fuelling a sustainable and circular economy. This will lead to better outcomes for people and the planet. GB Chemicals will work to make this a reality using smarter chemical and biological conversion technologies to create value-added, cleaner chemicals and non-fossil feedstocks guided by social, environmental and economic analysis. It will also deliver technical and entrepreneurship skills to develop a modern workforce that will underpin this essential transition.

Professor Peter Styring, Professor of Chemical Engineering and Chemistry at the University of Sheffield and Co-Director of GB Chemicals, said: “The award of Great British Chemicals reflects a great effort by our 10-university team to put a sustainable chemicals industry at the forefront of a long-needed transition. We will take emissions from foundation industries to provide the feedstocks to drive future chemicals production. One of the things that shone through during the process was the enthusiasm of the team to succeed and to help develop a world-leading new chemicals sector. 

“There will be challenges: technical, economic and social, however we have the right team to deliver that to where there are currently gaps, and we have the flexibility in funding to bring in new partners and stakeholders. We already have combined experience in developing technologies to pre-commercial systems and we have shown that working as teams on a consolidated whole systems approach can deliver results at an accelerated pace. Co-creation with our stakeholders can drive that even more when we work together as a focused team.”

Professor Libby Gibson, Professor of Energy Materials at Newcastle University, and Co-Director of GB Chemicals, said: “I'm delighted that we have been awarded the opportunity to lead Great British Chemicals. Carbon from the petrochemical industry is embedded in almost every manufactured product. If we want to cut pollution, improve health outcomes, become more resilient, grow the economy, provide jobs and keep products affordable, we need to urgently accelerate the deployment of smarter technology that keeps carbon in use rather than digging it up and then discarding it. 

“This award enables us to unlock that opportunity, by driving innovation from the lab bench to the industrial backbone through our partnerships, pilots, data, and training. Ultimately, this will enable the community to secure investment, strengthen policy and create a lasting benefit for the planet."

Professor Michael George, Professor of Chemistry at the University of Nottingham, and Co-Director of GB Chemicals, said: “The UK chemical using industries are an under-appreciated jewel in our country’s economy. I am thrilled to be part of Great British Chemicals, helping to shift this sector towards sustainable operations. Success needs the participation of our wide range of university and industrial stakeholders focusing on the skills agenda. 

“Our centre recognises the vital role of technical professionals in research across academia and industry. This includes partnership with the UK Institute for Technical Skills and Strategy, aligning with the Technician Commitment to support visibility, opportunity and the sustainability of skills.”

Great British Chemicals is funded by the Engineering and Physical Sciences Research Council (EPSRC) and the Natural Environment Research Council (NERC), both part of UKRI. The centre will be funded at a full economic cost of £22.5 million for seven years and includes investment for Moonshot projects in novel catalysis, core funding for development to scale and flexible funding to provide an agile response to technology as it develops. There will also be an initial wave of funding for the first three years to allow the team to concentrate on emerging themes that could show quick benefits.

GB Chemicals will organise conferences, networking events, training and consultations across the initial lifespan of the centre with important themes addressed as research and development progresses. These will include Technician, Early Career and Mid-Career Researcher development, and a novel workstream on entrepreneurship. In the latter, GB Chemicals aims to educate and develop university spinout companies to move research ideas up the technology readiness ladder.

GB Chemicals will begin officially on 1 August 2025, although work has already begun to ensure the consortium hits the ground running.


Monday, July 14, 2025

Powerful new AI tool helps doctors read chest X‑rays better


 




Can artificial intelligence, or AI, potentially transform health care for the better?

Now, rising to the challenge, an Arizona State University team of researchers has built a powerful new AI tool, called Ark+, to help doctors read chest X‑rays better and improve health care outcomes.

"Ark+ is designed to be an open, reliable and ultimately useful tool in real-world health care systems," said Jianming "Jimmy" Liang, an ASU professor from the College of Health Solutions, and lead author of the study recently published in the prestigious journal Nature.

In a proof-of-concept study, the new AI tool demonstrated exceptional capability in diagnosis, from common lung diseases to rare and even emerging ones like COVID-19 or avian flu. It also outperformed proprietary software currently released by industry titans like Google and Microsoft.

"Our goal was to build a tool that not only performed well in our study but also can help democratize the technology to get it into the hands of potentially everyone. Ultimately, we want AI to help doctors save lives."

Jianming "Jimmy" Liang, ASU professor from the College of Health Solutions

More bang for the health care buck

People certainly are demanding more bang for their health care buck.

Yet, with health care now one of the leading sectors of the US economy, the US continues to rank lower than many countries on key indicators, including 49th in life expectancy, according to the World Bank. That's lower than countries like Cuba and Qatar.

Patients want to live healthier lives and have better outcomes. And doctors want to make sure to get the diagnosis right the first time for better patient care.

That's when AI enters the waiting room.

A new AI healthcare tool

Liang's research team wanted to use AI to help interpret the most common type of X-ray used in medicine, the chest X-ray.

Chest X-rays are a big help for doctors to quickly diagnose various conditions affecting the chest, including lung problems (like pneumonia, tuberculosis or Valley fever), heart issues, broken ribs and even certain gut conditions. 

But sometimes they can be hard to interpret, even for experienced physicians, and rare conditions or emerging diseases may go undiagnosed, as was seen in the first year of the COVID-19 pandemic.

The Ark+ tool makes chest X-ray interpretation easier by reducing mistakes, speeding up diagnosis and making the technology more equitable, offering a top‑quality AI health tool that is free and openly accessible worldwide.

"We believe in open science," said Liang. "So, we used public, global data sets, as we think this will develop the AI model more quickly."

Ark+ outperforms previous AI chest X-ray tools

AI works by training computer software on large data sets. In the case of the Ark+ model, that meant more than 700,000 images drawn from several publicly available chest X-ray datasets worldwide.

The key difference-maker for Ark+ was adding value and expertise from the human art of medicine: Liang's team crucially included the detailed doctors' notes compiled for every image. "You learn more knowledge from experts," said Liang.

These expert physician notes were critical to Ark+'s learning, with the model growing more accurate as it was trained on each data set.

"Ark+ is accruing and reusing knowledge," said Liang, explaining the acronym. "That's how we train it. And pretty much, we were thinking of a new way to train AI models with numerous datasets via fully supervised learning."

"Because before this, if you wanted to train a large model using multiple data sets, people usually used self-supervised learning, or you train it on the disease model, the abnormal, versus a normal X-ray."

Large companies like Google and Microsoft have been developing AI healthcare models this way.

"That means you are not using the expert labels," said Liang. "And so, that means you throw out the most valuable information from the data sets, these expert labels. We wanted AI to learn from expert knowledge, not only from the raw data."

So, in a case of David versus Goliath, Liang's small yet plucky research team, including graduate students DongAo Ma and Jiaxuan Pang, worked on the project with funding from the National Institutes of Health, National Science Foundation and seed funding from a long-standing collaboration with Mayo Clinic Arizona radiologist Michael Gotway.

ASU's new tool may be the slingshot needed to give medicine a boost, as it was shown to outperform proprietary software developed by those giants.

Other key highlights from the pilot project include:

• Foundation model for X‑rays: Ark+ is trained on many different chest X‑ray datasets from hospitals and institutions around the world. This makes it better at detecting a wide range of lung issues.
• Open and sharable: The team has released the code and pretrained models. This means other researchers can improve it or adjust it for local clinics.
• Quick learning: Ark+ can identify rare diseases even when only a few examples are available.
• Adapts to new tasks: Ark+ can also be fine‑tuned to spot new or unseen lung problems without needing full retraining.
• Resilient and fair: Ark+ works well even with uneven data and fights against biases. It can also be used in private, secure ways.

Among the most important aspects of outcompeting proprietary companies was making the Ark+ software open access and free to all.

"If we compete directly, it's unlikely that we're going to win," said Liang. "But with open-source software, we invite collaborations with many other labs. And with everyone involved, I think we are more powerful than one company."

Putting the AI into the hands of doctors

Liang also notes that the software can be adapted for any kind of medical imaging diagnosis, including CT, MRI and other imaging tools, thereby expanding its impact in the future. 

Liang and his research team hope Ark+ will become a foundation for future AI tools in medicine, enabling better care no matter where patients live.

The Ark+ team hopes to further commercialize the software for hospitals so that researchers everywhere can use and build on their work. By sharing everything openly, they want to help doctors in all countries, even in rural places without big data resources.

Their goal is to make medical AI safer, smarter and more helpful for everyone.

"By making this model fully open, we're inviting others to join us in making medical AI more fair, accurate and accessible," Liang added. "We believe this will help save lives."

That's a better pill for US healthcare that every American would like to swallow.

Thursday, July 10, 2025

How to Fine-Tune Small Language Models to Think with Reinforcement Learning

 A visual tour and from-scratch guide to train GRPO reasoning models in PyTorch

Reasoning models are currently in fashion. DeepSeek-R1, Gemini-2.5-Pro, OpenAI’s O-series models, Anthropic’s Claude, Magistral, and Qwen3 — there is a new one every month. When you ask these models a question, they go into a chain of thought before generating an answer.

A simple demonstration of what reasoning looks like. When asked a question, the Language Model (LM) generates a chain of thought first, followed by the answer. (Illustration by the Author)

I recently asked myself, "Hmm… I wonder if I should write a Reinforcement Learning loop from scratch that teaches this 'thinking' behaviour to really small models — like only 135 million parameters." It should be easy, right?

Well, it wasn’t.

Small models simply do not have the world knowledge that large models do. This leaves sub-1B-parameter models lacking the "common sense" to easily reason through complex logical tasks. Therefore, you cannot just rely on compute to train them to reason.

You need additional tricks up your sleeve.

In this article, I won’t just cover tricks though. I will cover the major ideas behind training reasoning behaviours into language models, share some simple code snippets, and some practical tips to fine-tune Small Language Models (SLMs) with RL.

This article is divided into 5 sections:

  1. Intro to RLVR (Reinforcement Learning with Verifiable Rewards) and why it is uber cool
  2. A visual overview of the GRPO algorithm and the clipped surrogate PPO loss
  3. A code walkthrough
  4. Supervised fine-tuning and practical tips for training reasoning models
  5. Results

Unless otherwise mentioned, all images used in this article are illustrations produced by the author.

At the end of this article, I will link to the 50-minute companion YouTube video of this article. If you have any queries, that video likely has the answers/clarification you need. You can also reach out to me on X (@neural_avb).

1. Reinforcement Learning with Verifiable Rewards (RLVR)

Before diving into specific challenges with Small models, let’s first introduce some terms.

Group Relative Policy Optimization, or GRPO, is a (rather new) Reinforcement Learning (RL) technique that researchers are using to fine-tune Large Language Models (LLMs) on logical and analytical tasks. Since its inception, a new term has been circulating in the LLM research space: RLVR, or Reinforcement Learning with Verifiable Rewards.
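To give a concrete flavor of the "group relative" part, here is a minimal plain-Python sketch (the article's actual training code uses PyTorch) of how GRPO scores each sampled completion against its own group rather than against a learned value network: several completions are sampled for the same prompt, and each one's advantage is its reward standardized within that group. The function name and epsilon value are illustrative choices, not from the article.

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """Standardize each reward against its own group's statistics.

    In GRPO, a "group" is a set of completions sampled for the same
    prompt. Each completion's advantage is (reward - group mean) divided
    by the group standard deviation, so no separate value network is
    needed. `eps` guards against division by zero when all rewards tie.
    """
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four sampled completions for one prompt, scored 1.0 (correct) or 0.0:
advantages = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
```

Note that the advantages sum to (approximately) zero within each group: correct completions are pushed up exactly as much as incorrect ones are pushed down.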

To understand what makes RLVR unique, it’s helpful to contrast it with the most common application of RL in language models: RLHF (Reinforcement Learning with Human Feedback). In RLHF, an RL module is trained to maximize scores from a separate reward model, which acts as a proxy for human preferences. This reward model is trained on a dataset where humans have ranked or rated different model responses.

In other words, RLHF is trained so LLMs can output responses that are more aligned with human preferences. It tries to make models follow instructions more closely.
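That proxy reward model is commonly trained with a pairwise preference loss over ranked response pairs; a minimal sketch under the standard Bradley-Terry formulation (the scalar scores here stand in for outputs of a learned network, which the snippet does not include):

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise (Bradley-Terry) loss for training an RLHF reward model.

    Given reward-model scores for two responses to the same prompt, the
    loss is -log(sigmoid(r_chosen - r_rejected)): it is small when the
    human-preferred ("chosen") response already scores higher, and large
    when the ranking is violated.
    """
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

low = preference_loss(2.0, 0.0)   # chosen scored higher: small loss
high = preference_loss(0.0, 2.0)  # ranking violated: large loss
```

Minimizing this loss over many human-ranked pairs is what turns the reward model into the preference proxy that RLHF then maximizes.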

RLVR tries to solve a different problem. RLVR teaches a model to be verifiably correct, often by learning to generate its own chain of thought.

Where RLHF had a subjective reward model, RLVR uses an objective verifier. The core idea is to provide rewards based on whether an answer is demonstrably correct, not on a prediction of what a human might prefer.
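In the simplest cases, such a verifier can be a plain string check against a known answer. A toy sketch, assuming a math-style task where the model is prompted to put its final answer in \boxed{}; the function name and answer format are illustrative conventions, not from the article (real verifiers parse more formats and normalize numbers):

```python
import re

def verifiable_reward(completion: str, gold_answer: str) -> float:
    """Reward 1.0 iff the completion's final boxed answer matches gold.

    Extracts the content of the last \\boxed{...} in the generated text
    and string-compares it to the known correct answer. Returns 0.0 if
    no boxed answer is present or it does not match.
    """
    matches = re.findall(r"\\boxed\{([^}]*)\}", completion)
    if not matches:
        return 0.0
    return 1.0 if matches[-1].strip() == gold_answer.strip() else 0.0

r_correct = verifiable_reward("... so the answer is \\boxed{42}", "42")
r_wrong = verifiable_reward("I think it's \\boxed{41}", "42")
```

Because the reward is computed from the task itself rather than predicted by a model, it cannot be "gamed" the way a learned preference proxy can, which is much of RLVR's appeal.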


Wednesday, July 9, 2025

India’s new metallurgical coke import restrictions are a blow to industry and trade

 Presented as a step toward self-reliance, the measure to impose temporary restrictions on the import of low-ash metallurgical coke has drawn considerable scrutiny, sparking concerns over its broader economic and industrial ramifications.

On December 26, 2024, the Union Government announced a six-month restriction on the import of low-ash metallurgical coke (met coke), a crucial ingredient in steel production. This policy, effective January 1, 2025, limits imports to 0.8 million tonnes per month, aiming to boost domestic production and reduce dependence on foreign suppliers. While the move aligns with the government’s broader push for “Atmanirbhar Bharat” (self-reliant India), its implications for the steel industry, trade relations, and environmental sustainability raise serious concerns.

Metallurgical coke is indispensable for steelmaking, serving as a reducing agent in blast furnaces. India, the world’s second-largest steel producer, consumed over 136 million tonnes of steel in 2023-24 alone, with demand expected to grow by 7-8% annually. However, the country relies on imports for nearly 50% of its met coke requirements, primarily from Australia and China, due to insufficient domestic production and higher ash content in locally available coal. The new import cap of 0.8 million tonnes per month represents a sharp reduction from the 1.2 million tonnes imported monthly in 2024. This sudden limitation risks creating supply bottlenecks, driving up costs, and jeopardizing production schedules for steel manufacturers.

India’s steel industry contributes about 2% to GDP, employs over 600,000 workers, and serves as a critical input for infrastructure, construction, and automobile sectors. With domestic met coke production constrained by quality and capacity issues, the restriction could lead to a 30-40% rise in raw material costs, according to industry estimates. The cascading effects of these cost increases will likely impact downstream industries. For instance, infrastructure projects, already reeling under high financing costs, may face delays due to escalating steel prices. The construction sector, which accounts for nearly 60% of steel consumption, may witness reduced activity, further slowing economic growth.

This protectionist measure also risks straining India’s trade relations. Countries like Australia, a significant supplier of met coke, might view the restriction as discriminatory. India imported approximately 71 million tonnes of metallurgical coal in 2023, with 51.5% of that total, or about 13.21 million tonnes, coming from Australia for the period between April and August 2023. A sudden reduction in import volumes could provoke retaliatory trade measures, complicating India’s export ambitions in sectors like textiles and pharmaceuticals. Moreover, such unilateral trade restrictions might invite scrutiny under World Trade Organization (WTO) norms. India has been criticized in the past for imposing sudden tariff hikes and quantitative restrictions, actions deemed inconsistent with global trade agreements.

While promoting domestic met coke production may align with self-reliance goals, it poses environmental challenges. The production process is energy-intensive and emits large quantities of carbon dioxide. India’s industrial sector accounts for around 22% of total greenhouse gas emissions, with the iron and steel sector being a significant contributor. The iron and steel sector is the single-largest consumer of energy (approximately 36%) and the contributor to greenhouse gas emissions (approximately 37%) within the Indian manufacturing sector. Scaling up domestic production to meet the shortfall created by import restrictions would likely worsen air and water pollution, particularly in coal-mining regions like Jharkhand and Chhattisgarh. Despite pledges at COP28 to achieve net-zero emissions by 2070, such policies risk undermining India’s global climate commitments.

The met coke import cap is symptomatic of a broader trend of sudden and unilateral policy announcements by the Union Government. In recent years, measures such as the abrupt ban on wheat exports in 2022 and the introduction of export duties on steel in 2023 have caused significant disruptions in domestic and global markets. These decisions, often made without adequate consultation with stakeholders, reflect a top-down approach to policy making. The lack of transparency and predictability in trade and industrial policies undermines investor confidence and risks derailing India’s ambitions of becoming a $5 trillion economy.

For India to achieve self-reliance without compromising economic stability or global trade relations, a more planned and phased approach is essential. Gradually reducing import quotas, rather than imposing sudden caps, would allow the steel industry sufficient time to adjust while enabling domestic met coke producers to scale up their capacity. A hasty implementation risks supply shocks that could destabilize the entire value chain. Concurrently, incentivizing cleaner and more efficient technologies for domestic coke production should be a priority. Investments in advanced heat recovery systems, for instance, could reduce emissions by 20-30%, ensuring that industrial growth aligns with environmental sustainability goals.

Equally important is stakeholder engagement, where policy formulation involves meaningful consultations with industry representatives, trade partners, and environmental experts. Such an inclusive approach ensures that policies remain practical, equitable, and broadly supported. Additionally, enhancing the quality of domestic coal through investments in beneficiation technologies could make local met coke production more viable by lowering ash content. This strategy would reduce reliance on imports without resorting to restrictive trade measures. Finally, India must strengthen trade diplomacy by advocating for balanced policies that safeguard domestic industries while honoring global trade norms.

Collaborative initiatives with key trade partners would help mitigate potential adverse impacts and foster long-term economic stability.

The government’s met coke import restriction risks destabilizing the steel industry and creating broader economic and environmental challenges. Policymaking in a complex and interconnected economy like India requires careful balancing of competing priorities – economic growth, environmental sustainability, and global trade relations. Without a shift toward more consultative, phased policymaking, India risks not only alienating its trade partners but also undermining its own developmental goals in the long run.

Scientists from Russia and Vietnam discover new antimicrobial compounds in marine sponges

  Scientists from the G. B. Elyakov Pacific Institute of Bioorganic Chemistry of the Far Eastern Branch of the Russian Academy of Sciences, ...