An anonymous reader quotes a report from the Wall Street Journal: Rivian is joining with Redwood Materials to reuse EV batteries for energy storage -- the largest repurposed-battery energy storage system for an automotive manufacturer in the U.S., executives told The Wall Street Journal. Redwood Materials is a battery-recycling firm started by Tesla co-founder JB Straubel. Once completed later this year, Rivian's plant in Normal, Ill., will draw electricity from more than 100 Rivian EV batteries in an area the size of a small parking lot. It will reduce Rivian's dependence on the power grid during peak demand hours. "It saves Rivian money on what it takes to run the plant. It reduces the demand on the grid, which is great," Rivian Chief Executive Officer RJ Scaringe said in an interview.

In the Rivian project, the batteries will come from either its test vehicles or from vehicles that have viable batteries but can no longer drive. Those batteries get sent off to Redwood, which integrates them into power storage units. Both companies declined to specify the cost of this project. The setup is expected to initially provide 10 megawatt-hours of energy, equivalent to about 1,000 home-energy battery storage units linked together, Redwood's Straubel said. "These batteries are already built," he said. "We need to integrate them and connect them together, but that can happen quite fast. They don't have to get imported from some other place." [...] Scaringe said that while branching into battery energy storage systems is "not a focus for us as a business right now," Rivian hopes to do more at its sites with Redwood. "There's hopefully a lot more, and there's going to be a lot of batteries we'll have access to," he said.

New 3D map of Universe could solve dark energy mystery
Visualization shows how DESI built its 3D map of the Universe. Earth is at the center of the wedges, and every point is a galaxy. Credit: DESI/KPNO/NOIRLab/NSF/AURA/R. Proctor

In a significant milestone, the Dark Energy Spectroscopic Instrument (DESI) has completed its 3D map of the Universe—the highest resolution of any such map yet achieved—on schedule and with more data than expected, the collaboration announced today. Analyses of DESI data from earlier runs have already produced exciting hints of new physics—namely that the Universe's dark energy, rather than being constant, might vary over time. The latest data must still be analyzed but could help definitively confirm or disprove those hints within the next couple of years.

"DESI's five-year survey has been spectacularly successful," DESI director Michael Levi of Berkeley Lab said. "The instrument performed better than anticipated. The results have been incredibly exciting. And the size and scope of the map and how quickly we've been able to execute is phenomenal. We're going to celebrate completion of the original survey and then get started on the work of churning through the data, because we're all curious about what new surprises are waiting for us."

As previously reported, Albert Einstein’s cosmological constant (lambda) implied the existence of a repulsive form of gravity. (For a more in-depth discussion of the history of the cosmological constant and its significance for dark energy, see our 2024 story.) Quantum physics holds that even the emptiest vacuum is teeming with energy in the form of “virtual” particles that wink in and out of existence, flying apart and coming together in an intricate quantum dance. This roiling sea of virtual particles could give rise to dark energy, giving the Universe a little extra push so that it can continue accelerating. The problem is that the quantum vacuum contains too much energy: roughly 10¹²⁰ times too much.

Read full article

A 63-year-old man in Norway appears to be cured of HIV after receiving a stem cell transplant from his brother, who turned out to have a rare mutation that makes immune cells resistant to HIV. "Four years after the transplant, and two years after the man stopped antiretroviral therapy, he still appears to be free of the infection," reports Gizmodo. From the report: According to the report, the man was first diagnosed with myelodysplastic syndrome, a type of cancer that weakens blood cell production from bone marrow, in 2018. Though he seemed to initially respond to treatment, the cancer returned after two years, and doctors decided to perform a stem cell transplant. Because the man also had HIV (diagnosed in 2006), the doctors were hoping to treat both conditions at once, though they knew their chances were low. Most of these cases have involved the use of stem cells taken from people with two copies of a particular mutation in their CCR5 gene, which regulates the CCR5 receptor on white blood cells. This mutation, named CCR5-delta 32, makes immune cells naturally resistant to infection from strains of HIV-1 (the most common type of the virus). However, only about 1% of the population carries two copies of the mutation.

After initial screening failed to find someone who both possessed the mutation and had compatible bone marrow, the doctors decided to move ahead with the man's brother, who was already known to have compatible bone marrow. But to everyone's surprise, testing on the day of the transplant showed that the brother also had the mutation. Though the man did experience some complications from the procedure, his body successfully started to produce new blood cells with the mutation. The doctors decided to take him off antiretroviral medication two years after the transplant. And in the two years since then, regular follow-up tests have failed to show any signs of the virus in his system. [...] According to AFP, there have only been roughly 10 cases worldwide involving an HIV cure through stem cell transplantation. This is the first to involve a family donor.

What’s the deal with Alzheimer’s disease and amyloid?

At the end of last month, a scientific journal pulled a research paper on Alzheimer's disease.

The retraction came from Neurobiology of Aging, which removed a 2011 paper claiming to show that a version of a protein called amyloid-β was responsible for memory loss in Alzheimer's disease. On its own, that might not seem notable; bad papers can make it through peer review and are only caught after publication.

But this wasn't an isolated case. Over the past few years, multiple studies arguing that amyloid-β is the central driver of Alzheimer's disease have been retracted. Some scientists have even been indicted for fraud over the issue. All the while, none of the drugs targeting this protein and its pathway have had any real clinical effect.

Read full article

New Adobe Premiere Color Grading Mode Accelerated on NVIDIA GPUs

The NAB Show 2026 trade show, running April 18-22 in Las Vegas, is set to showcase a wave of new features and optimizations for top video editing applications. Bringing together over 60,000 content professionals from across the broadcast, media, and entertainment industries, the event highlights how video editors, livestreamers and professional creators are exploring new tools, accelerated by NVIDIA RTX technology, to enhance and streamline their creative workflows.

At the show, Adobe is announcing a new Adobe Premiere Color Mode in beta.

Designed to function as a dedicated grading environment nested directly within Premiere, it offers a clean, responsive interface that lets editors stay in their creative flow rather than relying on external tools for color correction. Tapping into GPU acceleration on NVIDIA GeForce RTX- and NVIDIA RTX PRO-equipped systems, this streamlined workflow, operating in 32-bit color depth for the first time, delivers significantly faster performance and higher image quality.

NVIDIA also launched a new update to NVIDIA Project G-Assist — an experimental AI assistant that helps tune, control and optimize GeForce RTX systems.

Color Meets Compute

Premiere’s Color Mode is a clean, responsive interface within Adobe Premiere that lets editors color grade native video. Every element is designed to guide editors through the grading process without distractions. A large program monitor anchors the experience, providing immediate visual feedback as adjustments are made to enable faster decision-making and more precise control.

A clip grid view allows editors to visualize progression across shots in a sequence. This makes it easier to maintain consistency across scenes and ensure a cohesive look throughout a project.

Controls are organized into focused modules, each tailored to a specific aspect of color grading. Multiple modules can be active simultaneously, giving editors flexibility while maintaining clarity. Each control features a unique heads-up display (HUD), providing contextual guidance without cluttering the interface.

Color grading is one of the most computationally intensive tasks in post-production. Every adjustment — bidirectional controls, multi-zone tonal shaping and stacked color operations — runs on NVIDIA GPUs, accelerating playback, iteration and visual feedback.

Editors can work with up to six luminance adjustment zones, moving beyond traditional highlights, midtones and shadows models. This allows for more nuanced tonal control and finer adjustments across the image.

Visual scopes are context-aware, dynamically adapting based on the selected tool. HUD overlays provide visual cues directly within the scopes, helping editors understand how their adjustments affect the image without needing to interpret complex graphs.

The entire system now operates in 32-bit color precision, delivering maximum color fidelity and preventing unwanted clipping. Editors retain full control, with the ability to clip colors intentionally when needed for creative effect. Color styles can also be applied flexibly, at the sequence, clip, reel or custom group level, making it easier to manage looks across complex projects.
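
The practical difference between an 8-bit and a 32-bit float pipeline is easy to demonstrate. Below is a minimal Python sketch (illustrative only, with made-up values; this is not Adobe's actual processing) showing how an integer pipeline permanently clips over-range values, while a float pipeline preserves them for later adjustments.

```python
import numpy as np

# Illustrative only: made-up values, not Adobe's actual pipeline.
pixel = np.array([0.9, 0.5, 0.2], dtype=np.float32)  # linear RGB

# A grade that pushes the red channel past 1.0 (e.g., +50% exposure).
graded = pixel * 1.5  # [1.35, 0.75, 0.30]

# 8-bit pipeline: out-of-range values are clamped immediately and lost.
eight_bit = np.clip(np.round(graded * 255), 0, 255).astype(np.uint8)
undone_8bit = eight_bit.astype(np.float32) / 255 / 1.5  # later, undo the exposure
print(undone_8bit[0])  # ~0.667, not 0.9: the highlight detail is gone

# 32-bit float pipeline: over-range values survive intermediate steps,
# so a later adjustment recovers them (up to float rounding).
undone_float = graded / 1.5
print(undone_float[0])  # 0.9 again
```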

Download the Adobe Premiere (beta) to get started with Color Mode.

Project G-Assist: Enhanced Recommendations and Controls

The NVIDIA Project G-Assist on-device AI assistant helps users get the most out of their hardware. Today’s update adds an advanced detection system for gaming settings, as well as an enhanced knowledge system, enabling G-Assist to deliver higher accuracy when providing advice or adjusting settings for esports and AAA gaming.

The assistant can also now control more settings across systems. It can configure advanced RTX features from the NVIDIA App, including NVIDIA DLSS Overrides, Smooth Motion, RTX HDR, Digital Vibrance and encoder settings.

Download Project G-Assist v0.2.1 from the NVIDIA App.

#ICYMI: The Latest Updates for RTX AI PCs

📹 Learn how visual effects shop Corridor Crew’s Niko Pueringer built his own green screen key tool, powered by NVIDIA RTX GPUs, at NAB. Stop by the Puget Systems booth on Monday, April 20, at 1 p.m. PT for a special presentation, or tune in on NVIDIA Studio’s YouTube channel on Tuesday, April 21, at 12 p.m. PT to watch the full session.

🖼 Also at NAB, join NVIDIA’s Sabour Amirazodi for a special presentation at the ASUS booth on Tuesday, April 21, at 11 a.m. PT. Amirazodi will showcase how guiding generative AI can produce creative outputs like storyboards or entire movie trailers — based on a single image input.

📽 Check out content creator Gavin Herman’s Studio Session, “How to Edit Professional Talking Head Videos in DaVinci Resolve,” on the NVIDIA Studio YouTube channel. Generative workflow specialists can watch this two-hour, instructor-led workshop on how to use NVIDIA GPU acceleration for ComfyUI.

🦞 LM Studio is now an official OpenClaw provider. OpenClaw can now run local models through LM Studio on NVIDIA GPUs, unlocking faster on-device performance.

🦥 Unsloth and NVIDIA have teamed up to eliminate hidden bottlenecks that slow down fine-tuning on NVIDIA GPUs, improving fine-tuning performance by 15%.

Google’s Gemma 4 family of omni-capable models are built for local AI across a wide range of devices. Google and NVIDIA have optimized Gemma 4 for NVIDIA GPUs, enabling efficient performance on NVIDIA RTX-powered PCs and workstations, NVIDIA DGX Spark personal AI supercomputers and NVIDIA Jetson Orin Nano edge AI modules.

📽 Check out this NVIDIA GTC session on how developers can build, run and optimize AI agents locally on NVIDIA GPUs, covering everything from quantization to backends like Ollama and applications like OpenClaw and ComfyUI.

👀 Wondershare Filmora has added a new feature for Eye Contact Correction based on the NVIDIA Broadcast Eye Contact feature. The feature runs in the cloud on NVIDIA GPUs and is designed to refine the gaze of subjects in post production for a more natural, confident and camera-ready look, delivering polished, professional videos in seconds.

Filmora’s AI Eye Contact Correction feature powered in the cloud by NVIDIA GPUs.

Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.

Follow NVIDIA Workstation on LinkedIn and X.

You don’t need to understand any fancy editing terms — just describe what changes you want to make. | Image: Adobe

Adobe is fully embracing AI tools that enable creators to edit their work using descriptive prompts, instead of manually using specific Creative Cloud apps. The software giant's new Firefly AI Assistant allows users to describe what they want to change by typing their own words into a conversational interface.

Adobe says this marks a "fundamental shift in how creative work is done" by removing skill barriers and laborious tasks, while still giving creatives full control over their work. It'll be "available soon" on the Firefly AI studio platform according to Adobe, though no specific launch date was provided in the announcement.

The unifi …

Read the full story at The Verge.

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

NASA is building the first nuclear reactor-powered interplanetary spacecraft. How will it work?

Just before Artemis II began its historic slingshot around the moon, NASA revealed an even grander space travel plan. By the end of 2028, the agency aims to fly a nuclear reactor-powered interplanetary spacecraft to Mars.

A successful mission would herald a new era in spaceflight—and might just give the US the edge in the race against China. But the project remains shrouded in mystery.

MIT Technology Review picked the brains of nuclear power and propulsion experts to find out how the nuclear-powered spacecraft might work. Here’s what we discovered.

—Robin George Andrews

This story is part of MIT Technology Review Explains, our series untangling the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

Coming soon: our 10 Things That Matter in AI Right Now

Each year, we compile our 10 Breakthrough Technologies list, featuring our educated predictions for which technologies will change the world. Our 2026 list, however, was harder to wrangle than normal. Why? We had so many worthy AI candidates we couldn’t fit them all in!

That got us thinking: what if we made an entirely new list all about AI? Before we knew it, we had the beginnings of what we’re calling 10 Things That Matter in AI Right Now.

On April 21, we’ll unveil the list on stage at our signature AI conference, EmTech AI, and then publish it online later that day. If you want to be among the first to see it, join us at EmTech AI or become a subscriber to livestream the announcement.

Find out more about the list’s methodology and aims here.

—Niall Firth & Amy Nordrum

MIT Technology Review Narrated: this company is developing gene therapies for muscle growth, erectile dysfunction, and “radical longevity”

In January, a handful of volunteers were injected with two experimental gene therapies as part of an unusual clinical trial. Its long-term goal? To achieve radical human life extension.

The therapies are designed to support muscle growth. The company behind them, Unlimited Bio, also plans to trial similar therapies in the scalp (for baldness) and penis (for erectile dysfunction). But some experts are concerned about the plans.

Find out why the trial has divided opinion.

—Jessica Hamzelou

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we publish each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Google, Microsoft, and Meta track users even when they opt out
According to an independent audit, they may be racking up billions in fines. (404 Media)
+ How our digital devices put our privacy at risk. (Ars Technica)
+ Privacy’s next frontier is AI “memories.” (MIT Technology Review)

2 OpenAI has a new cybersecurity model—and strategy
GPT-5.4-Cyber is designed specifically for defensive cybersecurity work. (Reuters $)
+ OpenAI has joined Anthropic in focusing on cybersecurity recently. (Wired $)
+ Like Anthropic’s, its latest model is only available to verified testers. (NYT $)
+ AI is already making online crimes easier. It could get much worse. (MIT Technology Review)

3 Amazon is buying satellite firm Globalstar in a bid to rival Starlink
The $11.6 billion deal targets the lucrative satellite internet market. (WSJ $)
+ Apple has chosen Amazon satellites for iPhone. (Ars Technica)

4 What it’s like to live with an experimental brain implant
Early BCI users explain what the technology gives—and takes. (IEEE)
+ A patient with Neuralink got a boost from generative AI. (MIT Technology Review)

5 Dozens of AI disease-prediction models were trained on dubious data
A few might already have been used on patients. (Nature)

6 Uber is breaking from its gig economy model to avoid robotaxi disruption
It’s spending $10 billion to buy thousands of autonomous vehicles. (FT $)

7 xAI is being sued over data center pollution
Musk’s AI venture stands accused by the NAACP of violating the Clean Air Act. (Engadget)
+ No one wants a data center in their backyard. (MIT Technology Review)

8 Apple could win the AI race without running
It may reap the rewards of everyone else’s spending. (Axios)

9 How 4chan set a precedent for AI’s reasoning abilities
The notorious forum tested a feature called “chain of thought.” (The Atlantic $)

10 The surprising emotional toll of wearing Meta’s AI sunglasses
Their shortcomings are making users sad. (NYT $)

Quote of the day

“Everything got a whole lot worse once they rolled out AI.”

—A copywriter tells the Guardian that they’re drowning in “workslop” — AI-generated work that seems polished but has major flaws

One More Thing

blocks of frozen carrots and peas
GETTY IMAGES

How refrigeration ruined fresh food

Bananas may not be chilled in the grocery store, but they’re the ultimate refrigerated fruit. It’s only thanks to a network of thermal control that they’ve become a global commodity. And that salad bag on the shelf? It’s not just a bag but a highly engineered respiratory apparatus.

According to Nicola Twilley—a contributor to the New Yorker and cohost of the podcast Gastropod—refrigeration has wrecked our food system. Thankfully, there are promising alternative preservation methods.

Read the full story on her research.

—Allison Arieff

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

+ Spotify only shows 10 popular songs per artist. This tool lists them all.
+ These GIF animations are mesmerizing loops of nostalgia.
+ This site beautifully visualizes Curiosity’s 13 years on Mars.
+ A retro-futurist designer has turned a NES console into a working synthesizer.

Just as world gasoline prices start to soar, Vietnam, Thailand, Malaysia, and Indonesia now manufacture enough solar panels to cheaply power 20 million EVs, six times their annual new car sales.

These 4 countries now manufacture approximately 40 GW of solar panels per year. That's enough to power around 20 million EVs (maths below). That's far in excess of new car sales in those 4 countries, which come in at 3-4 million cars per year.

The oil shock from the Middle East War has not hit the world economy yet. Pre-war deliveries & reserves are still keeping prices artificially low, but that won't last much longer. $6/gallon gasoline is not far off. This is an acute economic crisis for SE Asia. There's an alternative, and it won't take long for more and more people to start joining the dots. If you make cheap power yourself for EVs - why stick with gas-cars?

A prediction? By year's end, new gas-car sales will be plummeting in country after country. China won't be able to keep up with the export demand for new EVs.

Southeast Asia’s Solar Panel Boom: It’s not just about China. The world is now benefiting from historically cheap solar panels made in Vietnam, Thailand, and Indonesia.

Maths - 40 GW × 20% capacity factor × 24 hours/day × 365 days/year = 70,080,000 MWh/year (70.08 TWh/year). Annual energy per EV: 12,000 miles × 0.3 kWh/mile = 3,600 kWh (3.6 MWh) per year. 70,080,000 MWh / 3.6 MWh per EV ≈ 19,466,667 EVs, or roughly 19.5 million.
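
The same back-of-envelope can be reproduced in a few lines of Python; every input below is the post's own stated assumption.

```python
# Reproducing the post's maths; all inputs are its stated assumptions.
gw_panels = 40             # GW of panels manufactured per year
capacity_factor = 0.20     # average output as a share of nameplate capacity
hours_per_year = 24 * 365  # 8,760

annual_kwh = gw_panels * 1e6 * capacity_factor * hours_per_year  # GW -> kW -> kWh
kwh_per_ev = 12_000 * 0.3  # 12,000 miles/year at 0.3 kWh/mile = 3,600 kWh

print(annual_kwh / 1e9)         # 70.08 TWh per year
print(annual_kwh / kwh_per_ev)  # ~19,466,667 EVs, i.e. roughly 19.5 million
```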

Amazon is buying satellite communications company Globalstar for $11.57 billion to expand its Leo satellite-internet network and compete more directly with SpaceX's Starlink. The deal also includes a partnership with Apple to support satellite connectivity for iPhones and Apple Watches, with Amazon planning voice, data, and messaging services starting in 2028. The New York Times reports: Leo was Amazon's move to enter the market for beaming high-speed internet to the ground from orbit. That is an arena dominated by Elon Musk's SpaceX, which operates the Starlink satellite-internet service. Starlink, which has thousands of satellites in orbit, already serves several million customers around the world. This month, SpaceX filed to go public in what is shaping up to be one of the largest-ever initial public offerings. Mr. Musk has valued SpaceX -- which has landed contracts with federal agencies such as NASA and the Department of Defense -- at more than $1 trillion. Other companies are racing to catch up to what Mr. Musk has built for space.

Globalstar, founded in 1991, is a Louisiana-based global telecommunications company. It operates networks of low-Earth orbiting satellites to provide internet connectivity to customers. Paul Jacobs, Globalstar's chief executive, said in a statement that together, the two companies "will advance innovations in digital connectivity."

The practice of privacy-led user experience (UX) is a design philosophy that treats transparency around data collection and usage as an integral part of the customer relationship. An undertapped opportunity in digital marketing, privacy-led UX treats user consent not as a tick-box compliance exercise, but rather as the first overture in an ongoing customer relationship. For the companies that get it right, the payoff can bring something more intangible, valuable, and durable than simple consent rates: consumer trust.

The opportunities of privacy-led UX have only recently come into focus. Adelina Peltea, the chief marketing officer at Usercentrics, has seen enterprise sentiment shift: “Even just a few years ago, this space was viewed more as a trade-off between growth and compliance,” she says. “But as the market has matured, there’s been a greater focus on how to tie well-designed privacy experiences to business growth.”

And it turns out that well-designed, value-forward consent experiences routinely outperform initial estimates.

Touchpoints for privacy-led UX often include consent management platforms, terms and conditions, privacy policies, data subject access request (DSAR) tools, and, increasingly, AI data use disclosures.

This report examines how data transparency builds trust with customers; how this, in turn, can support business performance; and how organizations can maintain this trust even as AI systems add complexity to consent processes.

Key findings include the following:

  • Privacy is evolving from a one-time consent transaction into an ongoing data relationship. Rather than asking users for broad permissions up front, leading organizations are introducing data-sharing decisions gradually, matching the depth of the ask to the stage of the customer relationship. Companies that take this tack tend to gather both a larger quantity and higher quality of consumer data, the value of which often compounds over time.
  • Privacy-led UX is a prerequisite for AI growth. The consumer data that organizations gather is rapidly becoming a core foundation upon which AI-powered personalization is built. Organizations that establish clear, enforceable privacy and data transparency policies now are better positioned to deploy AI responsibly and at scale in the future. This starts with correctly configured consent mode across ad platforms.
  • Agentic AI introduces new levels of both complexity and opportunity. As AI systems begin acting on users’ behalf, the traditional consent moment may never occur. Governing agent-generated data flows requires privacy infrastructure that goes well beyond the cookie banner.
  • Realizing the advantages of privacy-led UX requires cross-functional collaboration and clear leadership. Privacy-led UX touches marketing, product, legal, and data teams—but someone must own the strategy and weave the threads together. Chief marketing officers (CMOs) are often best positioned for that role, given their visibility across brand, data, and customer experience.
  • A practical framework can support businesses in getting it right. Organizations must define their data collection and usage strategies and ensure their UX incorporates data consent, including a focus on banner design. Following a blueprint for evaluating and improving privacy-led UX supports consistency at every consent touchpoint.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Anthropic claims autonomous AI researchers beat human baselines on alignment work

Article

In this article, Anthropic describes an automated research system made of parallel Claude-powered agents that can independently propose ideas, run experiments, analyze results, and iterate on the open alignment problem of weak-to-strong supervision, which asks how a stronger model can be trained using only feedback from a weaker one.

The company argues that this kind of outcome-gradable research is a good target for automation because progress can be measured clearly through “performance gap recovered” on held-out test sets. In its main experiment, Anthropic reports that its automated researchers dramatically outperformed manually tuned human baselines on a chat preference benchmark, reaching a near-complete recovery of the strong model’s performance while also surfacing lessons about diversity of research directions, idea collapse, generalization, and reward hacking.
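
The article does not spell out Anthropic's exact formula, but "performance gap recovered" is the standard weak-to-strong supervision metric: the fraction of the weak-to-strong gap that the automatically trained model closes. A minimal Python sketch, with hypothetical numbers:

```python
def performance_gap_recovered(weak: float, strong: float, achieved: float) -> float:
    """Fraction of the weak-to-strong gap closed (assumed convention;
    the article does not give Anthropic's exact formula). 1.0 means the
    weakly supervised model matches the strong model's ceiling."""
    return (achieved - weak) / (strong - weak)

# Hypothetical held-out accuracies, for illustration only:
weak_supervisor = 0.60  # the weak model's own performance
strong_ceiling = 0.90   # strong model trained on ground truth
weak_to_strong = 0.87   # strong model trained only on weak feedback

print(performance_gap_recovered(weak_supervisor, strong_ceiling, weak_to_strong))
# 0.9 -> 90% of the gap recovered, i.e. "near-complete recovery"
```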

The broader takeaway is that automated AI research already appears practical for some well-scoped problems, and that the main bottleneck may shift from generating and testing ideas to designing robust evaluations that agents can optimize without exploiting loopholes.

Google is rolling out a Chrome feature called "Skills" that lets users save Gemini prompts as reusable one-click workflows they can run across multiple tabs. The feature also includes preset Skills from Google. It's launching first for Chrome desktop users set to US English. The Verge reports: Once you have access to the feature, it can be managed by typing a forward slash ( / ) in Gemini and clicking the compass icon. AI prompts can be saved as Skills directly from your Gemini chat history on desktop, where they'll then be available to reuse on any other desktop devices that are signed into the same Google account on Chrome.

The aim is to spare Chrome users from having to manually retype frequently used Gemini prompts or having to copy and paste them over from a saved list. Some of the Skills made by early testers include commands for calculating the nutritional information of online recipes and creating a side-by-side comparison of product specifications while shopping across multiple tabs, according to Google.

The company is also launching a library of preset Skills that you can save and use instead of making your own. These ready-to-use Skills can also be customized to better suit your needs, providing a starting point without requiring you to create your own from scratch.

An image of the Chrome logo on a yellow background
Image: The Verge

Google is launching a new Chrome workflow feature that allows you to reuse your favorite Gemini commands across multiple webpages. Any AI prompts can now be saved as "Skills" in the Chrome desktop browser, letting you instantly run them across any tabs you select.

"Until now, repeating an AI task - like asking for ingredient substitutions to make a recipe vegan - meant re-entering the same prompt as you visited different pages," Chrome product manager Hafsah Ismail said in the announcement. "To make this easier, we're launching Skills in Chrome, which lets you save and reuse your most helpful AI prompts and run them with a single click."

S …

Read the full story at The Verge.

Image of a hydrogen atom's electron orbitals taken with a quantum microscope in 2013.

There has been considerable debate among physicists over the last 15 years about conflicting measurements of the charge radius of a hydrogen atom's proton—some confirming the predictions of our strongest theoretical models, others suggesting it was smaller than expected. The discrepancy hinted at possible exciting new physics. Now the debate seems to be winding down with the latest experimental measurements, described in two recent papers published in the journals Nature and Physical Review Letters, respectively. And the evidence has tilted in favor of a smaller proton radius and against new physics.

"We believe this is the final nail in the coffin of the proton radius puzzle," Lothar Maisenbacher, of the University of California, Berkeley, who co-authored the Nature paper, told Ars.

As previously reported, most popularizations discussing the structure of the atom rely on the much-maligned Bohr model, in which electrons move around the nucleus in circular orbits. But quantum mechanics gives us a much more precise (albeit weirder) description. The electrons aren’t really orbiting the nucleus; they are technically waves that take on particle-like properties when we do an experiment to determine their position. While bound to an atom, they exist in a superposition of states, both particle and wave, with a wave function encompassing all the probabilities of their position at once. A measurement will collapse the wave function, giving us the electron’s position. Make a series of such measurements and plot the various positions that result, and it will yield something akin to a fuzzy orbit-like pattern.

Read full article

Christina Koch, Jeremy Hansen, Victor Glover, and Reid Wiseman (left to right) are joined by their mission mascot, a plushie toy named "Rise," inside the Orion spacecraft. Credit: NASA

HOUSTON—Their mission is complete. The four people who flew beyond the Moon on NASA's Artemis II mission are back home in Houston with their families. But the lessons from Artemis II are just beginning to be told.

There are tangible, objective takeaways from the nine-day mission. How did NASA's Space Launch System rocket perform? Nearly perfectly. Was the Orion spacecraft up to the job of flying to the Moon and back? Absolutely. Will engineers need to make any changes before the next Artemis mission? Yes, and that's not terribly surprising for a program that, 20 years in, has just flown a crew to space for the first time.

Ars has covered the technical lessons from Artemis II, such as hydrogen leaks on the launch pad, helium leaks in space, and a toilet that wasn't always available for No. 1.

Read full article

Researchers at the University of Southern California say they've developed a memristor memory device that continued operating at 700 degrees Celsius. "And crucially, 700 degrees was not the limit, it was simply as hot as their testing equipment could go," adds ScienceAlert. "The device showed no signs of failing." From the report: The device is called a memristor and it's a nanoscale component that can both store information and perform computing operations. Think of it as a tiny sandwich with two electrode layers on the outside and a thin ceramic filling in the middle. The team built theirs from tungsten, the metal with the highest melting point of any element, combined with a ceramic called hafnium oxide, and with a layer of graphene at the bottom. Each material can withstand enormous heat. Together, they turned out to be extraordinary.

What makes graphene the key ingredient is the way it interacts with tungsten at the atomic level. In a conventional device, heat causes metal atoms to drift slowly through the ceramic layer until they bridge the two electrodes, short circuiting everything and leaving the device permanently broken. Graphene stops that process dead. Its surface chemistry with tungsten is ... almost like oil and water. Tungsten atoms that drift toward the graphene find they simply cannot take hold, no anchor, no short circuit, no failure. The team used advanced electron microscopy and quantum level computer simulations to understand exactly why, turning a single lucky result into a repeatable principle. The findings have been published in the journal Science.

Lucid and Nuro executives hailing an Uber robotaxi. | Image: Nuro

Lucid is making some changes. The luxury EV company said Tuesday that it was expanding its robotaxi deal with Uber - and nabbing some additional investment cash in the process. And it's naming a new CEO who hails not from the world of electric vehicles, but from a company that manufactures different kinds of mobility devices: elevators, escalators, and moving walkways.

First, Lucid said that Uber is increasing the number of Lucid Gravity SUVs it is purchasing, from 20,000 to 35,000, for its robotaxi fleet. If you'll recall, last year, Lucid, Uber, and autonomous delivery startup Nuro announced a massive robotaxi deal that would see the dep …

Read the full story at The Verge.

A screenshot from the launch video for Amazon Leo
Image: Amazon

Amazon has made a deal to buy Globalstar's low-Earth orbit satellite network for $11.57 billion, snapping up its spectrum licenses, operations, and assets to combine with its upcoming Leo internet satellite constellation. Apple owned 20 percent of Globalstar, and as a part of the deal, Amazon will continue to support satellite services like Emergency SOS for iPhones and Apple Watches, and develop future services that connect them to its Leo satellite network. The deal is currently scheduled to close in 2027, pending approval by regulators.

Globalstar currently provides direct-to-device services to the iPhone and Apple Watch. That's differen …

Read the full story at The Verge.

Photo: Bryce Vickmark
"Strengthening the humanities at MIT isn’t a departure from our core mission — it’s a way of ensuring that our technical leadership continues to matter in the world," says SHASS Dean Agustín Rayo.

The MIT School of Humanities, Arts, and Social Sciences (SHASS) was founded in 1950 in response to “a new era emerging from social upheaval and the disasters of war,” as outlined in the 1949 Lewis Committee Report.

The report’s findings emphasized MIT’s role and responsibility in the new nuclear age, which called for doubling down on genuine “integration” of scientific and technical topics with humanistic scholarship and teaching. Only that way, the committee wrote, could MIT tackle “the most difficult and complicated problems confronting our generation.”

As SHASS marks its 75th anniversary, Dean Agustín Rayo answers questions about why the need for developing students with broad minds and human understanding is as urgent as ever, given pressing challenges in the midst of a new technological revolution.

Q: Many universities are responding to artificial intelligence by launching new technical programs or updating curricula. You’ve suggested the change is deeper than that. Why?

A: Artificial intelligence isn’t just changing the way students learn — it’s transforming every aspect of society. The labor market is experiencing a dramatic shift, upending traditional paths to financial stability. And AI is changing the ways we bring meaning to our lives: the ways we build relationships, the ways we pay attention, and the things we enjoy doing.

The upshot is that the most important question universities need to ask is not how to adapt our pedagogy to AI — although we certainly need to address that. The most important question we need to ask is how to provide an education that brings real value to students in the age of AI.

We need to ensure that universities provide students with the tools they need to find a path to financial security and to build meaningful lives.

We need to produce students with minds that are both nimble and broad. We need our students to not only be able to execute tasks effectively, but also have the judgment to determine which tasks are worth executing. We need students who have a moral compass, and who understand how the world works, in all of its political, economic, and human complexity. We need students who know how to think critically, and who have excellent communication and leadership skills.

Q: What role do the humanities, arts, and social sciences play in preparing MIT students for that future?

A: They’re essential, and are rightly a core part of an MIT education: MIT has long required its undergraduates to take at least eight courses in HASS disciplines in order to graduate.

Fields like philosophy, political science, economics, literature, history, music, and anthropology are crucial to developing the parts of our lives that are essentially human — the parts that will not be replaced by AI.

They are crucial to developing critical thinking and a moral compass. They are crucial to understanding people — our values, institutions, cultures, and ways of thinking. They are crucial to creating students who are broad thinkers who understand the way the world works. They are crucial to developing students who are excellent communicators and are able to describe their projects — and their lives — in a way that endows them with meaning.

Our students understand this. Here is how one of them put the point: “Engineering gives me the tools to measure the world; the humanities teach me how to interpret it. That balance has shaped both how I do science and why I do it.” (Full interview here.)

Q: Some people worry that emphasizing humanistic study could dilute MIT’s technological edge. How do you respond to that concern?

A: I think the opposite is true.

MIT is an important engine for social mobility in the United States, and a catalyst for entrepreneurship, which has added billions of dollars to the American economy. That cannot be separated from the fact that we are a technical institution, which brings together the country’s most talented undergraduates — regardless of socioeconomic background — and transforms them into the next generation of our country's top scientific and engineering leaders.

MIT plays an incredibly important role in our country. So, the last thing I want to do is mess with our secret sauce.

But I also think that the age of AI is forcing us to rethink what it means to be a top engineer.

Think about artificial intelligence itself. The challenges we face are not just technical. Issues like bias, accountability, governance, and the societal impact of automation are no less important. Understanding those dimensions helps technologists design better systems and anticipate real-world consequences.

Strengthening the humanities at MIT isn’t a departure from our core mission — it’s a way of ensuring that our technical leadership continues to matter in the world.

Q: What kinds of changes is MIT SHASS pursuing to support this vision?

A: There’s a lot going on!

We’ve launched the MIT Human Insight Collaborative (MITHIC) as a way of strengthening research in the humanities, arts, and social sciences, and of deepening collaboration with colleagues across MIT.

We’re shaping the undergraduate experience to ensure that every MIT student engages with the big societal questions shaping our time, from democratic resilience to climate change to the ethics of new technologies.

We’re building stronger connections through initiatives like the creation of shared faculty positions with the MIT Schwarzman College of Computing (SCC). And we recently launched a new Music Technology and Computation Graduate Program with the School of Engineering.

We’re partnering with SERC (the SCC’s Social and Ethical Responsibilities of Computing) to design new classes on the intersection of computing and human-centered issues, such as ethics.

And we’re elevating the humanities — for their own sake, and as a space for experimentation, bringing together students, faculty, and partners to explore new forms of research, teaching, and public engagement.

This is a very exciting time for SHASS.

Photo: Tim Briggs/Lincoln Laboratory
Ella Wawrzynek, Madeline Miller, and David Whelihan deploy their sensor-equipped AUV from UNH's Gulf Surveyor into the Atlantic Ocean.

The electricity to an island goes out. To find the break in the underwater power cable, a ship pulls up the entire line or deploys remotely operated vehicles (ROVs) to traverse the line. But what if an autonomous underwater vehicle (AUV) could map the line and pinpoint the location of the fault for a diver to fix?

Such underwater human-robot teaming is the focus of an MIT Lincoln Laboratory project funded through an internally administered R&D portfolio on autonomous systems and carried out by the Advanced Undersea Systems and Technology Group. The project seeks to leverage the respective strengths of humans and robots to optimize maritime missions for the U.S. military, including critical infrastructure inspection and repair, search and rescue, harbor entry, and countermine operations.

"Divers and AUVs generally don't team at all underwater," says principal investigator Madeline Miller. "Underwater missions requiring humans typically do so because they involve some sort of manipulation a robot can't do, like repairing infrastructure or deactivating a mine. Even ROVs are challenging to work with underwater in very skilled manipulation tasks because the manipulators themselves aren't agile enough."

Beyond their superior dexterity, humans excel at recognizing objects underwater. But humans working underwater can't perform complex computations or move very quickly, especially if they are carrying heavy equipment; robots have an edge over humans in processing power, high-speed mobility, and endurance. To combine these strengths, Miller and her team are developing hardware and algorithms for underwater navigation and perception — two key capabilities for effective human-robot teaming.

As Miller explains, divers may only have a compass and fin-kick counts to guide them. With few landmarks and potentially murky conditions caused by a lack of light at depth or the presence of biological matter in the water column, they can easily become disoriented and lost. For robots to help divers navigate, they need to perceive their environment. However, in the presence of darkness and turbidity, optical sensors (cameras) cannot generate images, while acoustic sensors (sonar) generate images that lack color and only show the shapes and shadows of objects in the scene. The historical lack of large, labeled sonar image datasets has hindered training of underwater perception algorithms. Even if data were available, the dynamic ocean can obscure the true nature of objects, confusing artificial intelligence. For instance, a downed aircraft broken into multiple pieces, or a tire covered in an overgrowth of mussels, may no longer resemble an aircraft or tire, respectively.

"Ultimately, we want to devise solutions for navigation and perception in expeditionary environments," Miller says. "For the missions we're thinking about, there is limited or no opportunity to map out the area in advance. For the harbor entry mission, maybe you have a satellite map but no underwater map, for example."

On the navigation side, Miller's team picked up on work started by the MIT Marine Robotics Group, led by John Leonard, to develop diver-AUV teaming algorithms. With their navigation algorithms, Leonard's group ran simulations under optimal conditions and performed field testing in calm waters using human-paddled kayaks as proxies for both divers and AUVs. Miller's team then integrated these algorithms into a mission-relevant AUV and began testing them under more realistic ocean conditions, initially with a support boat acting as a diver surrogate, and then with actual divers.

"We quickly learned that you need more sensing capabilities on the diver when you factor in ocean currents," Miller explains. "With the algorithms demonstrated by MIT, the vehicle only needed to calculate the distance, or range, to the diver at regular intervals to solve the optimization problem of estimating the positions of both the vehicle and diver over time. But with the real ocean forces pushing everything around, this optimization problem blows up quickly."

On the perception side, Miller's team has been developing an AI classifier that can process both optical and sonar data mid-mission and solicit human input for any objects classified with uncertainty.

"The idea is for the classifier to pass along some information — say, a bounding box around an image — to the diver and indicate, "I think this is a tire, but I'm not sure. What do you think?" Then, the diver can respond, "Yes, you've got it right, or no, look over here in the image to improve your classification," Miller says.

This feedback loop requires an underwater acoustic modem to support diver-AUV communication. State-of-the-art data rates in underwater acoustic communications would require tens of minutes to send an uncompressed image from the AUV to the diver. So, one aspect the team is investigating is how to compress information into a minimum amount to be useful, working within the constraints of the low bandwidth and high latency of underwater communications and the low size, weight, and power of the commercial off-the-shelf (COTS) hardware they're using. For their prototype system, the team procured mostly COTS sensors and built a sensor payload that would easily integrate into an AUV routinely employed by the U.S. Navy, with the goal of facilitating technology transition. Beyond sonar and optical sensors, the payload features an acoustic modem for ranging to the diver and several data processing and compute boards.
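
The transfer-time constraint is easy to sanity-check with a rough back-of-envelope; the frame size and link rates below are assumptions for illustration, not figures from the laboratory team.

```python
# Back-of-envelope for the "tens of minutes" transfer-time claim.
# Frame size and link rates are assumptions, not the team's figures.
bits = 640 * 480 * 8        # one uncompressed 8-bit grayscale frame, ~2.46 Mbit

for bps in (5_000, 1_000):  # plausible underwater acoustic-modem rates
    print(f"{bps} bit/s -> {bits / bps / 60:.1f} minutes per frame")
# 5,000 bit/s -> ~8.2 minutes; 1,000 bit/s -> ~41 minutes.
# Hence the emphasis on compressing to the minimum useful information.
```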

Miller's team has tested the sensor-equipped AUV and algorithms around coastal New England — including in the open ocean near Portsmouth, New Hampshire, with the University of New Hampshire's (UNH) Gulf Surveyor and Gulf Challenger coastal research vessels as diver surrogates, and on the Boston-area Charles River, with an MIT Sailing Pavilion skiff as the surrogate.

"The UNH boats are well-equipped and can access realistic ocean conditions. But pretending to be a diver with a large boat is hard. With the skiff, we can move more slowly and get the relative motion in tune with how a diver and AUV would navigate together."

Last summer, the team started testing equipment with human divers at Michigan Technological University's Great Lakes Research Center. Although the divers lacked an interface to feed back information to the AUV, each swam holding the team's tube-shaped prototype tablet, dubbed a "tube-let." The tube-let was equipped with a pressure and depth sensor, inertial measurement unit (to track relative motion), and ranging modem — all necessary components for the navigation algorithms to solve the optimization problem.

"A challenge during testing was coordinating the motion of the diver and vehicle, because they don't yet collaborate," Miller says. "Once the divers go underwater, there is no communication with the team on the surface. So, you have to plan where to put the diver and vehicle so they don't collide."

The team also worked on the perception problem. The water clarity of the Great Lakes at that time of year allowed for underwater imaging with an optical sensor. Caroline Keenan, a Lincoln Scholars Program PhD student jointly working in the laboratory's Advanced Undersea Systems and Technology Group and Leonard's research group at MIT, took the opportunity to advance her work on knowledge transfer from optical sensors to sonar sensors. She is exploring whether optical classifiers can train sonar classifiers to recognize objects for which sonar data doesn't exist. The motivation is to reduce the human operator load associated with labeling sonar data and training sonar classifiers.

With the internally funded research program coming to an end, Miller's team is now seeking external sponsorship to refine and transition the technology to military or commercial partners.

"The modern world runs on undersea telecommunication and power cables, which are vulnerable to attack by disruptive actors. The undersea domain is becoming increasingly contested as more nations develop and advance the capabilities of autonomous maritime systems. Maintaining global economic security and U.S. strategic advantage in the undersea domain will require leveraging and combining the best of AI and human capabilities," Miller says.

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.


Want to understand the current state of AI? Check out these charts.

If you’re following AI news, you’re probably getting whiplash. AI is a gold rush. AI is a bubble. AI is taking your job. AI can’t even read a clock. Stanford’s 2026 AI Index—the field’s annual report card—cuts through the noise.

The data reveals a technology evolving faster than we can manage. From the China-US rivalry and model breakthroughs to public sentiment and the impact on jobs, here are the index’s key findings on the state of AI today.

—Michelle Kim

Why opinion on AI is so divided

Stanford’s 2026 AI Index is full of striking stats. It also reveals a field riddled with inconsistencies, most notably in the gap between experts and non-experts.

On jobs, 73% of US experts view AI’s impact positively, compared to just 23% of the public. Similar divides emerged on the economy and healthcare. What’s driving this disconnect?

Part of the answer may lie in their diverging experiences. Those using AI for coding and technical work see it at its best, while everyone else gets a more mixed bag. The result is two very different realities. Read the full story on what they are—and why they matter.

This story is from The Algorithm, our weekly newsletter on AI. Sign up to receive it in your inbox every Monday.

—Will Douglas Heaven

Job titles of the future: Wildlife first responder

Grizzly bears have made such a comeback across eastern Montana that in 2017, the state hired its first-ever prairie-based grizzly manager: wildlife biologist Wesley Sarmento.

For seven years, Sarmento worked to keep both bears and humans out of trouble. He acted like a first responder, trying to defuse potentially dangerous situations. He even got caught in some himself, which led him to a new wildlife safety tool: drones. Find out the results of his experiments in digital ecology.

—Emily Senkosky

This article is from the next issue of our print magazine, which is all about nature. Subscribe now to read it when it lands on Wednesday, April 22.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Human scientists still trounce the top AI agents at complex tasks
The best agents perform only half as well as experts with PhDs. (Nature)
+ Can AI really help us discover new materials? (MIT Technology Review)

2 OpenAI is escalating its fight with Anthropic while pulling away from Microsoft
A leaked memo exposes plans to attack Anthropic. (Axios)
+ And says Microsoft “limited our ability” to reach clients. (The Information $)
+ While touting a budding alliance with Amazon. (CNBC)

3 Carbon removal technology is stalling—and that may be good news
Better solutions could now emerge. (New Scientist)
+ Here are three that are set to break through. (MIT Technology Review)

4 AI is finding bugs faster than we can fix them—and hackers will benefit
Welcome to the bug armageddon. (WSJ $)
+ AI may soon be capable of fully automated attacks. (MIT Technology Review)

5 A Texas man has been charged with the attempted murder of Sam Altman
He allegedly threw a Molotov cocktail at the OpenAI CEO’s home last Friday. (NPR)
+ The suspect reportedly had a list of other AI leaders. (NYT $)

6 AI is beginning to transform mathematics
It’s proving new results at a rapid pace. (Quanta)
+ One AI startup plans to unearth new mathematical patterns. (MIT Technology Review)

7 Students are turning away from computer science
It’s had a massive drop in enrollments. (WP $)
+ AI coding tools have diminished the degree’s value. (NYT $)

8 India’s bid to become a data center hub is sparking a fierce backlash
Farmers are protesting Delhi’s courtship of hyperscalers. (Rest of World)

9 Meta is set to overtake Google in advertising revenue this year
And become the world’s largest digital ad platform for the first time. (WSJ)

10 AI influencers are taking over Coachella
Synthetic content creators are “everywhere” at the festival. (The Verge)

Quote of the day

“These people are almost nothing like you. They are most likely sociopathic/psychopathic and, in the case of Altman, consistently reported to be a pathological liar.”

—The alleged firebomber of Sam Altman’s home shares his distrust of AI leaders in a blog post.

One More Thing

close crop of the titular rodent and smaller rodents
FRANCESCO FRANCAVILLA

We’ve never understood how hunger works. That might be about to change.

A few years ago, Brad Lowell, a Harvard University neuroscientist, figured out how to crank the food drive to the maximum. He did it by stimulating neurons in mice. Now, he’s following known parts of the neural hunger circuits into uncharted parts of the brain.

The work could have important implications for public health. More than 1.9 billion adults worldwide are overweight, and more than 650 million are obese. Understanding the circuits involved could shed new light on why these numbers are skyrocketing.

Read the full story.

—Adam Piore

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

Top image credit: Stephanie Arnett/MIT Technology Review | Getty Images

+ Someone built a mechanical version of Tony Hawk’s Pro Skater from Lego.
+ Enjoy this wholesome clip of toddlers discovering the existence of hugs.
+ This interactive body map shows exactly which exercises you need.
+ Jon McCormack’s photos of nature’s patterns are breathtaking.

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

Just before Artemis II began its historic slingshot around the moon, Jared Isaacman, the recently confirmed NASA administrator, made a flurry of announcements from the agency’s headquarters in Washington, DC. He said the US would soon undertake far more regular moon missions and establish the foundations for a base at the lunar south pole before the end of the decade. He also affirmed the space agency’s commitment to putting a nuclear reactor on the lunar surface.

These goals were largely expected—but there was still one surprise. Isaacman also said NASA would build the first-ever nuclear reactor-powered interplanetary spacecraft and fly it to Mars by the end of 2028. It’s called the Space Reactor-1 Freedom, or SR-1 for short. “After decades of study, and billions spent on concepts that have never left Earth, America will finally get underway on nuclear power in space,” he said at the event. “We will launch the first-of-its-kind interplanetary mission.”

A successful mission would herald a new era in spaceflight, one in which traveling between Earth, the moon, and Mars would—according to a range of experts—be faster and easier than ever. And it might just give the US the edge in the race against China—allowing the country to beat its greatest geopolitical rival to landing astronauts on another planet.

While experts agree the timeline is extremely tight, they’re excited to see if America’s space agency and its industry partners can deliver an engineering miracle. “You wake up to that announcement, and it puts a big smile on your face,” says Simon Middleburgh, co-director of the Nuclear Futures Institute at Bangor University in Wales.

Little detail on SR-1 is publicly available, and NASA’s own spaceflight researchers did not respond to requests for comment. But MIT Technology Review spoke to several nuclear power and propulsion experts to find out how the new nuclear-powered spacecraft might work.

Nuclear propulsion 101

Traditionally, spaceflight has been powered by chemical propulsion. Propellants, typically a fuel such as liquid hydrogen and an oxidizer such as liquid oxygen, are mixed and ignited within a rocket; the searingly hot exhaust from that combustion is ejected through a nozzle, which propels the rocket forward.

Chemical propulsion offers a significant amount of thrust and will, for the foreseeable future, still be used to launch spacecraft from Earth. But nuclear propulsion would enable spacecraft to fly through the solar system for far longer, and faster, than is currently possible.

“You get more bang per kilogram,” says Middleburgh. A nuclear fuel source is far more energy-dense than its conventional cousin, which means it’s orders of magnitude more efficient. “It’s really, really, really high efficiency,” says Lindsey Holmes, an expert in space nuclear technology and the vice president of advanced projects at Analytical Mechanics Associates, an aerospace company in Virginia.
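
To put rough numbers on “more bang per kilogram,” here’s a back-of-the-envelope sketch using standard physics values rather than anything from the article: complete fission of uranium-235 releases on the order of 10^13 joules per kilogram, while burning a hydrogen/oxygen propellant mix releases on the order of 10^7.

```python
# Back-of-the-envelope energy-density comparison (illustrative textbook
# figures, not from the article): complete fission of U-235 vs. burning
# a liquid-hydrogen/liquid-oxygen propellant mix.

FISSION_U235_J_PER_KG = 8.2e13  # ~200 MeV released per fission event
LH2_LOX_J_PER_KG = 1.3e7        # heat of combustion of the propellant mix

ratio = FISSION_U235_J_PER_KG / LH2_LOX_J_PER_KG
print(f"Fission releases ~{ratio:,.0f}x more energy per kilogram")
# -> roughly six million times, so "orders of magnitude" is conservative
```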

The approach also removes one other element of the traditional power equation: solar. Spacecraft, including the Artemis II mission’s Orion space capsule, often rely on the sun for power. But this can be a problem: sunlight isn’t always available, particularly when a planet or moon gets in the way, and as you head toward the outer solar system, beyond Mars, there’s simply less of it.

To circumvent this issue, nuclear energy sources have been used in spacecraft plenty of times before—including on both Voyager missions and the Saturn-interrogating Cassini probe. Known as radioisotope thermoelectric generators, or RTGs, these use plutonium, which radioactively decays and generates heat in the process. That heat is then converted into electricity for the spacecraft to use. RTGs, however, aren’t the same as nuclear reactors; they are more akin to radioactive batteries—more rudimentary and considerably less powerful.

So how will a nuclear-reactor-powered spacecraft work?

Despite operational differences, the fundamentals of running a nuclear reactor in space are much the same as they are on Earth. First, get some uranium fuel; then bombard it with neutrons. This ruptures the uranium’s unstable atomic nuclei, which expel a torrent of extra neutrons—and that rapidly escalates into a self-sustaining, roasting-hot nuclear fission reaction. Its prodigious heat output can then be used to produce electricity.
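
One minimal way to picture how the reaction “rapidly escalates” into being self-sustaining: each fission releases two to three neutrons, and if on average more than one of them goes on to cause another fission (a multiplication factor k above 1), the neutron population grows geometrically. The sketch below is a toy model, not real reactor physics:

```python
# Toy model of fission chain-reaction growth: if each generation of
# neutrons triggers on average k new fissions, the population scales
# as k^generations. (Illustrative only; real reactors are far subtler.)

def neutron_population(k: float, generations: int, n0: float = 1.0) -> float:
    """Neutron count after a given number of fission generations."""
    return n0 * k ** generations

for k in (0.99, 1.00, 1.01):
    print(f"k={k}: after 500 generations -> {neutron_population(k, 500):,.2f}")
# k < 1 fizzles out, k = 1 holds steady (a controlled reactor),
# k > 1 escalates rapidly -- the self-sustaining case described above
```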

Doing this in space may sound like an act of lunacy, but it’s not: The idea, and even a lot of the basic technology, has been around for decades. The Soviet Union sent dozens of nuclear reactors into orbit (often to power spy satellites), while the US deployed just one, known as SNAP-10A, back in 1965—a technological demonstration to see if it would operate normally in space. The aim was for the reactor to generate electricity for at least a year, but it ran for just over a month before a high-voltage failure in the spacecraft caused it to malfunction and shut down.

Now, more than half a century later, the US wants its second-ever space-based nuclear reactor to do something totally different: power an interplanetary spacecraft.

To be clear, the US has started, and terminated, myriad programs looking into nuclear propulsion. The latest casualty was DRACO, a collaboration between NASA and the Department of Defense, which ended in 2025. Like several previous efforts, DRACO was canceled because of a mix of high experimentation costs, lower prices for conventional rocket propulsion, and the difficulty of ensuring that ground tests could be performed safely and effectively (they are creating an incredibly powerful nuclear reaction, after all).

But now external considerations may be changing the calculus. The Artemis program has jump-started America’s return to the moon, and the new space race has palpable momentum behind it. The first nation to deploy nuclear propulsion would have a serious advantage navigating through deep space.

“I think it’s a very doable technology,” says Philip Metzger, a spaceflight engineering researcher at the Florida Space Institute. “I’m happy to see them finally doing this.”

One version of this technology is known as nuclear thermal propulsion, or NTP. You start with a nuclear reactor, one that’s cooking at around 5,000°F. Then “you’ve got a cold gas, and you squirt cold gas over the hot reactor,” says Middleburgh. “The gas expands, you shoot it out the back of a nozzle, and you have an impulse. And that impulse drives you forward.”

Because the thrust depends on the speed of the gas being ejected, the propellant gas needs to be light, making hydrogen a popular choice. But hydrogen is a corrosive and explosive substance, so using it in NTP engines can make them precarious to operate. On top of this, NTP doesn’t necessarily have a very long operating life.
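
A rough way to see why the propellant “needs to be light”: for an idealized nozzle, exhaust velocity scales with the square root of temperature divided by molar mass. The sketch below applies the textbook formula with illustrative numbers; nothing in it is an SR-1 specification:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def exhaust_velocity(temp_k: float, molar_mass_kg: float, gamma: float = 1.4) -> float:
    """Ideal-nozzle exhaust velocity: v = sqrt(2*gamma/(gamma-1) * R*T/M)."""
    return math.sqrt(2 * gamma / (gamma - 1) * R * temp_k / molar_mass_kg)

T = 2800  # reactor outlet temperature in kelvin, roughly 5,000 F
for name, M in [("hydrogen (H2)", 0.002), ("nitrogen (N2)", 0.028)]:
    print(f"{name}: ~{exhaust_velocity(T, M):,.0f} m/s")
# Hydrogen's exhaust comes out ~3.7x faster: velocity goes as
# 1/sqrt(molar mass), which is why light propellants win
```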

Alternatively, there’s nuclear electric propulsion, or NEP, which “is very low thrust, but very efficient, so you can use it for a long period of time,” says Sebastian Corbisiero, the US Department of Energy’s national technical director of space reactor programs. This method uses heat from a fission reactor to generate electricity. That electricity ionizes a propellant gas and accelerates it out of the spacecraft, generating thrust.
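
The trade-off Corbisiero describes falls out of a simple power balance: the exhaust jet carries half the mass flow times exhaust velocity squared, so for a fixed electrical power, thrust works out to roughly 2 x efficiency x power / exhaust velocity. Faster exhaust means better efficiency but a gentler push. A sketch with assumed numbers (the efficiency and exhaust velocity here are illustrative, not SR-1 figures):

```python
def nep_thrust(power_w: float, exhaust_velocity: float, efficiency: float = 0.7) -> float:
    """Power-limited thrust: jet power = 0.5 * mdot * v_e^2 and
    thrust = mdot * v_e, so thrust = 2 * efficiency * power / v_e."""
    return 2 * efficiency * power_w / exhaust_velocity

P = 20_000    # W, on the order of the reactor output NASA has cited
v_e = 30_000  # m/s, typical of ion-thruster exhaust (an assumption)
print(f"Thrust: ~{nep_thrust(P, v_e):.2f} N")
# ~0.9 N -- about the weight of an apple, but sustained for months
# it accumulates into very large changes in velocity
```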

Both NTP and NEP have been investigated by US researchers because both have the added benefit of making it easier and safer for human beings to explore the solar system. Astronauts in space are exposed to harmful cosmic radiation, but because nuclear propulsion makes spacecraft speedier and more agile, they’d spend less time exposed to it. “It solves the radiation problem,” says Metzger. “That’s one of the main motivations for inventing better propulsion to and from Mars.”

How to build a nuclear-powered spaceship

For SR-1, NASA has opted for nuclear electric propulsion. NEP is “a much simpler affair” than its thermal counterpart, says Middleburgh. Essentially, you just need to plug a nuclear reactor into a power-and-propulsion system. Luckily for NASA, it’s already got one.

For many years, NASA—along with its space agency partners in Canada, Europe, Japan, and the Middle East—was preparing for Gateway, meant to be humanity’s first space station to orbit around the moon. Isaacman canceled the project in March, but that doesn’t mean its technology will go to waste; the power-and-propulsion element of the nixed space station will be used in SR-1 instead. This contraption was going to be powered by solar energy. It’ll now be attached to an in-development nuclear reactor custom built to survive in space.

What might the SR-1 look like? MIT Technology Review saw a presentation by Steve Sinacore, program executive of NASA’s Space Reactor Office, that offers some clues. So far, the concept art makes it look like a colossal fletched arrow. At the back will be the power-and-propulsion system, while its tip will hold a 20-kilowatt-or-greater uranium-filled nuclear reactor. (For context, a typical nuclear plant on Earth is 50,000 times more powerful, producing a gigawatt of power.)

Annotated diagram of SR-1 Freedom’s key systems. At the front: the power and propulsion element, with an up-to-48 kW advanced electric propulsion system. At the middle: high-performance, lightweight composite-and-titanium heat-rejection panels. At the tail: an advanced closed Brayton cycle power conversion system and a 20 kWe reactor with HALEU UO2 fuel, heat-pipe thermal transfer, and a boron carbide radiation shield. A small attachment at midcraft is labeled “High Rate Direct to Earth Communications.”
NASA

The “fletches” on SR-1 are large fins that allow the reactor to cool down. “You have to have really large radiators,” says Holmes, since the nuclear fission process produces so much heat that much of it has to be vented into space—otherwise, the reactor and spacecraft will melt.
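
A quick Stefan-Boltzmann estimate shows why the radiators loom so large: power radiated scales with panel area times the fourth power of temperature, so even a modest heat load at a modest temperature demands a lot of fin. The waste-heat load and radiator temperature below are assumptions for illustration, not SR-1 figures:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 * K^4)

def radiator_area(waste_heat_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Panel area needed to radiate a heat load: P = eps * sigma * A * T^4."""
    return waste_heat_w / (emissivity * SIGMA * temp_k ** 4)

# Assumed numbers: a 20 kWe reactor at ~20% conversion efficiency must
# reject on the order of 80 kW of heat; say the radiators run at 500 K.
print(f"Radiator area: ~{radiator_area(80_000, 500):.0f} m^2")
# -> roughly 25 m^2 of panel for a small reactor: "really large radiators"
```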

According to that presentation, the spacecraft’s hardware development is due to start this June. By January 2028, SR-1’s systems should be ready for assembly and testing. And by that October, the spacecraft will arrive at the launch site, ready for liftoff before the year’s end. Will the nuclear reactor manage to hold itself together? “Going through the launch safely is going to be a challenge,” says Middleburgh. “You are being shaken, rattled, and rolled.”

Then, he says, “once you’re up in space, once you’ve got through that few minutes of hell in getting there, it’s zero-gravity considerations you have to worry about.” The question then becomes: Will the mechanics of the reactor, built on terra firma, still work?

For safety reasons, the nuclear reactor will be switched on around two days post-launch, when it’s comfortably in space. Uranium isn’t tremendously dangerous by itself, but that can’t be said of the nuclear waste products that emerge when the reactor is activated, so you don’t want any of that to fall back to Earth.

If this schedule is adhered to, and SR-1 works as planned, it’s expected to reach Mars about a year after launch. “It’s an aggressive timeline,” says Holmes, something she suspects is being driven partly by China’s and Russia’s own deep-space nuclear ambitions. The two countries aim to place their own nuclear reactor on the moon’s surface to power the planned International Lunar Research Station—a jointly operated lunar base—by 2035.

Whether it flies or fails in space, SR-1’s operations should help NASA put a nuclear reactor on the moon soon after. “All of the things we’d be learning about how that system operates in space [are] very helpful for a surface application, because basically it’s the same,” says Corbisiero. “There’s still no air on the moon.”

And if SR-1 does triumph, it will be a game-changing victory for NASA. It will also be “a massive win for the human race, frankly,” says Middleburgh. “It will be a marvel of engineering, and it will move the dial in humans potentially taking a step on Mars.” Like many of his colleagues, including Holmes, he remains thrilled by the prospect of the first-ever nuclear-powered interplanetary spacecraft—even with the incredibly ambitious timeline.

“These are the things that get us up in the morning,” he says. “These are the sorts of things we will remember when we’re old.”

A cyclist launches over his handlebars as the airbag expands to protect him.
The lightweight airbag deploys just milliseconds after detecting a crash. | Image: Van Rysel

What you're looking at is a new airbag system integrated directly into a "race-ready" skinsuit, not bolted on like other solutions. It was developed for road cyclists by Van Rysel, with the help of airbag technology specialist In&motion. It's currently being tested on pro riders ahead of a general consumer release sometime "within the next two years."

Its development comes after the UCI, pro cycling's governing body, put out a call in February seeking gear that could help protect riders traveling faster than ever.

The current version is in final validation ahead of potential race deployment. It has a total weight of about 700 grams (500 gr …

Read the full story at The Verge.

Room Temperature Superconductors

Peter Diamandis:

"What's really exciting about what's coming in the very near future is the fact that these AGI/ASI systems are going to solve math, physics, chemistry, and materials science."

"So we're about to see this extraordinary golden era of scientific discoveries that are going to occur at a rate far beyond anything else."

"Scientific breakthroughs, Nobel prizes, came at the rate of the number of geniuses on the planet. But we've now increased the number of genius individual equivalents by a billion-fold."

"So we're going to end up with a situation where we get room temperature semiconductors, new substrates that allow us to pull carbon out of the atmosphere, or desalinate at a rate like never before, or allow us to reach longevity escape velocity."

Link to the Full Interview (Peter Diamandis Interview Starts @ 1:36:23): https://www.youtube.com/watch?v=8iWSNwIRazc

Bloomberg's Mark Gurman reports that Apple is developing display-free AI smart glasses aimed at rivaling Meta's Ray-Bans, with multiple frame styles, a distinctive oval camera design, and tight iPhone integration. "The idea is to unveil the product at the end of 2026 or early the following year, with the actual release coming in 2027," writes Gurman. From the report: Like Meta's offering, Apple's glasses will be designed to handle everyday uses: capturing photos and videos, syncing with a smartphone for editing and sharing, handling phone calls, listening to notifications, playing music, and enabling hands-free interaction via a voice assistant. In Apple's case, that assistant will be a significantly upgraded Siri coming in iOS 27. The glasses are part of a broader, three-pronged AI wearables strategy that also includes new AirPods and a camera-equipped pendant. Each device is designed to leverage computer vision to interpret the user's surroundings and feed contextual awareness into Siri and Apple Intelligence. That will enable features like improved turn-by-turn map directions and visual reminders.

When Apple enters a new product category, it typically offers clear advantages over what's currently available. We saw this with the original iPod, iPhone, iPad and Apple Watch -- and, even though it was a flop, the Vision Pro. That approach won't be as obvious with Apple's upcoming foldable iPhone, but we should see it on full display with the glasses. According to employees working on the project, Apple's strategy is to outdo competitors by tightly integrating the glasses with the iPhone and offering a higher-end build. While Meta relies heavily on partner EssilorLuxottica SA for frames, Apple is unsurprisingly planning to go it alone on design. That also should set it apart from Alphabet Inc.'s Google and Samsung Electronics Co., which are leaning on Warby Parker.

Apple's design team has whipped up at least four different styles and plans to launch some or all of them, I'm told, as well as many color options. The latest units are made from a high-end material called acetate, which is known to be more durable and luxurious than the standard plastic used by many brands. Here are the designs in testing:

  • A large rectangular frame, reminiscent of Ray-Ban Wayfarers
  • A slimmer rectangular design, similar to the glasses worn by Apple Chief Executive Officer Tim Cook
  • Larger oval or circular frames
  • A smaller, more refined oval or circular option

Mercedes EQS sedan
Image: Mercedes-Benz

A year ago, Mercedes-Benz did the prudent thing and paused its EQ lineup of electric vehicles in the US. With customer demand drying up for luxury EVs, and federal incentives getting axed by vengeful Republicans, Mercedes put its first-generation EVs on ice.

But then, in January, Mercedes quietly reintroduced the EQS brand in the US, with The Drive declaring that the "blobs are back" - a reference to the sedan's much-maligned jelly-bean shape that prioritized aerodynamics over a more traditional profile. But we didn't yet realize how back the EQS truly was.

Today, Mercedes is reintroducing its electric sedan to a wary, cash-strapped market …

Read the full story at The Verge.

Linux 7.0 Released

"The new Linux kernel was released and it's kind of a big deal," writes longtime Slashdot reader rexx mainframe. "Here is what you can expect." Linuxiac reports: A key update in Linux 7.0 is the removal of the experimental label from Rust support. That (of course) does not make Rust a dominant language in kernel development, but it is still an important step in its gradual integration into the project. Another notable security-related change is the addition of ML-DSA post-quantum signatures for kernel module authentication, while support for SHA-1-based module-signing schemes has been removed.

The kernel now includes BPF-based filtering for io_uring operations, providing administrators with improved control in restricted environments. Additionally, BTF type lookups are now faster due to binary search. At the same time, this release continues ongoing cleanup in the kernel's lower layers. The removal of linuxrc initrd code advances the transition to initramfs as the sole early-userspace boot mechanism.

Linux 7.0 also introduces NULLFS, an immutable and empty root filesystem designed for systems that mount the real root later. Plus, preemption handling is now simpler on most architectures, with further improvements to restartable sequences, workqueues, RCU internals, slab allocation, and type-based hardening. Filesystems and storage receive several updates as well. Non-blocking timestamp updates now function correctly, and filesystems must explicitly opt in to leases rather than receiving them by default. Phoronix has compiled a list of the many exciting changes.

Linus Torvalds himself announced the release, which can be downloaded directly from his git tree or from the kernel.org website.

Linux 7.0 has a major new version number but it's "largely a numbering reset [...], not a sign of some unusually disruptive release," notes Linuxiac.

Slate truck
Image: Owen Grove / The Verge

Slate Auto, the EV startup backed by Jeff Bezos, raised $650 million to fund its effort to build an affordable electric pickup truck expected to start in the mid-$20,000s. The company plans on delivering its first EV later this year.

The Series C round was led by TWG Global, headed by Guggenheim Partners founder and LA Dodgers owner Mark Walter and financier Thomas Tull. Slate didn't disclose its latest investors, but both Walter and Tull were investors in Re:Build Manufacturing, a Bezos-owned company from which Slate spun off last year. The company also didn't disclose its latest valuation, but was at $1.2 billion as of January 2025, acco …

Read the full story at The Verge.
