
HPE takes teraflop computing to space – and beyond

Kena Setshogoe, Managing Director for South Africa at HPE

Imagine a future where astronauts aboard a spaceship, or exploring Mars, can leverage supercomputing power to conduct simulations, run artificial intelligence (AI) applications, and collect, analyse and transmit data back to Earth in real time, clearly and without delays.

A future where space exploration missions can use high-performance, commercial off-the-shelf computers that offer teraflop speeds (a teraflop, according to WhatIs.com, is a measure of a computer’s speed: a trillion, or 10 to the 12th power, floating-point operations per second) in the harsh, radiation-filled environment of space, without the need for expensive and time-consuming protective measures. Measures that can take so long to add that, by the time the mission launches, the computer is often already considered “legacy”.
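To put that figure in perspective, here is a quick, illustrative back-of-the-envelope calculation (not taken from HPE or NASA material) of what a sustained teraflop means for a hypothetical workload, sketched in Python:

```python
# Illustrative arithmetic only: one teraflop = 10**12 floating-point
# operations per second.
TERAFLOP = 10**12  # floating-point operations per second

# Hypothetical workload size, chosen purely for illustration.
workload_ops = 3.6e15  # 3.6 quadrillion floating-point operations

seconds = workload_ops / TERAFLOP
print(f"{workload_ops:.1e} operations at 1 TFLOPS take about {seconds / 3600:.1f} hour(s)")
# -> 3.6e+15 operations at 1 TFLOPS take about 1.0 hour(s)
```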


Imagine a future where a space-bound parent can call their Earth-bound child in real time, on their smartphone, to wish them a happy birthday.

This is the future that Hewlett Packard Enterprise (HPE) and the National Aeronautics and Space Administration (NASA) are envisioning as, together, they launch HPE’s Spaceborne Computer on a year-long mission in space, a stepping stone towards a journey to Mars.

Spaceborne

The Spaceborne Computer, based on HPE’s Apollo 40 class systems, will be the first high-performance commercial off-the-shelf (COTS) computer system to run at one teraflop in space. Launched on August 14, 2017 aboard the SpaceX Dragon spacecraft on the CRS-12 resupply mission from Kennedy Space Center, Florida, Spaceborne is bound first for the International Space Station (ISS) National Lab.

Kena Setshogoe, Managing Director for South Africa at HPE, says, “Sending a computer to space isn’t a unique idea. In fact, computers have fundamentally changed and advanced space travel over the years. However, in the past, engineers would spend so much time, money and other resources ruggedising and hardening a computer to withstand the harsh environments of space that, by launch time, the machine was usually already obsolete. Think radiation, solar flares, subatomic particles, micrometeoroids, unstable electrical power, irregular cooling and a host of other environmental factors.

“Spaceborne is unmodified, save for the addition of self-analysing and correcting, autonomous software; software which is designed to automatically ‘harden’ the computer as it self-learns and adjusts to the varying conditions of space, mitigating environmentally induced errors. Even without traditional ruggedising, our system still passed the 146 safety tests and certifications required to be NASA-approved for space.”
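HPE has not published the internal logic of that software, but the general idea of ‘hardening through software’ can be illustrated with a hypothetical sketch: monitor error indicators and dial performance back when conditions degrade. The metric, threshold and throttling action below are illustrative assumptions, not the actual Spaceborne code:

```python
import time

# Assumed threshold: correctable errors per polling interval (illustrative only).
ERROR_RATE_THRESHOLD = 5

def read_correctable_error_count():
    """Placeholder for reading ECC/memory error counters from the operating system."""
    return 0  # stub value for illustration

def throttle_compute(enabled):
    """Placeholder for lowering clock speeds or idling nodes."""
    print("throttling enabled" if enabled else "running at full speed")

def monitor_loop(poll_seconds=60):
    previous = read_correctable_error_count()
    while True:
        time.sleep(poll_seconds)
        current = read_correctable_error_count()
        # If error rates climb (for example during a radiation event), back off;
        # otherwise run at full performance.
        throttle_compute(enabled=(current - previous) > ERROR_RATE_THRESHOLD)
        previous = current
```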

Spaceborne includes the HPE Apollo 40 class systems with a high-speed High Performance Computing (HPC) interconnect running an open-source Linux operating system. Though there are no hardware modifications to these components, HPE created a unique water-cooled enclosure for the hardware and developed purpose-built system software to address the environmental constraints and reliability requirements of supercomputing in space.

The HPC nodes are loaded with advanced self-care software to oversee and protect the computer’s progress for the year-long experiment on the ISS. During this mission, HPE Apollo servers will continuously run compute- and data-intensive HPC benchmark tests in the changing environmental conditions and monitor factors such as power consumption.

Engineers will then compare the performance, runtime and results of these machines with the output of two identical Earth-based systems, which serve as Spaceborne’s ground-based twins. This will allow HPE to determine the effects of harsh environmental factors like radiation on HPC machines and adapt to them in real time. The objective is to enable the latest COTS supercomputers to be launched, as is, for use on long-range space voyages.
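The compute-intensive side of this kind of benchmarking can be sketched in a few lines. The example below, which assumes NumPy is available, measures sustained floating-point throughput with a dense matrix multiplication; it is an illustration of the technique, not the benchmark suite actually run on Spaceborne:

```python
import time
import numpy as np

def measure_gflops(n=2048, repeats=5):
    """Estimate sustained throughput from repeated dense matrix multiplications."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b  # roughly 2 * n**3 floating-point operations per multiply
    elapsed = time.perf_counter() - start
    total_ops = 2 * n**3 * repeats
    return total_ops / elapsed / 1e9  # gigaflops

print(f"Sustained throughput: {measure_gflops():.1f} GFLOPS")
```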

What will it do?

The goal of this powerful collaboration is to provide interplanetary missions with the latest supercomputing capabilities that support simulation, AI and real-time data collection and analysis, while reducing the latency and expense of transmitting data back to distant Earth.

Since the Spaceborne Computer is designed to address the in-situ needs of astronauts, and potential eventual space settlers, for experimental and other data processing capabilities, HPE will be running internationally recognised benchmark tests to simulate the wide variety of possible processing required. This includes compute-intensive, as well as I/O-intensive benchmark tests.
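The I/O-intensive side can be illustrated in a similarly minimal way: time writing a file and reading it back to estimate storage throughput. The file name and size below are illustrative assumptions, not parameters from the actual tests:

```python
import os
import time

def measure_io_mb_per_s(path="io_benchmark.tmp", size_mb=256):
    """Estimate sequential write and read throughput in MB/s."""
    chunk = os.urandom(1024 * 1024)  # 1 MiB of random bytes

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually reaches the disk
    write_seconds = time.perf_counter() - start

    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass
    read_seconds = time.perf_counter() - start

    os.remove(path)
    return size_mb / write_seconds, size_mb / read_seconds

write_rate, read_rate = measure_io_mb_per_s()
print(f"write: {write_rate:.0f} MB/s, read: {read_rate:.0f} MB/s")
```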

Simultaneously, two of NASA’s own standard HPC benchmark tests will be undertaken.

Says Setshogoe, “Our focus remains on off-the-shelf, standards-based, highest-performance servers that can run the wealth of applications anticipated for astronauts on their long-duration missions. Our intention is to achieve a ‘hardening through software’, rather than through additional, heavy and expensive hardware.”

What does it mean?

Today, most of the calculations needed for space research projects are still done on Earth, because computing capabilities in space are limited and transmitting data to and from space is a challenge. This approach works for exploration of the moon or in low Earth orbit (LEO), where astronauts can be in near real-time communication with Earth, but the further they travel, the larger the communication latencies become.

On a mission to Mars, it could take up to 20 minutes for communications to reach Earth, and then another 20 minutes for responses to reach the astronauts. Such a long communication lag makes any on-the-ground exploration mission challenging and potentially dangerous, especially if astronauts are met with mission-critical scenarios where immediate communication with Earth is essential.
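Those figures line up with a rough one-way light-time calculation, which can be sketched with round numbers (the distances below are approximate published values, used purely for illustration):

```python
# One-way light travel time from Mars to Earth at closest and farthest approach.
SPEED_OF_LIGHT_KM_S = 299_792  # kilometres per second

for label, distance_km in [("closest approach", 54_600_000),
                           ("farthest approach", 401_000_000)]:
    minutes = distance_km / SPEED_OF_LIGHT_KM_S / 60
    print(f"{label}: about {minutes:.0f} minutes one way")
# -> closest approach: about 3 minutes one way
# -> farthest approach: about 22 minutes one way
```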

Setshogoe says, “A mission to Mars will require sophisticated onboard computing resources that are capable of extended periods of uptime. To meet these requirements, we need to improve technology’s viability in space in order to better ensure mission success. By sending a supercomputer to space, HPE is taking the first step in that direction.”

He adds that the goals of the mission are not reserved purely for space travel. Should the mission prove successful, the experiment will demonstrate that HPE equipment is well-suited for any terrain, environment or project back home, on Earth.

“For both the local and international market, this means that operations in military, mining and other volatile environments will be able to use an HPE device to compute on real-time data at the edge and stay connected. For example, an engineer undertaking seismic testing can rely on HPE equipment to withstand tremors, eruptions and more, reliably computing, analysing and transmitting data directly from the source,” explains Setshogoe.

“This dramatically shortens typical communication times and, with real-time analytics, could open up a world of possibilities for disaster prediction; it can even mean the difference between life and death.”

Setshogoe adds that this level of data-at-the-edge computing will allow businesses from every industry to reliably compute at the speed of thought, enabling them to make fast decisions and respond to business trends more quickly than ever before.

How does HPE fit in?

HPE, through subsidiary SGI, has been supplying NASA with computers for missions for well over thirty years. This mission cements that relationship, with the provision of Spaceborne.

“HPE has a long history of innovation and, while we are tremendously excited to be a part of this ground-breaking project, we are also not surprised. Being ‘the first’ is part of our DNA. We were the first to build industry-standard servers for the market. We shifted the paradigm from ‘closed’ to ‘open’ systems. We were the first to launch a computer that was memory-based and not just based on the Central Processing Unit (CPU).

“Innovation is – and always will be – what we do, at HPE. We want to better society, and work towards a future where everything is connected; where two people can connect across space with no time-lapse; where real-time space computing is not just something that you have to imagine, but is here, and now,” concludes Setshogoe.

Edited by Daniëlle Kruger
