Deep space dial-up: How NASA speeds up its interplanetary communications

By Jacek Krywko

On November 26, 2018 at 2:52:59 ET, NASA did it again—the agency’s InSight probe successfully landed on Mars after an entry, descent, and landing maneuver later dubbed "six and a half minutes of terror.” The moniker fits: given the roughly 8.1-minute one-way communications delay between Earth and Mars at the time, NASA engineers couldn't know right away whether the spacecraft had made it safely down to the surface. During that window, InSight couldn't rely on its more modern, high-powered antennas—instead, everything depended on old-fashioned UHF communications (the same method long utilized in everything from TV antennas and walkie-talkies to Bluetooth devices).

Eventually, critical data concerning InSight's condition was transmitted in 401.586MHz radio waves to two CubeSats called WALL-E and EVE, which in turn relayed the data at 8Kbps back to huge 70-meter antennas on Earth. The CubeSats had been launched on the same rocket as InSight, and they followed along on the trip to Mars in order to observe the landing event and send back data immediately. Other Mars orbiters like the Mars Reconnaissance Orbiter (MRO) were out of position and couldn't initially provide real-time communications with the lander. That’s not to say the entire landing coverage hinged on two experimental CubeSats (each the size of a briefcase), but the MRO would have relayed InSight's landing data only after a further delay.

InSight’s landing truly put NASA’s communications architecture—called the Mars Network—through its paces. The signal the InSight lander beamed back at relay orbiters was sure to reach Earth even if one or more of the orbiters failed. WALL-E and EVE were there to pass information through immediately, and they did just that. If those CubeSats hadn't worked for some reason, the MRO was ready to step in. Each piece worked as a node in an Internet-like network, making it possible to route packets of data through multiple terminals built with different kinds of hardware. Right now, the most efficient tool is the MRO spacecraft, which can relay data at a maximum rate of 6Mbps (a current record for planetary missions). But NASA had to work with much less communications muscle in the past—and it’s going to need much more in the future.

Just like your local ISP, NASA allows Internet users to check the real-time connectivity between Earth and its various space explorers.

The Deep Space Network

As NASA has increased its footprint in space, better space communications systems have steadily appeared to extend coverage: first to low-Earth orbit, then to geosynchronous orbit and the Moon, and soon farther into deep space. It started with crude portable radio tracking stations deployed by the US Army in Nigeria, Singapore, and California to receive telemetry data from Explorer 1, the first artificial satellite the US successfully launched into orbit back in 1958. And slowly but surely, that foundation evolved into the advanced communications systems of today.

Douglas Abraham, a Strategic and Systems Forecasting Lead at NASA's Interplanetary Network Directorate, highlights the three independently developed space communication networks in use today. The Near Earth Network supports spacecraft in low-Earth orbit. "It's a collection of antennas, mostly between 9 and 12 meters. There are a few larger ones that are 15 and 18 meters,” says Abraham. Next, slightly above geosynchronous Earth orbit, there are several Tracking and Data Relay Satellites (TDRS). "These can look down at low-Earth orbiters and communicate with them, and this information then gets relayed from the TDRS satellites to the ground,” Abraham explains. “That's the Tracking and Data Relay Satellite System, generally known as NASA's Space Network.”

But even the TDRS was not enough to communicate with spacecraft flying way beyond the Moon to other planets. "So we had to build a network covering the entire Solar System. This is the Deep Space Network,” says Abraham. The Mars Network is an extension of the DSN.

Given its reach and ambitions, the DSN is the most complicated of these systems. At its core, the DSN is a collection of very large antennas measuring from 34 to 70 meters in diameter. Multiple 34-meter antennas and one 70-meter antenna operate at each of the three DSN sites. One site is located at Goldstone, California, another sits outside of Madrid, Spain, and the third resides outside of Canberra, Australia. These facilities are placed approximately 120 degrees apart around the globe to ensure 24/7 coverage for all spacecraft beyond geosynchronous orbit.

The 34-meter antennas are the DSN's daily drivers, and they come in two variants: older high-efficiency antennas and relatively modern beam waveguide antennas. The difference is that the beam waveguide version has five precision radio frequency mirrors that reflect signals along a tube to a control room below ground, where the electronics analyzing those signals are better shielded from all sources of interference. Working separately, or in arrays of two or three dishes, the 34-meter antennas can close most of the links NASA needs closed. But for special occasions when the distance is too great even for a few 34-meter antennas working together, the people running the DSN turn to their 70-meter behemoths.

"They are important in several situations,” Abraham says of the larger antennas. The first is when a spacecraft is so far from Earth that it would be impossible to close the link with a smaller dish. "The New Horizons mission, which currently is way past Pluto, or the Voyager spacecraft, which is beyond the Solar System, are good examples. Only 70-meter antennas can get through to them and get their data back to Earth,” Abraham explains.

The 70-meter dishes are also used when a spacecraft can't communicate with its high-gain antenna, either because of a planned critical event like an orbit insertion or because something has just gone terribly wrong. A 70-meter antenna was used to safely bring Apollo 13 back to Earth, for instance. It also received Neil Armstrong's famous "That's one small step for a man, one giant leap for mankind" message. Even today, the DSN is the most advanced and sensitive telecommunications system in the world. "But for a number of reasons, it is close to its limits,” Abraham warns. “There's not much room to improve the radio frequency technology the DSN relies on. We're running out of low-hanging fruit to go for."

Listing image by NASA Ames


This is not a new phenomenon. The history of deep space communications is a constant struggle to get the frequencies higher and the wavelengths shorter. Explorer 1 used radio frequencies reaching 108MHz. Then NASA implemented larger antennas with better gain that supported L band frequencies, between 1 and 2GHz. Next came the S band, between 2 and 4GHz, and then the agency moved to the X band, between 7 and 11.2GHz.
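That climb through the bands is easy to quantify, since wavelength is just the speed of light divided by frequency. A quick sketch (the representative mid-band frequencies here are illustrative picks, not figures from the article):

```python
# Wavelength for each band in NASA's progression: lambda = c / f.
# Shorter wavelength is what ultimately enables higher data rates.
C = 299_792_458  # speed of light, m/s

bands = {
    "Explorer 1 (108MHz)": 108e6,
    "L band (~1.5GHz)": 1.5e9,
    "S band (~3GHz)": 3e9,
    "X band (~9GHz)": 9e9,
    "Ka band (~33GHz)": 33e9,
}

for name, freq in bands.items():
    print(f"{name}: wavelength ~{C / freq * 100:.2f} cm")
```

Running this shows the wavelength shrinking from almost three meters for Explorer 1 down to under a centimeter in the Ka band.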

Today’s space communications systems are again in the process of moving—this time to the 26-40GHz Ka band. "The reason behind this trend is that going for shorter wavelengths and higher frequencies allows much higher data rates,” says Abraham.

There’s reason for optimism given that NASA’s historic pace of communications development has been quite rapid. A 2014 research paper from NASA’s Jet Propulsion Lab provides throughput figures for perspective: if we used Explorer 1 communications technology to send a typical iPhone photo from Jupiter to Earth, for example, it would take the age of the Universe multiplied by 460 for the image to get through. Pioneers 2 and 4 from the early 1960s would need more than 633,000 years for the same task. Mariner 9 from 1971 would do this in 55 hours. But at present, Mars Reconnaissance Orbiter could make it in three minutes.

The only problem, of course, is that the amount of data generated by space missions has been growing just as fast as these communications capabilities, if not faster. Voyagers 1 and 2 produced 5TB of data over 40 years of operation. The NISAR Earth science satellite, scheduled for launch in 2020, is expected to produce 85TB of data per month. While that cache is perfectly doable for Earth orbiters, transferring that much data from one planet to another is an entirely different story. Even the relatively fast Mars Reconnaissance Orbiter would need around 20 years to send those 85TB to Earth.
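The 20-year figure is easy to sanity-check with back-of-envelope arithmetic, assuming (generously) that the orbiter transmits around the clock at the roughly 1Mbps an MRO-class spacecraft manages at maximum Mars distance:

```python
# Back-of-envelope: how long to downlink one month of NISAR output (85TB)
# across interplanetary distance? Assumes continuous transmission.
def transfer_time_years(data_bytes, rate_bps):
    seconds = data_bytes * 8 / rate_bps
    return seconds / (365.25 * 24 * 3600)

data = 85e12  # 85TB in bytes

print(transfer_time_years(data, 1e6))  # ~1Mbps at max Mars distance: ~21.5 years
print(transfer_time_years(data, 6e6))  # MRO's 6Mbps peak: still ~3.6 years
```

Even at MRO's record 6Mbps peak rate, a single month of NISAR-class data would tie up the link for years.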

"Projected data rates for the human Mars exploration era in the late 2020s or early 2030s are 150Mbps or higher, so let's do the math,” Abraham says. “If an MRO-class spacecraft at maximum Mars distance can send roughly 1Mbps to a 70-meter antenna on Earth, then closing a 150Mbps link would take an array of 150 70-meter antennas. Sure, we can come up with some clever ways to bring that absurd number down a little, but the problem is obviously there: closing a 150Mbps link at interplanetary distances is extremely difficult. Plus, we're running out of allocated radio frequency spectrum.”
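Abraham's antenna arithmetic can be written down directly. To first order, arraying N identical dishes multiplies the achievable data rate by N, so:

```python
import math

# Abraham's link-budget arithmetic: if one 70-meter antenna closes roughly
# 1Mbps from an MRO-class spacecraft at maximum Mars distance, how many
# dishes does a 150Mbps link take? (Assumes arraying gain adds linearly,
# which is a first-order simplification.)
def antennas_needed(target_rate_bps, rate_per_antenna_bps):
    return math.ceil(target_rate_bps / rate_per_antenna_bps)

print(antennas_needed(150e6, 1e6))  # the "absurd" 150 seventy-meter dishes
```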

As Abraham points out, operating at either S or X band, a single mission with just a 25Mbps downlink would use up all the available spectrum. The Ka band has more headroom, but even there, just two 150Mbps Mars orbiters would exceed the spectrum allocation. Simply put, the interplanetary Internet is going to need more than just radio to work—it's going to rely on lasers.

The rise of optical communications

Lasers may sound futuristic, but the idea of optical communications can be traced back to a patent filed by Alexander Graham Bell in the 1880s. Bell designed a system where sunlight focused into a very narrow beam was directed onto a reflective diaphragm that vibrated in response to sound. The vibrations caused variations in light going through a lens to a crude photodetector. Changes in the photodetector's resistance then varied the current flowing through a telephone.

The system was unreliable, the volume was very low, and Bell eventually dropped the idea. But after nearly a century, armed with lasers and fiber optics, NASA communications engineers have picked this old Bell concept back up.

"We knew radio frequency systems had their limitations, so at NASA's JPL in the late 1970s and early 1980s, we began discussing deep space communications via free-space lasers,” Abraham tells Ars. To get a good understanding of what was possible and what wasn’t in deep space optical communications, NASA's JPL commissioned a four-year study called the Deep Space Relay Satellite System (DSRSS) in the late 1980s. The study was meant to answer some critical questions: what about the weather and visibility issues (since radio waves can easily get through the clouds but laser beams cannot)? What if the Sun-Earth-Probe angle gets too narrow? Will a detector at Earth discern a weak optical signal from sunlight? Last but not least, how much would it all cost and would it be worth the price? "We're still in the process of answering those questions,” Abraham admits. “But the answers start to look more and more in favor of optical communications.”

The DSRSS study assumed that a site above the Earth's atmosphere would be best for both radio and optical links. It claimed that an optical communications system installed at an in-orbit facility would beat any ground-based architecture, including the iconic 70-meter antennas. The orbital facility was to be a 10-meter aperture deployed in low-Earth orbit and then boosted up to geosynchronous Earth orbit. But the cost of the entire system—consisting of the satellite with the aperture onboard, a rocket to launch it, and five user terminals—was extremely high. Moreover, the study didn't even include the cost of the necessary backup system that would take over in case of the satellite's failure.

As such, the communications people at JPL started to look at a ground-based architecture described in the Ground Based Advanced Technology Study (GBATS), an analysis done internally at JPL at roughly the same time as the DSRSS (Abraham references both in his own more recent papers). People working on the GBATS project put forward two alternative proposals. The first envisioned six stations, each a 10-meter aperture with a 1-meter secondary, located roughly 60 degrees apart around the equatorial region. The stations were to be built on mountaintops at locations with at least 66 percent of their days cloud-free. This way, two or three stations would always be in view of any spacecraft, ensuring weather diversity. The second option proposed nine stations clustered in groups of three located 120 degrees apart. The stations within a group were to be placed 200km apart from each other, far enough to sit in different weather cells while keeping mutual line of sight.
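The value of that weather diversity is straightforward probability. As a rough sketch, assuming each site is cloud-free 66 percent of the time and (optimistically) that weather at the sites is independent:

```python
# Why station diversity helps: the chance that at least one of the
# stations currently in view is cloud-free rises quickly with count.
# Assumes independent weather across sites, an idealization.
def availability(p_clear, stations_in_view):
    return 1 - (1 - p_clear) ** stations_in_view

for n in (1, 2, 3):
    print(f"{n} station(s) in view: {availability(0.66, n):.1%} chance of a clear path")
```

With three 66-percent sites in view, the odds of at least one clear path climb to roughly 96 percent, which is the intuition behind both GBATS layouts.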

Both GBATS architectures were cheaper than the space-based approach, but they had issues, too. First, because the signals had to pass through the Earth's atmosphere, daytime reception was noticeably worse than nighttime reception thanks to the bright daytime sky background. Despite the clever placement, optical ground stations would still be susceptible to weather. A spacecraft pointing its laser at a ground station would ultimately have to adapt to weather outages and go through the pointing procedure all over again to connect with another station that wasn't covered by clouds.

Regardless of these issues, the DSRSS and GBATS studies laid down the theoretical foundation for deep space optical communications systems and shaped the way NASA engineers think about them today. The only remaining thing to do was to build such a system and show it can work. Luckily, we're mere months away from this getting done.


From the LRO back in 2013: To clean up transmission errors introduced by Earth's atmosphere (left), Goddard scientists applied Reed-Solomon error correction (right), which is commonly used in CDs and DVDs. Typical errors include missing pixels (white) and false signals (black). The white stripe indicates a brief period when transmission was paused.

There have already been some early demonstrations of optical communications in space. The very first came back in 1992, when the Galileo probe, on its way to Jupiter, turned its high-resolution imaging camera back at Earth and successfully received a set of laser pulses sent from the 60-centimeter telescope at JPL's Table Mountain Observatory in California and from the 1.5-meter telescope at the USAF Starfire Optical Range in New Mexico. At that point, Galileo was 870,000 miles away from Earth, but both laser beams got through to its camera.

The Japanese and European space agencies have also managed to close optical links between their ground-based sites and satellites orbiting the Earth, and they have even established optical communications between one satellite and another at 50Mbps. A few years ago, a German research team closed a coherent, bidirectional 5.6Gbps optical link between the Near Field Infrared Experiment (NFIRE) satellite in low-Earth orbit and a ground station on Tenerife, Spain. Still, all of this involved near-Earth communications.

The very first optical link connecting a ground station on Earth and a spacecraft orbiting another body in the Solar System was established in January 2013, when a black-and-white image of the Mona Lisa measuring 152 by 200 pixels was transmitted from the Next Generation Satellite Laser Ranging station at NASA's Goddard Space Flight Center to the Lunar Reconnaissance Orbiter (LRO) at 300bps. It was a one-way link; the LRO sent the received image back to Earth via its standard radio. The Mona Lisa needed some software error correction, but even without that coding it was easily recognizable. And at the time, a much more capable system was already scheduled to reach the Moon.

The Lunar Atmosphere and Dust Environment Explorer (LADEE) entered the Moon's orbit on October 6, 2013, and just a week later it fired up its pulsed laser for data transmission. This time, NASA attempted a two-way connection with a 20Mbps uplink and a record-breaking 622Mbps downlink. The only issue was the missions’ short lifetimes: the LRO's optical link lasted for just a few minutes, and LADEE communicated with its laser for about 16 hours in total over a 30-day period. This is about to change with the Laser Communications Relay Demonstration (LCRD) satellite, scheduled for launch in June 2019. Its mission is to show how space communication systems will work in the future.

Developed by NASA's JPL in cooperation with MIT's Lincoln Laboratory, the LCRD will have two optical terminals onboard: one for near-Earth communications and one for deep space. The near-Earth version is designed to use so-called Differential Phase Shift Keying (DPSK). The transmitter will send laser pulses at a rate of 2.88GHz, and each bit will be encoded in the phase difference between consecutive pulses. DPSK can reach a 2.88Gbps data rate, but it requires lots of energy to work: detectors can discern the phase difference between separate pulses only in relatively high-energy signals. So while DPSK works brilliantly for near-Earth communications, it's not the best method for deep space, where energy becomes a big issue. A signal sent from Mars is quite energy-starved by the time it reaches Earth, which is why for the deep space optical link demonstration the LCRD will use a second terminal working with the more energy-efficient Pulse Position Modulation technology.
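The core idea of DPSK, encoding each bit in the phase difference between consecutive pulses, can be shown in a toy sketch (this is an illustration of the principle, not NASA's implementation):

```python
import math

# Toy differential phase-shift keying: each bit rides on the phase
# *difference* between consecutive pulses, so the receiver only compares
# neighboring pulses instead of tracking absolute phase.
def dpsk_encode(bits):
    phases = [0.0]  # reference pulse
    for b in bits:
        # a 1 flips the phase by pi relative to the previous pulse
        phases.append((phases[-1] + (math.pi if b else 0.0)) % (2 * math.pi))
    return phases

def dpsk_decode(phases):
    bits = []
    for prev, cur in zip(phases, phases[1:]):
        diff = (cur - prev) % (2 * math.pi)
        bits.append(1 if abs(diff - math.pi) < 1e-9 else 0)
    return bits

message = [1, 0, 1, 1, 0]
assert dpsk_decode(dpsk_encode(message)) == message
```

The catch, as the article notes, is that distinguishing those phase differences requires pulses energetic enough for the detector to measure them reliably.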

"It's basically photon counting,” explains Abraham. "A brief period allotted for communication is divided down into multiple time slots. To get your data, you just check whether there were photons hitting a detector in each individual time slot or not. That's how you encode data in the PPM.” You can think of this a bit like the Morse code, only at a super fast pace. Either there is a flash at a given moment or there isn't, and the message is encoded in the sequence of flashes. "Even though it is way slower than the DPSK, we still can get an optical link of tens to hundreds of Mbps at Mars distance,” Abraham adds.
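Abraham's Morse-code analogy translates almost directly into code. Here is a toy PPM encoder/decoder, assuming 16 slots per frame and a bit count that divides evenly into 4-bit chunks (slot counts and framing are illustrative, not the LCRD's actual parameters):

```python
# Toy pulse-position modulation: each symbol is a frame of M time slots
# containing exactly one "photon" pulse; the slot index carries log2(M)
# bits. Assumes len(data) is a multiple of BITS.
M = 16                      # slots per frame
BITS = M.bit_length() - 1   # log2(M) = 4 bits per detected pulse

def ppm_encode(data):
    frames = []
    for i in range(0, len(data), BITS):
        slot = int("".join(map(str, data[i:i + BITS])), 2)
        frame = [0] * M
        frame[slot] = 1     # the one slot in which the laser fires
        frames.append(frame)
    return frames

def ppm_decode(frames):
    data = []
    for frame in frames:
        slot = frame.index(1)   # which slot saw photons
        data += [int(b) for b in format(slot, f"0{BITS}b")]
    return data

bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert ppm_decode(ppm_encode(bits)) == bits
```

Note the energy economy: one detected pulse delivers four bits, which is why PPM suits power-starved deep space links even though it is slower than DPSK.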

Of course, there’s more to the LCRD mission than just those two laser terminals. The LCRD is also designed to work as a space Internet node. On the ground, there are going to be three stations working with the LCRD: one located at White Sands Complex in New Mexico, one at the Table Mountain observatory in California, and one on the Big Island of Hawaii or in Maui. The idea is to test switching from one ground station to another in case of bad weather at one of the sites. The mission will also test how the LCRD works as a data relay. An optical signal sent from one of the stations will go to the satellite and get relayed back to the other station, all via an optical link.

If immediate transmission proves impossible, the LCRD will store the data and send it when the opportunity presents itself. If the data is time-sensitive or there is not enough storage available onboard, the LCRD will send it immediately via its Ka band radio antenna. So as a forerunner of future relay satellites, the LCRD is going to be a hybrid radio-optical system. And that’s exactly the kind of solution NASA needs to place in Mars orbit to establish an interplanetary network supporting deep space human exploration in the 2030s.
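The relay's decision logic, as described, boils down to a simple priority scheme. A hedged sketch, with all function names and thresholds invented for illustration:

```python
# Sketch of the hybrid relay logic described above: prefer the optical
# link, fall back to onboard storage, and use the Ka band radio when the
# data is time-sensitive or storage is short. Names are hypothetical.
def route_packet(optical_up, time_sensitive, storage_free_bytes, size_bytes):
    if optical_up:
        return "send via optical link"
    if time_sensitive or size_bytes > storage_free_bytes:
        return "send via Ka band radio"
    return "store onboard, forward when the optical link returns"

# Optical link down, data not urgent, plenty of storage:
print(route_packet(False, False, 10_000_000, 2_000))
```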

Getting Mars online

Within the last year, Abraham's team submitted two studies covering the future of deep space communications to the SpaceOps Conference held in Marseille, France, in May. One was about the DSN in general; the other, titled "Mars Planetary Network for Human Exploration Era – Potential Challenges and Solutions,” offered a detailed description of infrastructure capable of providing an Internet-like service to astronauts on the Red Planet.

The estimated peak data rates turned out to be more or less 215Mbps for downlink and 28Mbps for uplink. The Martian Internet will actually consist of three networks: a Wi-Fi network covering the exploration area on the surface, the planetary network relaying data from the ground to Earth, and the Earth network, in this case the DSN, with its three sites responsible for receiving this data and communicating back to Mars.

"There are lots of issues to think about in designing such infrastructure. It has to be consistent and reliable, even at the maximum Mars distance of 2.67AU and in the periods of superior solar conjunction when the Red Planet hides behind the Sun,” Abraham says. Such conjunctions happen every two years and entirely disrupt communication links with Mars. "Today, there is no way around it. All landers and orbiters we have at Mars simply stop communicating with Earth for roughly two weeks. With optical links, outages caused by solar conjunctions will be even longer, in the range of 10 to 15 weeks.” That time frame is not that big of a deal for robots. Prolonged isolation doesn't bother them; they don't get bored or lonely, and they don't need to see their loved ones. The story is unfortunately different for humans.

"So we postulate deployment of two relay orbiters that should be placed in a circular, equatorial orbit about 17,300km above the surface of Mars,” Abraham continues. According to the study, such orbiters should weigh an estimated 1,500kg at launch and carry a set of X band, Ka band, and optical terminals powered by 20-to-30kW solar arrays. They should support the Delay Tolerant Networking protocol, essentially a version of TCP/IP designed to cope with the huge latencies and link disruptions that are sure to occur in interplanetary networks. Participating orbiters should be able to communicate with astronauts and vehicles on the surface, with ground stations on Earth, and with each other.
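The key trick in Delay Tolerant Networking is store-and-forward custody: no single moment requires the whole route to be connected, because each node holds a data bundle until its next hop is reachable. A minimal sketch of the idea, with all node names hypothetical:

```python
# Minimal sketch of DTN-style store-and-forward: each node takes custody
# of a bundle and holds it until its next hop becomes reachable, so
# end-to-end connectivity is never required at any single instant.
def deliver(bundle, route, link_up):
    """link_up(a, b) -> True when the a->b link is currently open."""
    log = []
    for node, next_hop in zip(route, route[1:]):
        if not link_up(node, next_hop):
            # a real DTN stack parks the bundle here under custody
            log.append(f"{node}: storing {bundle} until {next_hop} is reachable")
        log.append(f"{node} -> {next_hop}: {bundle}")
    return log

route = ["surface network", "relay orbiter", "DSN station"]
for entry in deliver("science-data", route, lambda a, b: a != "relay orbiter"):
    print(entry)
```

In this run, the orbiter's downlink is closed, so the bundle waits under the orbiter's custody before moving on; nothing is lost in the gap.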

"This cross-link capability is very important because it brings down the number of antennas we're going to need to deal with 250Mbps transmissions,” says Abraham. His team estimates that receiving a 250Mbps transmission from one of the relay orbiters would take an array of six 34-meter antennas, meaning NASA would have to build three additional antennas at each of the DSN sites, and those things take years to build and are super expensive. "But we think that two orbiters can divide the data between them and send it simultaneously at 125Mbps, with one orbiter sending one half of the package and the other one the other half,” says Abraham. Even today, the 34-meter DSN antennas can receive signals from up to four different spacecraft at the same time, a trick that brings the number of necessary antennas down to three. "Receiving two 125Mbps transmissions from the same part of the sky takes the same number of antennas as receiving one such transmission,” explains Abraham. "More antennas are needed only when you need to close one link but at a higher data rate."
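The arraying arithmetic behind that trick works out as follows, assuming (as a first-order simplification) that required dish count scales linearly with data rate and that one array can receive several spacecraft at once:

```python
import math

# Abraham's antenna arithmetic: six 34-meter dishes array to close one
# 250Mbps link. Because a 34-meter array can receive up to four spacecraft
# simultaneously, two 125Mbps streams from the same part of the sky can
# share one smaller array instead of each needing its own.
def dishes_for(rate_bps):
    # six dishes per 250Mbps, required aperture scaling with rate
    return math.ceil(rate_bps * 6 / 250e6)

print(dishes_for(250e6))                            # one 250Mbps link: 6 dishes
print(max(dishes_for(125e6), dishes_for(125e6)))    # two shared 125Mbps links: 3 dishes
```

Splitting the traffic across two orbiters halves the per-link rate, and the shared array cuts the new-antenna requirement from three extra dishes per site to zero beyond the three needed for one 125Mbps link.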

To deal with superior solar conjunctions, Abraham's team proposed the deployment of an intermediate relay satellite in the Sun-Mars/Sun-Earth L4/L5 orbit. This way, in conjunction periods, it could be used to route the data around the Sun rather than sending the signals right through it. Sadly, the data rate on such occasions will fall to somewhere around 100Kbps. So, to put it plainly, it will suck, but it will work.

Right now, a future NASA astronaut on Mars is going to have to wait a hair above three minutes for a cat pic to get through, not counting latency, which in this case can be up to 40 minutes. Luckily, until human spaceflight’s ambitions inevitably push farther than even the Red Planet, the impending interplanetary Internet should work just fine most of the time.

Jacek Krywko is a science and technology writer based in Warsaw, Poland. He covers space exploration and artificial intelligence research, and he has previously written for Ars about facial-recognition screening, teaching AI-assistants new languages, and NASA's use of AI in space.