Morse Code’s Vanquished Competitor: The Dial Telegraph


Over the years, I’ve played with interactive telegraph exhibits in science centers and museums. I can tap out the common ••• – – – ••• of the emergency distress signal, and I know the letters H (••••) and E (•), but beyond that, Morse code’s patterns of dots and dashes run together in my brain. Stories of telegraph operators who could decipher hundreds of characters a minute still amaze me.

Recently, though, I learned about the needle telegraph. On both the sending and receiving end, the needle or needles would simply point to the desired letter. Finally, a user-friendly telegraph system, provided the user knew how to read.

The first needle telegraph was patented by William Cooke and Charles Wheatstone in Britain in 1837. The design used a set of magnetic needles arranged in a row, with letters of the alphabet arranged above and below them in a diamond grid pattern. Each needle could point left, right, or neutral; to indicate a letter, two needles would point so as to outline a path to that letter. The sending operator controlled the direction of the needles by pressing buttons that closed the circuits for the desired letter combination.

Although any number of needles could be used, Cooke and Wheatstone recommended five. This combination allowed for 20 possible characters. They omitted the letters C, J, Q, U, X, and Z. Early telegraphs were mainly used for transmitting simple signals, rather than discussion-style communication. For example, to indicate whether a one-way tunnel was clear, an operator might send the short message “wait” or “go ahead.” So the absence of a few letters wasn’t a huge shortcoming.
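
To get a feel for the scheme, here is a minimal Python sketch of five-needle signalling. The pair-counting logic follows the numbers above; the specific letter-to-needle assignments are illustrative, not the historical grid layout.

```python
from itertools import combinations

NEEDLES = 5

# Each codeword deflects two of the five needles so that their inclined
# directions converge on a letter in the diamond grid, either above or
# below the row of needles: 10 pairs x 2 halves = 20 possible letters.
codewords = []
for left, right in combinations(range(NEEDLES), 2):
    codewords.append((left, right, "above"))
    codewords.append((left, right, "below"))

# Cooke and Wheatstone dropped C, J, Q, U, X, and Z to fit 20 letters.
alphabet = [c for c in "ABCDEFGHIJKLMNOPQRSTUVWXYZ" if c not in "CJQUXZ"]
encode = dict(zip(alphabet, codewords))

def send(message):
    """Yield the needle deflections the sender's buttons would select."""
    for ch in message.upper():
        if ch in encode:
            yield ch, encode[ch]

for letter, (i, j, half) in send("GO AHEAD"):
    print(f"{letter}: deflect needles {i} and {j}, converging {half} the row")
```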

Operators needed minimal training to use the system, which their employers appreciated. But the system was otherwise costly to operate because it required a wire for each needle plus an additional return wire that completed the circuit. Maintaining multiple wires proved expensive, and many British railroads adopted a version that used just one needle and two wires. A single-needle system, however, required that operators learn a code to send and receive signals. Gone was the ease of simply reading letters.

Cooke and Wheatstone must have realized there was room for improvement, because in 1840 they came out with a dial (or ABC) telegraph, whose face displayed all the letters of the alphabet. The operator selected the desired letter by pressing the appropriate button and turning the handle; the needle on the receiver’s dial would swing around to point to that letter. However, a dispute between the two inventors kept this version of the telegraph from being commercialized. Only after the 1840 patent had expired did Wheatstone return to the dial telegraph, eventually patenting several improvements.

Meanwhile, the French had been using an optical telegraph system that Claude Chappe developed during the French Revolution. It relied on semaphore signals transmitted along a line of towers. By 1839, Alphonse Foy was in charge of over 1,000 optical-telegraph operators, but he saw the need to investigate the growing development of electric telegraphs. He sent Louis-François Breguet to England to study Cooke and Wheatstone’s needle telegraph. The first result was the Foy-Breguet telegraph, which used two needles that mimicked semaphore signals.

Watch and Learn: French watchmaker Louis-François Breguet studied designs for the needle telegraph before devising his own dial telegraph. Image: Classic Image/Alamy

Breguet was manager of his family’s watchmaking company, Breguet & Fils, and not long after, he developed a dial telegraph that had both the appearance and the working mechanism of a clock [receiver shown at top]. When activated by an electric current from the sender, a spring connected by gears rotated the needle around the dial; an escapement—the toothed-wheel mechanism that in a clock moves the hands forward—kept the needle in place in the absence of a signal.

Breguet divided the face into 26 slots, with an inner ring of numbers and an outer ring of letters. The starting position was at the top, noted by a cross, leaving room for 25 letters. At the end of each word, the needle would return to the starting position. Some versions omitted the letter W; others omitted the letter J.
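
The clockwork logic lends itself to a short sketch: the receiver’s needle advances one slot per impulse, so each character is just a step count around the ring. This model is a simplification, assuming one-directional stepping and the 25-letter variant that omits J.

```python
DIAL = ["+"] + list("ABCDEFGHIKLMNOPQRSTUVWXYZ")  # cross, then 25 letters (no J)

def steps_for(message):
    """Yield the escapement steps needed to reach each character in turn."""
    pos = 0  # the needle starts at the cross
    for ch in message.upper():
        target = 0 if ch == " " else DIAL.index(ch)  # word gap: back to the cross
        yield ch, (target - pos) % len(DIAL)
        pos = target

for ch, n in steps_for("GO AHEAD"):
    print(f"{ch!r}: advance the needle {n} step(s)")
```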

After French railroads adopted the Breguet telegraph and made it standard equipment, it became known as the French railway telegraph; it remained in use until the end of the century. Breguet’s system was also exported to Japan, connecting Tokyo and Yokohama as well as Osaka and Kobe. A new face for the telegraph incorporated Japanese katakana characters.

Big in Japan: This print depicts a Breguet system in use at the Yokohama telegraph office. The man in Western-style clothing is Scottish engineer George Miles Gilbert, who was hired by the Japanese government to oversee the introduction of telegraphy. Photo: Postal Museum Japan

Of course, even Breguet’s dial telegraph was limited in the range of characters it could transmit. Operators of the needle and dial telegraphs had to somehow deal with missing letters—perhaps they just made their best guess based on context, or perhaps companies devised their own codes for specific letters or symbols. Louis-François Breguet couldn’t properly transmit the cedilla in his own name, but maybe he accepted it as a limitation of the technology.

As it happens, as early as the 1840s, Friedrich Clemens Gerke, the telegraph inspector for the Hamburg-Cuxhaven line in Germany, was noting similar shortcomings with Morse code. The code, developed by Samuel Morse and Alfred Vail in the United States, was fine for the unaccented English alphabet. To accommodate European languages, Gerke added accented letters; he also significantly revised the patterns of dots and dashes for letters and numbers, making the entire code more efficient to transmit. His version, which became known as Continental Morse Code, spread throughout Europe.
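
The efficiency claim is easy to check numerically with the standard timing rules of the International code that descends from Gerke’s revision: a dot lasts 1 unit, a dash 3, with gaps of 1, 3, and 7 units within letters, between letters, and between words. A minimal sketch (the letter subset is just for the demo):

```python
MORSE = {"P": ".--.", "A": ".-", "R": ".-.", "I": "..", "S": "...",
         "E": ".", "H": "...."}  # subset; E and H are this article's examples

def duration(word):
    """Transmission time of one word, in dot units (word gap excluded)."""
    total = 0
    for i, ch in enumerate(word.upper()):
        if i > 0:
            total += 3                      # gap between letters
        symbols = MORSE[ch]
        total += sum(1 if s == "." else 3 for s in symbols)
        total += len(symbols) - 1           # gaps between dots and dashes
    return total

# "PARIS" takes 43 units (50 with the trailing word gap), the reference
# word still used to define telegraphers' words-per-minute speeds.
print(duration("PARIS"), duration("E"), duration("H"))  # 43 1 7
```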

Despite the expanded code’s popularity, the International Telegraphic Union took many years to embrace it. In his 2017 book The Chinese Typewriter: A History, Thomas Mullaney describes the slow, conservative evolution of Morse code. In 1865, the ITU settled on a set of standardized symbols that were decidedly Anglocentric. Three years later, it confirmed the standard codes for the 26 letters of the English alphabet, the numerals 0 to 9, plus 16 special characters—mostly punctuation, plus the e-acute, É. In 1875, the ITU elevated É to a standard character and added six more accented letters as special characters: Á, Å, Ä, Ñ, Ö, Ü. It wasn’t until 1903 that the ITU accepted these supplemental characters as standard. Languages based on nonalphabetic characters, such as Chinese, were never incorporated, although some countries adopted their own telegraphic codes. Thus did the technology of telegraphy connect and also divide the world in new and unexpected ways.

The Breguet telegraph receiver that touched off my inquiries is on display at the Museum of the School of Telecommunication Systems Engineering at the Technical University of Madrid. The museum was started in the 1970s by a small group of professors, who scoured antique shops and flea markets to collect artifacts representing the history of communications. Rather than confining its objects to a dedicated space, the museum maintains exhibit cases in hallways throughout the school, where students, visitors, and others can stumble upon them every day.

I wonder if those who see the Breguet dial telegraph draw connections to modern technology. The set of characters on computer keyboards, for example, varies from place to place and language to language. I remember attending a student conference in Istanbul in 1998 and being unable to access my email. I didn’t realize that Turkish keyboards have both a dotless and a dotted i key, and so I kept hitting the wrong one. A few months later I met students in Hamburg who were using American keyboards to do their computer programming. They’d discovered that German keyboards of the era required three keystrokes to make a semicolon, which slowed down their coding.

Such tales are good reminders of the persistence and the fluidity of language, which adapts to new technologies just as new technologies are molded by their users.

An abridged version of this article appears in the September 2018 print issue as “The ABCs of Telegraphy.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

There’s plenty of bandwidth available if we use reconfigurable intelligent surfaces

Ground level in a typical urban canyon, shielded by tall buildings, will be inaccessible to some 6G frequencies. Deft placement of reconfigurable intelligent surfaces [yellow] will enable the signals to pervade these areas.

For all the tumultuous revolution in wireless technology over the past several decades, there have been a couple of constants. One is the overcrowding of radio bands, and the other is the move to escape that congestion by exploiting higher and higher frequencies. And today, as engineers roll out 5G and plan for 6G wireless, they find themselves at a crossroads: After years of designing superefficient transmitters and receivers, and of compensating for the signal losses at the end points of a radio channel, they’re beginning to realize that they are approaching the practical limits of transmitter and receiver efficiency. From now on, to get high performance as we go to higher frequencies, we will need to engineer the wireless channel itself. But how can we possibly engineer and control a wireless environment, which is determined by a host of factors, many of them random and therefore unpredictable?

Perhaps the most promising solution, right now, is to use reconfigurable intelligent surfaces. These are planar structures typically ranging in size from about 100 square centimeters to about 5 square meters or more, depending on the frequency and other factors. These surfaces use advanced substances called metamaterials to reflect and refract electromagnetic waves. Thin two-dimensional metamaterials, known as metasurfaces, can be designed to sense the local electromagnetic environment and tune the wave’s key properties, such as its amplitude, phase, and polarization, as the wave is reflected or refracted by the surface. So as the waves fall on such a surface, it can alter the incident waves’ direction so as to strengthen the channel. In fact, these metasurfaces can be programmed to make these changes dynamically, reconfiguring the signal in real time in response to changes in the wireless channel. Think of reconfigurable intelligent surfaces as the next evolution of the repeater concept.

Reconfigurable intelligent surfaces could play a big role in the coming integration of wireless and satellite networks.

That’s important, because as we move to higher frequencies, the propagation characteristics become more “hostile” to the signal. The wireless channel varies constantly depending on surrounding objects. At 5G and 6G frequencies, the wavelength is vanishingly small compared to the size of buildings, vehicles, hills, trees, and rain. Lower-frequency waves diffract around or through such obstacles, but higher-frequency signals are absorbed, reflected, or scattered. Basically, at these frequencies, the line-of-sight signal is about all you can count on.

Such problems help explain why the topic of reconfigurable intelligent surfaces (RIS) is one of the hottest in wireless research. The hype is justified. A landslide of R&D activity and results has gathered momentum over the last several years, set in motion by the development of the first digitally controlled metamaterials almost 10 years ago.

This article was jointly produced by IEEE Spectrum and Proceedings of the IEEE with similar versions published in both publications.

RIS prototypes are showing great promise at scores of laboratories around the world. And yet one of the first major projects, the European-funded Visorsurf, began just five years ago and ran until 2020. The first public demonstrations of the technology occurred in late 2018, by NTT Docomo in Japan and Metawave, of Carlsbad, Calif.

Today, hundreds of researchers in Europe, Asia, and the United States are working on applying RIS to produce programmable and smart wireless environments. Vendors such as Huawei, Ericsson, NEC, Nokia, Samsung, and ZTE are working alone or in collaboration with universities. And major network operators, such as NTT Docomo, Orange, China Mobile, China Telecom, and BT, are all carrying out substantial RIS trials or have plans to do so. This work has repeatedly demonstrated the ability of RIS to greatly strengthen signals in the most problematic bands of 5G and 6G.

To understand how RIS improves a signal, consider the electromagnetic environment. Traditional cellular networks consist of scattered base stations that are deployed on masts or towers, and on top of buildings and utility poles in urban areas. Objects in the path of a signal can block it, a problem that becomes especially bad at 5G’s higher frequencies, such as the millimeter-wave bands between 24.25 and 52.6 gigahertz. And it will only get worse if communication companies go ahead with plans to exploit subterahertz bands, between 90 and 300 GHz, in 6G networks. Here’s why. With 4G and similar lower-frequency bands, reflections from surfaces can actually strengthen the received signal, as reflected signals combine. However, as we move higher in frequencies, such multipath effects become much weaker or disappear entirely. The reason is that surfaces that appear smooth to a longer-wavelength signal are relatively rough to a shorter-wavelength signal. So rather than reflecting off such a surface, the signal simply scatters.
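
A rule of thumb makes the scale concrete. By the classical Rayleigh criterion (a standard approximation, not a figure from this article), a surface reflects like a mirror only if its height variations stay below about λ/(8 cos θ) for incidence angle θ. A quick sketch:

```python
import math

C = 3e8  # speed of light, m/s

def smooth_limit_mm(freq_hz, incidence_deg=0.0):
    """Largest height variation (mm) at which a surface still looks smooth."""
    wavelength = C / freq_hz
    return 1e3 * wavelength / (8 * math.cos(math.radians(incidence_deg)))

for label, f in [("4G, 2 GHz", 2e9), ("5G mmWave, 28 GHz", 28e9),
                 ("6G sub-THz, 150 GHz", 150e9)]:
    print(f"{label}: flat to within {smooth_limit_mm(f):.2f} mm")
```

A brick or stucco wall with millimeter-scale texture easily passes the test at 2 GHz, where roughly 19 mm of variation is tolerable, but fails it at higher frequencies: the limit shrinks to about 1.3 mm at 28 GHz and 0.25 mm at 150 GHz.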

One solution is to use more powerful base stations or to install more of them throughout an area. But that strategy can double costs, or worse. Repeaters or relays can also improve coverage, but here, too, the costs can be prohibitive. RIS, on the other hand, promises greatly improved coverage at just marginally higher cost.

The key feature of RIS that makes it attractive in comparison with these alternatives is its nearly passive nature. The absence of amplifiers to boost the signal means that an RIS node can be powered with just a battery and a small solar panel.

RIS functions like a very sophisticated mirror, whose orientation and curvature can be adjusted in order to focus and redirect a signal in a specific direction. But rather than physically moving or reshaping the mirror, you electronically alter its surface so that it changes key properties of the incoming electromagnetic wave, such as the phase.

That’s what the metamaterials do. This emerging class of materials exhibits properties beyond (from the Greek meta) those of natural materials, such as anomalous reflection or refraction. The materials are fabricated using ordinary metals and electrical insulators, or dielectrics. As an electromagnetic wave impinges on a metamaterial, a predetermined gradient in the material alters the phase and other characteristics of the wave, making it possible to bend the wave front and redirect the beam as desired.

An RIS node is made up of hundreds or thousands of metamaterial elements called unit cells. Each cell consists of metallic and dielectric layers along with one or more switches or other tunable components. A typical structure includes an upper metallic patch with switches, a biasing layer, and a metallic ground layer separated by dielectric substrates. By controlling the biasing—the voltage between the metallic patch and the ground layer—you can switch each unit cell on or off and thus control how each cell alters the phase and other characteristics of an incident wave.

To control the direction of the larger wave reflecting off the entire RIS, you synchronize all the unit cells to create patterns of constructive and destructive interference in the larger reflected waves [see illustration below]. This interference pattern reforms the incident beam and sends it in a particular direction determined by the pattern. This basic operating principle, by the way, is the same as that of a phased-array radar.

A reconfigurable intelligent surface comprises an array of unit cells. In each unit cell, a metamaterial alters the phase of an incoming radio wave, so that the resulting waves interfere with one another [above, top]. Precisely controlling the patterns of this constructive and destructive interference allows the reflected wave to be redirected [bottom], improving signal coverage.
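
The steering math can be sketched in a few lines. In this toy model (the cell count, spacing, and angles are illustrative assumptions, not a real product’s parameters), each cell in a row first gets the continuous phase that makes its reflection arrive in phase at the target angle; the phases are then quantized to the two states a binary-coded cell can actually supply.

```python
import numpy as np

freq = 28e9                          # assumed mmWave carrier
lam = 3e8 / freq
k = 2 * np.pi / lam
x = np.arange(64) * lam / 2          # one row of 64 unit cells, half-wave spacing

theta_in, theta_out = np.radians(10), np.radians(35)  # incident and target angles

# Continuous phase profile that redirects the incident beam toward theta_out
ideal = -k * x * (np.sin(theta_in) + np.sin(theta_out))
# 1-bit quantization: each cell can only add a phase of 0 or pi
one_bit = np.round((ideal % (2 * np.pi)) / np.pi) % 2 * np.pi

def field_toward(phases, theta):
    """Magnitude of the summed cell reflections radiated toward angle theta."""
    path = k * x * (np.sin(theta_in) + np.sin(theta))
    return abs(np.sum(np.exp(1j * (path + phases))))

for label, phases in [("ideal", ideal), ("1-bit", one_bit)]:
    print(f"{label}: field toward target = {field_toward(phases, theta_out):.1f} of 64.0")
```

The quantized surface still forms the beam, at a loss of a few decibels relative to ideal phase control, a small-scale example of the hardware trade-offs discussed below.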

An RIS has other useful features. Even without an amplifier, an RIS manages to provide substantial gain—about 30 to 40 decibels relative to isotropic (dBi)—depending on the size of the surface and the frequency. That’s because the gain of an antenna is proportional to the antenna’s aperture area. An RIS has the equivalent of many antenna elements covering a large aperture area, so it has higher gain than a conventional antenna does.
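
The underlying relation is the standard aperture formula, G = 4πA/λ². As a back-of-the-envelope sketch (the surface sizes are illustrative, and a real RIS gives up a few decibels to losses and imperfect illumination):

```python
import math

def aperture_gain_dbi(area_m2, freq_hz):
    """Upper-bound gain of an aperture, G = 4*pi*A / lambda^2, in dBi."""
    lam = 3e8 / freq_hz
    return 10 * math.log10(4 * math.pi * area_m2 / lam**2)

print(f"1 m^2 surface at 3.5 GHz:       {aperture_gain_dbi(1.0, 3.5e9):.1f} dBi")
print(f"20 cm square surface at 28 GHz: {aperture_gain_dbi(0.04, 28e9):.1f} dBi")
```

Both examples land in the 30-to-40-dBi range quoted above, and the λ² term shows why a millimeter-wave surface can be physically much smaller than a C-band one for the same gain.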

All the many unit cells in an RIS are controlled by a logic chip, such as a field-programmable gate array with a microcontroller, which also stores the many coding sequences needed to dynamically tune the RIS. The controller gives the appropriate instructions to the individual unit cells, setting their state. The most common coding scheme is simple binary coding, in which the controller toggles the switches of each unit cell on and off. The unit-cell switches are usually semiconductor devices, such as PIN diodes or field-effect transistors.

The important factors here are power consumption, speed, and flexibility, with the control circuit usually being one of the most power-hungry parts of an RIS. Reasonably efficient RIS implementations today consume a few watts to a dozen watts in total while switching to a new configuration, and much less when idle.

To deploy RIS nodes in a real-world network, researchers must first answer three questions: How many RIS nodes are needed? Where should they be placed? And how big should the surfaces be? As you might expect, there are complicated calculations and trade-offs.

Engineers can identify the best RIS positions by planning for them when the base station is designed. Or it can be done afterward by identifying, in the coverage map, the areas of poor signal strength. As for the size of the surfaces, that will depend on the frequencies (lower frequencies require larger surfaces) as well as the number of surfaces being deployed.

To optimize the network’s performance, researchers rely on simulations and measurements. At Huawei Sweden, where I work, we’ve had a lot of discussions about the best placement of RIS units in urban environments. We’re using a proprietary platform, called the Coffee Grinder Simulator, to simulate an RIS installation prior to its construction and deployment. We’re partnering with CNRS Research and CentraleSupélec, both in France, among others.

In a recent project, we used simulations to quantify the performance improvement gained when multiple RIS were deployed in a typical urban 5G network. As far as we know, this was the first large-scale, system-level attempt to gauge RIS performance in that setting. We optimized the RIS-augmented wireless coverage through the use of efficient deployment algorithms that we developed. Given the locations of the base stations and the users, the algorithms were designed to help us select the optimal three-dimensional locations and sizes of the RIS nodes from among thousands of possible positions on walls, roofs, corners, and so on. The output of the software is an RIS deployment map that maximizes the number of users able to receive a target signal.
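
The deployment algorithms themselves are proprietary, but the flavor of the problem can be conveyed by a deliberately simplified stand-in: a greedy search that repeatedly adds whichever candidate site newly covers the most poorly served users. (This sketch is illustrative only; it is not the production method.)

```python
def greedy_ris_placement(candidates, budget):
    """Pick up to `budget` sites; `candidates` maps site name -> users covered."""
    covered, chosen = set(), []
    for _ in range(budget):
        site = max(candidates, key=lambda s: len(candidates[s] - covered))
        if not candidates[site] - covered:
            break  # no remaining site adds coverage, so stop early
        chosen.append(site)
        covered |= candidates[site]
    return chosen

# Toy input: per-site coverage sets, as a ray-tracing coverage map might yield
sites = {"wall_A": {"u1", "u2", "u3"}, "roof_B": {"u3", "u4"}, "corner_C": {"u4"}}
print(greedy_ris_placement(sites, budget=2))  # ['wall_A', 'roof_B']
```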

An experimental reconfigurable intelligent surface with 2,304 unit cells was tested at Tsinghua University, in Beijing, last year.

Of course, the users of special interest are those at the edges of the cell-coverage area, who have the worst signal reception. Our results showed big improvements in coverage and data rates at the cell edges—and also for users with decent signal reception, especially in the millimeter band.

We also investigated how potential RIS hardware trade-offs affect performance. Simply put, every RIS design requires compromises—such as digitizing the responses of each unit cell into binary phases and amplitudes—in order to construct a less complex and cheaper RIS. But it’s important to know whether a design compromise will create additional beams to undesired directions or cause interference to other users. That’s why we studied the impact of network interference due to multiple base stations, reradiated waves by the RIS, and other factors.

Not surprisingly, our simulations confirmed that both larger RIS surfaces and larger numbers of them improved overall performance. But which is preferable? When we factored in the costs of the RIS nodes and the base stations, we found that in general a smaller number of larger RIS nodes, deployed further from a base station and its users to provide coverage to a larger area, was a particularly cost-effective solution.

The size and dimensions of the RIS depend on the operating frequency [see illustration below]. We found that a small number of rectangular RIS nodes, each around 4 meters wide for C-band frequencies (3.5 GHz) and around half a meter wide for the millimeter-wave band (28 GHz), was a good compromise and could boost performance significantly in both bands. This was a pleasant surprise: RIS improved signals not only in the millimeter-wave (5G high) band, where coverage problems can be especially acute, but also in the C band (5G mid).

To extend wireless coverage indoors, researchers in Asia are investigating a really intriguing possibility: covering room windows with transparent RIS nodes. Experiments at NTT Docomo and at Southeast and Nanjing universities, both in China, used smart films or smart glass. The films are fabricated from transparent conductive oxides (such as indium tin oxide), graphene, or silver nanowires and do not noticeably reduce light transmission. When the films are placed on windows, signals coming from outside can be refracted and boosted as they pass into a building, enhancing the coverage inside.

Planning and installing the RIS nodes is only part of the challenge. For an RIS node to work optimally, it needs to have a configuration, moment by moment, that is appropriate for the state of the communication channel in the instant the node is being used. The best configuration requires an accurate and instantaneous estimate of the channel. Technicians can come up with such an estimate by measuring the “channel impulse response” between the base station, the RIS, and the users. This response is measured using pilots, which are reference signals known beforehand by both the transmitter and the receiver. It’s a standard technique in wireless communications. Based on this estimation of the channel, it’s possible to calculate the phase shifts for each unit cell in the RIS.
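
For a single user, the resulting rule is the textbook one (a general result, not any vendor’s specific implementation): each unit cell should cancel the phase of its own cascaded path, θ_n = -arg(h_n · g_n), so that every path adds coherently at the receiver.

```python
import numpy as np

rng = np.random.default_rng(7)
n_cells = 256
# Estimated channels: base station -> each cell (h), each cell -> user (g)
h = (rng.normal(size=n_cells) + 1j * rng.normal(size=n_cells)) / np.sqrt(2)
g = (rng.normal(size=n_cells) + 1j * rng.normal(size=n_cells)) / np.sqrt(2)

theta = -np.angle(h * g)             # per-cell phase from the channel estimate
aligned = abs(np.sum(h * g * np.exp(1j * theta)))   # coherent combining
unconfigured = abs(np.sum(h * g))                   # random, uncontrolled phases

print(f"received amplitude, configured vs not: {aligned:.1f} vs {unconfigured:.1f}")
# The aligned sum grows linearly with the number of cells; the
# unconfigured sum grows only with its square root.
```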

The current approaches perform these calculations at the base station. However, that requires a huge number of pilots, because every unit cell needs its own phase configuration. There are various ideas for reducing this overhead, but so far none of them are really promising.

The total calculated configuration for all of the unit cells is fed to each RIS node through a wireless control link. So each RIS node needs a wireless receiver to periodically collect the instructions. This of course consumes power, and it also means that the RIS nodes are fully dependent on the base station, with unavoidable—and unaffordable—overhead and the need for continuous control. As a result, the whole system requires a flawless and complex orchestration of base stations and multiple RIS nodes via the wireless-control channels.

We need a better way. Recall that the “I” in RIS stands for intelligent. The word suggests real-time, dynamic control of the surface from within the node itself—the ability to learn, understand, and react to changes. We don’t have that now. Today’s RIS nodes cannot perceive, reason, or respond; they only execute remote orders from the base station. That’s why my colleagues and I at Huawei have started working on a project we call Autonomous RIS (AutoRIS). The goal is to enable the RIS nodes to autonomously control and configure the phase shifts of their unit cells. That will largely eliminate the base-station-based control and the massive signaling that either limit the data-rate gains from using RIS, or require synchronization and additional power consumption at the nodes. The success of AutoRIS might very well help determine whether RIS will ever be deployed commercially on a large scale.

Of course, it’s a rather daunting challenge to integrate into an RIS node the necessary receiving and processing capabilities while keeping the node lightweight and low power. In fact, it will require a huge research effort. For RIS to be commercially competitive, it will have to preserve its low-power nature.

With that in mind, we are now exploring the integration of an ultralow-power AI chip in an RIS, as well as the use of extremely efficient machine-learning models to provide the intelligence. These smart models will be able to produce the output RIS configuration based on the received data about the channel, while at the same time classifying users according to their contracted services and their network operator. Integrating AI into the RIS will also enable other functions, such as dynamically predicting upcoming RIS configurations and grouping users by location or other behavioral characteristics that affect the RIS operation.

Intelligent, autonomous RIS won’t be necessary for all situations. For some areas, a static RIS, with occasional reconfiguration—perhaps a couple of times per day or less—will be entirely adequate. In fact, there will undoubtedly be a range of deployments from static to fully intelligent and autonomous. Success will depend on not just efficiency and high performance but also ease of integration into an existing network.


The real test case for RIS will be 6G. The coming generation of wireless is expected to embrace autonomous networks and smart environments with real-time, flexible, software-defined, and adaptive control. Compared with 5G, 6G is expected to provide much higher data rates, greater coverage, lower latency, more intelligence, and sensing services of much higher accuracy. At the same time, a key driver for 6G is sustainability—we’ll need more energy-efficient solutions to achieve the “net zero” emission targets that many network operators are striving for. RIS fits all of those imperatives.

Start with massive MIMO, which stands for multiple-input multiple-output. This foundational 5G technique uses multiple antennas packed into an array at both the transmitting and receiving ends of wireless channels, to send and receive many signals at once and thus dramatically boost network capacity. However, the desire for higher data rates in 6G will demand even more massive MIMO, which will require many more radio-frequency chains to work and will be power-hungry and costly to operate. An energy-efficient and less costly alternative will be to place multiple low-power RIS nodes between massive MIMO base stations and users as we have described in this article.

The millimeter-wave and subterahertz 6G bands promise to unleash staggering amounts of bandwidth, but only if we can surmount a potentially ruinous range problem without resorting to costly solutions, such as ultradense deployments of base stations or active repeaters. My opinion is that only RIS will be able to make these frequency bands commercially viable at a reasonable cost.

The communications industry is already touting sensing—high-accuracy localization services as well as object detection and posture recognition—as an important possible feature for 6G. Sensing would also enhance performance. For example, highly accurate localization of users will help steer wireless beams efficiently. Sensing could also be offered as a new network service to vertical industries such as smart factories and autonomous driving, where detection of people or cars could be used for mapping an environment; the same capability could be used for surveillance in a home-security system. The large aperture of RIS nodes and their resulting high resolution mean that such applications will be not only possible but probably even cost effective.

And the sky is not the limit. RIS could enable the integration of satellites into 6G networks. Typically, a satellite uses a lot of power and has large antennas to compensate for the long-distance propagation losses and for the modest capabilities of mobile devices on Earth. RIS could play a big role in minimizing those limitations and perhaps even allowing direct communication from satellite to 6G users. Such a scheme could lead to more efficient satellite-integrated 6G networks.

As it transitions into new services and vast new frequency regimes, wireless communications will soon enter a period of great promise and sobering challenges. Many technologies will be needed to usher in this next exciting phase. None will be more essential than reconfigurable intelligent surfaces.

The author wishes to acknowledge the help of Ulrik Imberg in the writing of this article.