
Technological context

The two World Wars, especially WWII, spurred rapid technological progress in several domains. In this section, we present an overview of the main technologies of the WWII era. It covers the need for long distance communications, critical in times of war, shows one of the first important applications of cryptography, and describes early computing machines.

1. Long distance communications

Radio and television

The first example of wireless telegraphy, now called radio, made its appearance in 1832 at the scale of a room. The first long distance radio test was conducted in 1901 by the Italian inventor Guglielmo Marconi between Canada and England, and earned him the Nobel Prize in Physics in 1909.

        At the beginning, the radio was less reliable than the telegraph or the wired telephone, but it was considerably improved during WWII, driven by the need for communications on the battlefield and at sea. By the start of WWII, the radio had become a common tool (see Figure 3), both for the army and the public. Of course, in times of war, the need for encryption was paramount [69]. It motivated the science of cryptography (see below).

        The radio also opened the possibility of transmitting images. This was first achieved by the "mechanical" television: a scanning disc captures the image, a microphone the sound, and both signals are transmitted through radio signals, as shown in Figure 4.

Fig. 3 Radio operator in a British Lancaster bomber [82]

        The mechanical television was the first means used by the famous BBC, as an experiment in 1929. It was quickly supplanted by the cathode ray tube television. Television can be seen as a complement to the radio, because it conveys images as well as sound.

Fig. 4  Drawing of the general working of the mechanical television: the scanning disc captures the image that is encoded and transmitted via radio. The image is reconstructed on the receiving disc [21]

Telegraph and telephone


The telegraph was created in 1837. Morse code, either American (invented by Samuel Morse) or International (invented by the German Friedrich Clemens Gerke in 1848), was used to transmit information through the telegraph. The first transatlantic cable was completed in 1866. The telephone was invented ten years later by Alexander Graham Bell and Elisha Gray. In its early years, the telephone worked only over wires. The first commercial mobile phone call took place in 1946, in St Louis, Missouri, thanks to Bell Labs, after WWII delayed its development.

        The Swedish-born American Harry Nyquist published a famous paper about telegraphy [83], which later inspired Claude Shannon's celebrated mathematical theory [106]. Nyquist's work can be summarised in two parts: signal shaping (see Figure 5) and signal coding.

        Systems like the telegraph need to deal with power efficiency and interference.

Fig. 5 Various signal shapes for the telegraph [82]

        At the time of Nyquist's paper, half sine waves (signal B, as seen in Figure 5) were used in the telegraph. Nyquist proved that a rectangular signal (A), passed through proper circuits (D and E) to become signal C, is far more efficient in terms of power use and resistance to interference, because its spectrum is simpler.

        Interestingly, the rectangular signal can easily be seen as a binary input (signal A being 1, no signal being 0). Having shown how to transmit, Nyquist turned to what to transmit: the code. Morse code had been designed as a compromise between speed of transmission and, more importantly, being easily deciphered by ear. Thus, the code is not optimal. There is some logic in the fact that the most common letter of the alphabet (E) corresponds to the simplest symbol (a dot), but this pattern does not hold for the other letters.

        Nyquist's idea is that one should enumerate every transmittable symbol and rank them from the shortest to the longest to transmit. Then, assign the shortest to the most common letter of the language, and so on until the longest corresponds to the least common letter, relying on the work of Parker Hitt [50], who studied the frequency of letters in the English language. Nyquist's reasoning includes neither punctuation nor spaces, which simplifies it but of course makes it unusable in practice. Moreover, Shannon would later show that the single-letter frequencies proposed by Nyquist are far too simple, and that information should be processed as digrams, trigrams or even complete words. The conclusion is that Nyquist pioneered the need for a theory of coding, but did not provide one.
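Nyquist's ranking idea can be sketched in a few lines of Python. The letter frequencies and the pool of codewords below are illustrative placeholders, not Hitt's actual tables:

```python
# Assign the shortest codewords to the most frequent letters,
# in the spirit of Nyquist's proposal (illustrative frequencies).
frequencies = {"E": 12.7, "T": 9.1, "A": 8.2, "O": 7.5, "I": 7.0}  # percent
# Candidate codewords, ranked from shortest to longest to transmit.
codewords = [".", "-", "..", ".-", "-."]

# Rank letters by decreasing frequency, then pair them with codewords.
ranked = sorted(frequencies, key=frequencies.get, reverse=True)
code = dict(zip(ranked, codewords))

print(code)  # {'E': '.', 'T': '-', 'A': '..', 'O': '.-', 'I': '-.'}
```

The pairing step is the whole of Nyquist's proposal: the cost of a letter's codeword decreases as the letter's frequency increases, but nothing guarantees global optimality, which is exactly the gap Shannon later filled.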


Ralph Hartley provided a more theoretical approach to long distance communications in 1928 [48]. A key step in his theory was to decouple the meaning of a message from its code: a transmission system is only as good as its capacity to distinguish different (pre-agreed) symbols at its receiving end.

        Hartley provides a capacity formula using a logarithm (as did Nyquist), a formula on which Shannon would later base his theory:

H = n log S

with S being the number of distinct symbols available and n the number of symbols transmitted, so that S^n is the number of possible messages.
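As a quick numerical illustration of Hartley's measure (using a base-2 logarithm so that the result comes out in bits; the function name is ours):

```python
import math

def hartley_information(n_symbols: int, alphabet_size: int) -> float:
    """Hartley's measure H = n * log2(S) for a message of n symbols
    drawn from an alphabet of S distinct symbols (result in bits)."""
    return n_symbols * math.log2(alphabet_size)

# A 10-character message over a 26-letter alphabet carries
# about 47 bits under Hartley's measure.
print(round(hartley_information(10, 26), 1))  # 47.0
```

The logarithm makes the measure additive: doubling the message length doubles the information, which is the property Shannon kept in his later theory.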

2. World War II

Weapons


Weapons, and in particular missiles, were among the first “intelligent” devices, combining control, feedback and signal transmission.

WWII saw the first combat use of missiles, which are rockets with a guidance system. The most well-known examples are the V1 flying bomb and the V2 rocket, used by Germany (the only belligerent to have developed missiles) to target mostly the UK (see Figure 6), especially London, and the Belgian port of Antwerp.

      The autopilot was developed by the German company Askania Werke. Its main goal was simply to maintain the altitude and speed set by the artillerymen before launch [127].

        The physical impact of those bombs on the British war effort was less important than the psychological impact (the explosive charge was small). V1s were very imprecise due to the simplicity of their autopilot, but the German army knew that since London was so large, a missile was very likely to hit something even without precise targeting. After WWII, missiles became a key component in the arsenal of every army, and the German scientists who had worked on the V1 and V2 were recruited by the USA and the USSR for their respective space programs.

        Current missiles can cost up to several million dollars, make decisions by themselves (such as self-destructing if the target is unclear or not found) and be precise to the metre [122].

Fig. 6 A V1 in the British sky [52]


Cryptography


Intelligence and cryptography have always been a focus in conflicts throughout history. However, due to the advanced state of long distance communications (radio, for instance) and the use of cipher machines such as the German Enigma (see Figure 7), WWII saw an impressive development of those technologies.

        Decrypting enemy messages led to decisive victories for the Allies. The USA was able to break the Japanese Navy code JN25 early in the conflict, leading most notably to the victory at the battle of Midway (June 1942) and the shooting down of top Japanese admiral Isoroku Yamamoto's plane on April 18, 1943. Interestingly, its replacement JN40 was broken within a couple of months thanks to a major Japanese mistake, which was never realised, allowing the Allies to follow Japanese Navy movements until the end of the war.

        Even more notably, the deciphering of the Enigma machine by Alan Turing at Bletchley Park [49] contributed to the Soviet victory at Kursk, in July 1943, the largest armoured battle in history: a Soviet spy at Bletchley Park transmitted the locations of German attacks before they happened, allowing the USSR to mount a perfect defence and later to counterattack.


3. Computing machines


Alan Turing was not the first to think about machines that learn. One year before him, Edmund C. Berkeley published a book about the topic [13].

      In his book, Berkeley designed a "mechanical brain" (see Figure 8), using imagery of his era: barns representing the various parts of the computer, and telegraph cables (with a control tower) conveying information between the barns.

      The breakdown of this machine is as follows:

  • The input: where all input information, numbers and other data are stored.

  • The storage: where any piece of information that needs to be saved during the computation is stored.

  • The computer: which takes two numbers as input and outputs the result of an operation between them.

  • The control: which operates the different "switches" of the machine, deciding which part should be ON or OFF and in which order.

  • The output: where all output information, numbers or other data are stored.

​

This design has been praised as visionary: current computers still have the same basic architecture [119].
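Berkeley's five-part breakdown can be mimicked with a toy Python sketch; the class and method names below are our own, purely illustrative:

```python
class MechanicalBrain:
    """Toy model of Berkeley's five-part machine: input, storage,
    computer (arithmetic), control, and output."""

    def __init__(self):
        self.input = []    # where input data arrives
        self.storage = {}  # intermediate values saved during computation
        self.output = []   # where results are stored

    def compute(self, a, b, op):
        # The "computer": takes two numbers, returns one result.
        return {"add": a + b, "mul": a * b}[op]

    def control(self, program):
        # The "control": activates the parts in a prescribed order,
        # storing each result before moving it to the output.
        for slot, a, b, op in program:
            self.storage[slot] = self.compute(a, b, op)
        self.output = list(self.storage.values())
        return self.output

brain = MechanicalBrain()
print(brain.control([("s1", 2, 3, "add"), ("s2", 4, 5, "mul")]))  # [5, 20]
```

The separation of concerns is the point: the arithmetic unit knows nothing about sequencing, and the control knows nothing about arithmetic, exactly as in Berkeley's barns.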

In his book, Berkeley provided a few examples of what such a computing machine could do:

  • Automatic address book 

In 1949, the only way to send mail to more than one person was to copy each recipient's address onto a separate envelope and send them. Berkeley imagined that in the future all these addresses would be stored in a mechanical brain, and the machine itself would copy the desired addresses onto the envelopes.

  • Automatic library

      When looking for any piece of information in 1949, the only solution was to go to the library and look for a relevant book. Berkeley imagined that in the future, one could go to the library and dial the desired information into a mechanical brain. The machine would then indicate which book the person should look into. This description is perhaps the first historical example of a search engine.

  • Automatic translator

      At the time, when one wanted to learn a language or just look up a particular word, one would turn to a dictionary translating from one language to the other. Berkeley imagined that in the future, one might dial into a mechanical brain a word or a sentence in one language and the machine would output this word or sentence in another. This is the very purpose of today's services like Google Translate.


Some of those applications had already been proposed in Vannevar Bush's 1945 paper [20]. Bush focused on the problem of storage and data association, leading to his proposal for a new machine, the "memex" (see Figures 9 and 10).

        Bush sought the most efficient way to retrieve data. The model of "asking for data, receiving that data" is too simple and inefficient. Bush thought data should be retrieved as in the human brain: by "association".

        Association means that when a user asks for a particular piece of data, he also receives suggestions of additional relevant data. This is precisely what the memex was designed to do, as depicted in Figures 9 and 10:


Fig. 9 Retrieving a document from the memex – the memex suggests additional documents that are linked to the requested one.

Fig. 10 Adding a link between documents – the memex allows its user to create links between several documents.

In the memex, all documents contain links to other documents. Those logical links can be created manually by the user or automatically by the memex itself. This idea of association laid the foundation for what is currently known as "hyperlinks".
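The memex's associative retrieval can be sketched as a small link graph in Python; the document names and the `links` structure are of course our own illustration:

```python
# A tiny "memex": documents plus user-created associative links.
links = {
    "mechanics": {"calculus", "astronomy"},
    "calculus": {"mechanics"},
    "astronomy": {"mechanics", "optics"},
    "optics": set(),
}

def retrieve(doc: str) -> tuple[str, list[str]]:
    """Return the requested document together with suggested
    documents linked to it, as the memex would."""
    return doc, sorted(links.get(doc, set()))

def add_link(a: str, b: str) -> None:
    """Let the user create a two-way link between documents."""
    links.setdefault(a, set()).add(b)
    links.setdefault(b, set()).add(a)

print(retrieve("astronomy"))   # ('astronomy', ['mechanics', 'optics'])
add_link("optics", "calculus")
print(retrieve("optics"))      # ('optics', ['calculus'])
```

Replace document names with URLs and `add_link` with an anchor tag, and this little structure becomes the hyperlink graph of the modern web.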
