The Economist

Insight and opinion on international news, politics, business, finance, science, technology, books and arts


A toolkit for predicting the future

Published in The Economist · May 31, 2017
Photo: Flickr

Predicting exactly how the future will play out is impossible. But, if you look in the right places, it is possible to make some educated guesses along the way. Megatech: Technology in 2050, a new book compiled by The Economist’s own Daniel Franklin, aims to look in those places, and make those guesses. Here’s an extract from the introduction by Tom Standage, our deputy editor.

Want to have your say on the future of technology for the chance to win a copy of Megatech? Enter our competition, exclusively on Medium

A new networking technology revolutionises long-distance communication, making it cheaper and more convenient than ever before. It is enthusiastically embraced by businesses, causing a speculative boom. The new technology is relentlessly hyped by its advocates and mocked by its detractors. It makes possible new business models and new forms of crime. Governments struggle to prevent the use of cryptography, demanding access to all messages. People make friends and fall in love online. Some say the new technology will lead to world peace, as communication erases borders and unites humanity. It sounds like the story of the internet in the 1990s. But this is in fact the story of the electric telegraph in the mid-19th century, which was known as the “great highway of thought”.

The striking parallels between these two technologies, one modern, one 150 years old, are entertaining — but they can also be useful. The study of history is one of three tools that can be used to predict the future of technology, or at least make slightly more educated guesses about it.

History lessons

Historical analogies of this kind, across years, decades or even centuries, make it possible to foresee the social and cultural impact of new inventions, put hype and scepticism into perspective, provide clues about how a technology might evolve in future, and provide a reminder that problems blamed on new technologies are often the result of human nature. There were, for example, instances of what we would now call “cybercrime” on the mechanical telegraph networks built in the age of Napoleon. “It is a well-known fact that no other section of the population avail themselves more readily and speedily of the latest triumphs of science than the criminal class,” in the words of one law-enforcement official. Those words could have been spoken today, but were in fact spoken by a Chicago policeman in 1888.

Such analogies are never perfect, of course, and history never repeats itself exactly. But analogies do not have to be perfect to be informative. Look closely, and there are many repeating patterns in the history of technology, on both short and long timescales.

New inventions often provoke concerns that they will destroy privacy; the first Kodak camera caused a panic over surreptitious public photography in the 1880s, much as Google Glass did in 2013. They are accused of corrupting the morals of the young, a charge levelled at novels in the 1790s, motion pictures in 1910, comic books in the 1950s and video games in the 1990s. From 19th-century Luddites to modern prophets of robot-induced mass unemployment, the fear that new machines will deprive people of their jobs is centuries old. So too are concerns over new technologies that allow man to play god, from nuclear weapons to genetic modification to artificial intelligence; these are all modern-day versions of the myth of Prometheus, and the question of whether mankind could be trusted with the gift of fire. Whether such concerns are merited or not, an understanding of reactions to past technologies can give futurologists, entrepreneurs and inventors valuable clues about how new products might be received.

Photo: Flickr

Tomorrow is another today

So much for history. The second place to look for glimpses of the future is the present. As William Gibson, a science-fiction writer, once memorably put it, “the future is already here — it’s just not very evenly distributed”. Technologies have surprisingly long gestation periods; they may seem to appear overnight, but they don’t. As a result, if you look in the right places, you can see tomorrow’s technologies today. This approach is taken by journalists and corporate anthropologists who want to understand new trends. It involves seeking out “edge cases”: examples of technologies and behaviours that are adopted by particular groups, or in particular countries, before going on to become widespread. A classic example of an edge case is that of Japan and smartphones at the turn of the century.

In 2001, mobile handsets with cameras and colour screens were commonplace in Japan. They could display maps with walking directions and allowed users to download e-books, games and other apps. Journalists and analysts flocked to Japan to see these phones in action. And whenever Japanese visitors to European and US technology conferences passed around their handsets, they were treated as though they were artefacts from the future that had fallen through a rift in the space-time continuum. Japan arrived in the future early because of the isolated, proprietary nature of its telecoms industry; its domestic market was large enough to allow its technology companies to experiment with new ideas without worrying about compatibility with other countries’ systems. It was several years before consumers in Europe and the US could buy handsets with comparable features. For a while Wired magazine had a column called “Japanese Schoolgirl Watch”, predicated on the idea that what Japanese schoolgirls (the most ardent users of early smartphones) do today, the rest of us might be doing tomorrow.

Just as historical analogies are not perfect, looking at edge cases can also be risky. Some technologies never take off — or, when they do, they take off in an unexpected or different way. In the West, for example, smartphones initially followed the Japanese trajectory, but then took a completely different turn with the advent of the iPhone and other touchscreen devices. But what is undeniable is that all technologies that do eventually catch on first go through an underground period where their use is restricted to a subpopulation; they don’t appear from nowhere. Finding these edge cases and identifying emerging technologies and behaviours is more art than science; trendspotting is hard. But it is the stock-in-trade of countless consultants and futurologists, not to mention technology journalists, who are always looking for new ideas and trends to write about.

The vision thing

The third place to catch glimpses of what is coming next is in the imagined futures of science fiction, whether in the form of books, television shows or films. Sci-fi stories take interesting ideas and carry them to their logical conclusions. What if we could build general-purpose robots, or a space elevator? What would happen if nanotechnology or biotechnology got out of control, or genetic self-modification became as commonplace as tattoos? Such futuristic tales provide visions of how the world might look with ubiquitous artificial intelligence, anti-ageing treatments that extend human lifespans, colonies on Mars and elsewhere in the solar system, or a fragmenting of humanity into post-human tribes. It can be a handy way to map out the space of potential long-term outcomes: what Elon Musk, a leading technology entrepreneur, calls the “branching probability streams” of the future.

Science fiction is not merely predictive, however. It also inspires technologists to invent things. Scratch a technologist and you’ll find a sci-fi fan. The flip-open mobile phone of the 1990s, for example, seems to have been directly inspired by the portable communicators seen in Star Trek in the 1960s. More recently, the idea of being able to talk to computers, another idea from Star Trek, has inspired a new wave of computing devices, starting with the Amazon Echo, that use speech as their main interface, allowing always-on, hands-free use. Generations of computer scientists have grown up on Isaac Asimov’s robot stories; today many entrepreneurs, including Musk, cite the Culture novels of Iain M. Banks as an inspiration. Like Star Trek, they depict a post-scarcity civilisation in which humans and artificial intelligences live and work together.

Yet although sci-fi is outwardly about the future in most cases, it is really about the present, and responds to contemporary ideas and concerns, such as an overdependence on machines or worries about environmental destruction. Reading a diverse selection of sci-fi can give you greater mental flexibility to envisage future scenarios, both technological and societal. But it can also unwittingly constrain, by shaping the way technological developments are perceived and discussed: robots, for example, look very different in the real world than they do in science fiction, and trying to imitate the fictional variety may steer roboticists in the wrong direction. So it is also worth reading classic sci-fi from the mid-20th century, to see what it gets wrong and why, and then ask yourself what mistaken assumptions are being made by today’s tales.

