The Technologies

Do you remember what life before your first smartphone was like?

Technological development in the past few hundred years has been incredibly swift. Zooming in on the last 30 years or so, it has been swifter still. And looking at just the past few years (or even months), changes are happening so fast that new technology barely has time to be implemented before it is obsolete.

But what is all the fuss about here? What is new about this technological development? We will get more fancy machines and gadgets, sure. But we will nonetheless go on operating them as we have operated our cars, phones and computers in the past, right?

There is a major difference between the technologies emerging today and the technologies of the past. Whereas before, pretty much only human physical labor was being replaced, today, human mental labor is increasingly being outmatched by artificial intelligence (AI).

And it is not just that this AI takes over human tasks and carries them out as before. Rather, AI and the host of technologies associated with it have capabilities which will vastly transform our societies. Much of the new technology is bringing with it changes that profoundly affect our economies and politics, our desires and behaviors, our sense of time and space, and our identities.


What will life be like when many humans are no longer needed for work? How will an economy largely run by AI develop? What political developments will result from AI use? What will merging our brains with AI be like?

“The Course 1”, artwork by Simon Stålenhag (2015).

Questions like these may be skipping too far ahead. Let us instead start by getting acquainted with the current situation. What follows is a brief overview of the state of present-day technology, as it informs the investigation into social implications conducted elsewhere on the Radius platform.

To gain a useful overview of today’s technological developments, we might divide them into roughly the following four categories:

  • Information and communications technologies

  • Cognitive technologies

  • Biotechnologies

  • Nanotechnologies

In the following, we will explore what these different fields encompass, what implications they might have, and how they relate to one another. Simplifying somewhat, we might first say this:

(1) Information and communications technologies underlie all the other developments. This is because increasing capacity to gather and digitize information (i.e., turn information into digits which may be handled by computers), and to communicate the resulting data effectively, enables technological innovation as such to work at a faster and faster pace. Once you have gathered some information and possess a means to communicate it with others, gathering and communicating more information in this way becomes easier.

(2) Whenever a cognitive task involved in technological innovation has been sufficiently grasped and may be represented mathematically (in the form of an algorithm), the task may be automated using cognitive technologies, i.e., products stemming from the use of AI. The process of automating tasks may in turn enable innovation which would have been impossible without use of automation.

(3) Limitations to the scope of innovation present themselves in the form of the laws of physics and all things that follow necessarily from those laws. (A lot of things do.) Two major examples are the limitations of biology – the lifespan, fertility, proneness to disease, genetic variation and so on of humans and other living things – and the limited properties of physical materials – fragility, scarcity, weight and so on. Concerning biological limitations, biotechnologies are emerging to modify these and to open up new avenues of possibility for humankind. Concerning the limitations of the laws of physics and material properties, nanotechnologies are emerging to try to alter the physical world on the nanoscale.

What is “the nanoscale”? That is easy – one nanometer is to a meter what the diameter of one marble is to the diameter of planet Earth.

Let us dig into it.


Information and Communications Technologies (ICT)


Developments in information and communications technology (ICT) underlie the exponential development in all other technological fields. First, we might note that they follow a long tradition:

The written word (invented c. 3200 BC), the printed word (1454), the telegraph (1830s), the telephone (1876), the camera (1888), radio (1895), radar (1904) and television (1927) – together with developments in transportation – all did their part in improving information storage and communication.


That said, these precursors pale in comparison with the capacity and implications of what is going on today. Information (i) being stored and communicated using computers (ii), most often via the internet (iii) – this is today’s ICT in a nutshell.

(i) Numbers may be used to express almost anything, such as a picture of the world or a sort of translation of ordinary language. Another way of putting this is that digits may contain information about the world. Since numbers can be operated on according to logical rules, anything that computes numbers (i.e., a computer) can arrive at the logical implications of the information in question. If the information is well-ordered and relevant to the computer’s aims, this is a big deal.
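To make this concrete, here is a minimal sketch (in Python, using nothing but the standard library) of how a piece of text becomes numbers, and how those numbers become the 1s and 0s a computer handles:

```python
# A minimal sketch of digitization: turning text into numbers, and
# numbers into the binary digits ("bits") a computer actually handles.

message = "Hello"

# Each character is assigned a number (here via the Unicode standard).
numbers = [ord(ch) for ch in message]
print(numbers)            # [72, 101, 108, 108, 111]

# Each number can in turn be written in binary: strings of 1s and 0s.
bits = [format(n, "08b") for n in numbers]
print(bits)               # ['01001000', '01100101', '01101100', ...]

# Reversing the process recovers the original information intact,
# which is what makes digital storage and communication so reliable.
decoded = "".join(chr(int(b, 2)) for b in bits)
print(decoded)            # Hello
```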

At first the only computer of interest for us humans was – well, us. But since the invention of the synthetic computer a revolution in ICT has taken place. Of course, we can as humans still process and communicate information in all the ways that are not a matter of digital computations, much like we did before. But with the advent of synthetic computers the scope and character of these activities have been drastically altered.

(ii) The principle governing the modern computer was first proposed by Alan Turing in a 1936 paper. Turing’s idea was simple: to build a machine capable of computing anything computable, by executing instructions stored on a tape. About a decade later, machines like this were starting to be built. And over the course of the next few decades, they were set to improve – as proven, probably, by the device on which you are reading this. At bottom, all computers have two main parts: processor parts, which perform the calculations (their speed measured in “bits” of information per unit of time, a bit being one binary variable – 1 or 0 – represented by a switch turning on or off), and memory parts, which store information for future calculation (the amount of stored information being measured in “bytes”, usually 8 bits lumped together). To these parts are attached an input function and an output function. It may seem baffling that advanced calculations can be performed using only 1s, 0s and physical parts in this way, but thanks to the advances in mathematics and logic that took place in the late 1800s and early 1900s, this is really the case.
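To make Turing’s idea concrete, here is a toy version of such a machine in Python – an illustration of the principle only, not of any historical device. The “tape” is a table of cells, the “instructions” say what to write, where to move and which state to enter next, and this particular instruction table adds 1 to a binary number:

```python
# A minimal Turing machine sketch: a tape, a head, a state, and a
# table of instructions. This toy machine adds 1 to a binary number.

# (state, symbol) -> (write, move, next_state); "R"/"L" move the head.
rules = {
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry = 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),    # 0 + carry = 1, done
    ("carry", " "): ("1", "L", "halt"),    # ran off the left edge: new digit
}

def run(tape_str):
    tape = dict(enumerate(tape_str))       # position -> symbol
    head = len(tape_str) - 1               # start at the rightmost digit
    state = "carry"
    while state != "halt":
        symbol = tape.get(head, " ")       # blank cells read as " "
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    # Read the tape back in order, skipping blank cells.
    return "".join(tape[i] for i in sorted(tape)).strip()

print(run("1011"))   # 1100  (eleven + 1 = twelve, in binary)
```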

The actual workings of a computer in the physical world may, in highly simplified terms, be explained as follows: (1) electricity from a power source is supplied to a physical system capable of representing mathematical calculations, which then (2) runs its calculations based on its physical and (hopefully) logical constitution, together with input received and mediated by electrical impulses (or, in the case of fiber optics, light) running through its circuits and producing physical effects on its processor and memory parts, and finally (3) communicates the resulting calculations to a physical system which in turn produces an effect in the world (such as a line of text or an image on a lighted glass screen, or a robotic movement).

(iii) The immediate foundations of today’s ICT were laid down in the late 1960s. Interest in computers brought one key invention after another: microprocessors, microcomputers, operating systems, digital switches, satellites (from 1957) and optical fiber. This in turn provided incentive for even more information to be digitized. It would still be a few decades until computers and the internet diffused on a wide scale, but already in the 1970s and 80s adventurous companies were beginning to implement digital technologies. This process, which is still unfolding today, is known as the digital revolution. Its gradual but fairly swift progress is well demonstrated by the coming into existence of its second most central feature (after the computer): the internet.

The internet’s precursor, the ARPANET, was first deployed in 1969 by the US Department of Defense, designed in part to keep communications running in the event of attacks on infrastructure. Originally, it could only be accessed by a chosen few, mostly people in the Defense Department and at universities. Those adept enough to navigate an early computer were not many. Even if you could get access, there was not much content or interaction to be had. But gradually – as personal computers diffused, the bandwidth of telecommunications grew, and user-friendly software was developed – social demand for digital networking began to rise. The first World Wide Web server and browser were launched in 1990, perhaps marking the definitive beginning of a new era. Now computers process information not only on their own but also together, in a vast web of digitized information.

The internet today is thus a global network of interconnected computers, coming together to create a global multimedia library (“cyberspace”), navigated by use of web browser software. A tiny percentage of the information made available on the internet every day gains the attention of billions of people, and among the heaps of other information available, people have the means to dig up highly specific information about almost everything. When necessary, parts of the web may be walled off and reserved for a smaller audience. (Unless someone figures out a way to scale the wall, that is.) This virtual landscape is used by companies as a means to make profit, by governments as a means to exercise power, and by people as part of everyday life.

Important ICTs:


• Computers

• The internet

• Satellites

• Smartphones

• Television

• Radio

• Telephones

• Digital cameras

• Digital audio recording devices

• Quantum computers

It has been argued, most prominently by sociologist Manuel Castells, that these developments in ICT have played a major part in producing a new economic and social paradigm. Castells calls this the informationalist paradigm. His observation is that the accumulation of knowledge and ever higher levels of complexity in information processing – i.e., continuous technological development – have become goals in themselves, complementing the usual orientation towards economic growth.

This, in turn, has given rise to what Castells calls the network society and network economy. As information processing has become more advanced, the flexible, decentralized organizational form of the network is increasingly outcompeting organizations that operate along traditional, hierarchical lines. Simply put, the benefits to communication and information processing that traditional organizations provide are increasingly made redundant by ICT advances. This provides incentive both for increased networking and for continued technological development.

Furthermore, Castells argues that the increasing use of ICT in our everyday lives contributes to cultural changes. This happens as ICT changes our perception of time and space, and as we come in contact with a multitude of cultural expressions in the virtual world which often stand in opposition to our own sense of identity. More on this in other parts of the Radius platform.


As the companies, governments and peoples of the world come to use ICT more and more, massive amounts of data are being accumulated. Right at the start we mentioned artificial intelligence (AI) as the dividing line between the technologies of old and today’s situation, as regards social implications. AI runs on data. As we communicate, share, and store information in our day-to-day lives, AI watches and learns.

But how? And why? This brings us neatly to our next topic.


Cognitive Technologies


AI is the name for intelligence demonstrated by machines, in that they perceive their environment and take actions that maximize their chances of successfully achieving their goals. Cognitive technologies are the products of the field of AI.

“Machine” and “environment” need not be taken too literally, however: there are AI programs that navigate cyberspace through the use of software, rather than physical spaces by use of robot parts. In the case of OpenAI’s ChatGPT, the machine is the ChatGPT model running on remote servers (accessed through your phone or computer), and the environment is textual input from a human user.

Also, we should stop for a second and take note of the phrase “achieving their goals”. This should not be taken to mean literally the same thing as when a human being does so. The way we usually talk about humans having goals, we assume that the individual’s will plays some part in the equation. In the case of AI, on the other hand, the goal is set by a programmer. The AI will attempt to “achieve its goal” – but only in the sense of its programming being run until the goal-state is reached. To once again take the case of ChatGPT, the goal of ChatGPT is to produce a string of words that is perceived as an appropriate response to the input given by the human user.

The AI of a given cognitive technology uses signal processing techniques to receive data input. It then processes this input data, using its programming and vast amounts of (hopefully relevant) sample data. The relevance of the data is decided largely by the AI’s goal: some data has been identified as conducive to the AI reaching its goal (“You are getting there, almost there now!”) and some as detrimental to it (“No more of this!”). This has happened through reinforcement training: the programmer has “taught” the AI by signaling whether its outputs bring it closer to the goal. Having decided on an output which it deems will optimize its chances of reaching the goal-state, the AI finally produces changes in its environment. This can be done, for instance, by producing text or speech, or by moving a robotic arm.
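In rough outline, this perceive-process-act loop might be sketched as follows. Note that this is a generic illustration only; every function name here is a placeholder, not any real system’s API:

```python
# A generic sketch of the loop described above. Everything here is a
# placeholder: a real system's "sense", "score" and "act" involve
# signal processing, learned models and actuators far beyond this.

def run_agent(sense, propose, score, act, goal_reached, max_steps=100):
    """Perceive the environment, pick the output the AI scores as most
    likely to advance its goal, act, and repeat until the goal state."""
    for _ in range(max_steps):
        if goal_reached():
            break
        observation = sense()              # input via signal processing
        candidates = propose(observation)  # possible outputs to consider
        best = max(candidates, key=score)  # the goal-optimizing choice
        act(best)                          # produce changes in the world
```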

Important cognitive technologies:


• Speech recognition

• Object recognition

• Human–computer interaction

• Dialogue generation

• Narrative generation

• Machine learning (including deep learning, neural networks)

• Industrial robots

• Social robots

• Self-driving cars


Let us take a simple example. We have a cognitive technology, called Greet-O, which is programmed to recognize common verbal greetings and respond to them. A woman is standing in front of a microphone attached to Greet-O and says: “Hello”. The microphone delivers this sound to Greet-O’s AI programming as input data. The AI algorithm processes the sound to decide whether it is a greeting or not. By comparing the sound to huge amounts of previous data – that is, other sounds labeled as either greetings or non-greetings, which the AI has been trained to recognize through reinforcement training – the AI decides that the sound the woman made was a greeting. Following its programming, this decision produces a data output. The output comes in the form of a pre-recorded “Greetings to you”, actuated through a speaker, together with a wave of Greet-O’s robotic arm.
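A heavily simplified sketch of Greet-O’s decision step might look like this in Python. All the data and names below are invented for illustration; real speech recognition operates on audio features and learned statistical models, not on a handful of labeled phrases:

```python
import difflib

# Invented training data: phrases Greet-O has been "taught", with labels
# assigned during its (here: imagined) reinforcement training.
training_data = {
    "hello": "greeting", "hi there": "greeting", "good morning": "greeting",
    "what time is it": "not_greeting", "turn left": "not_greeting",
}

def classify(utterance: str) -> str:
    """Label an utterance by comparing it to the known training examples."""
    # Find the stored phrase most similar to the input...
    best = max(training_data,
               key=lambda phrase: difflib.SequenceMatcher(
                   None, utterance.lower(), phrase).ratio())
    # ...and inherit its label. Real systems score probabilities instead.
    return training_data[best]

def greet_o(utterance: str) -> str:
    if classify(utterance) == "greeting":
        return "Greetings to you"      # plus a wave of the robotic arm
    return ""                          # stay silent otherwise

print(greet_o("Hello"))    # Greetings to you
```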

Whenever a certain task can be carried out by AI processing input data to produce reliable output data, we refer to this as automation. Certain tasks are well-suited for automation; others are not. Whenever something is highly repetitive and requires only moderate mental activity – for instance going through thousands of Excel files searching for certain numbers in certain cells, as in the sketch below – automation is perfect. Even tasks that are mathematically advanced can be easily automated, as long as the data is kept fairly simple. But the more complex the behavior required, especially if it involves social behavior, the more difficult it is to automate using AI. Tasking an AI with achieving, for instance, the goal state “Child X raised to be maximally successful within the value framework of this particular society” might be near-impossible. That is, if the goal can even be formulated in a manner precise enough.
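The spreadsheet-searching task just mentioned takes only a few lines to automate. The sketch below assumes the third-party openpyxl library; the folder name and target value are invented for illustration:

```python
# A sketch of classic automation: scan every Excel file in a folder for
# a target value. Assumes the third-party "openpyxl" library is installed.
from pathlib import Path
from openpyxl import load_workbook

TARGET = 42_000                            # invented value to search for

for path in Path("reports").glob("*.xlsx"):    # invented folder name
    workbook = load_workbook(path, read_only=True)
    for sheet in workbook.worksheets:
        for row in sheet.iter_rows(values_only=True):
            for cell_value in row:
                if cell_value == TARGET:
                    print(f"Found {TARGET} in {path} ({sheet.title})")
```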

Artwork by Simon Stålenhag, from his book Europa Mekano (in development).

What it comes down to, simply put, is the amount and complexity of input and output data required to achieve the goal state in question. It is safe to say that no programmer in the world could provide the sample data training needed to achieve a goal state like the one above.

This, however, is where it gets interesting.

By training on sample data, the algorithms that constitute AI may “learn” to make predictions or decisions without being explicitly programmed to do so. All they do is aim for their goal state – e.g., generate revenue, make a cup of coffee, raise a child with traits X, Y and Z, or generate maximally positive polling results. At first, the AI values the different input and output data it comes across at random. Only after it has gradually acquired enough experience, through trial and error, to judge the significance of certain data in relation to the goal state, does the AI begin to reach the goal state. Machine learning is the study of computer algorithms that improve automatically in this way. The people involved in machine learning research are confident that this technique will open the door to achieving AI goal states and behavior wildly complex beyond human understanding.
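One of the simplest versions of this trial-and-error scheme is tabular Q-learning. The sketch below (with an invented five-position toy world) shows how values that start out random gradually come to point toward the goal state:

```python
import random

# A tiny invented world: positions 0..4 on a line; the goal is position 4.
ACTIONS = [-1, +1]                       # step left or step right
GOAL, START = 4, 0

# The agent's "experience": estimated value of each (position, action)
# pair, initially just meaningless random numbers.
q = {(s, a): random.random() for s in range(5) for a in ACTIONS}

for episode in range(500):               # many iterations of trial and error
    pos = START
    while pos != GOAL:
        # Mostly pick the action currently valued highest; sometimes explore.
        action = (random.choice(ACTIONS) if random.random() < 0.1
                  else max(ACTIONS, key=lambda a: q[(pos, a)]))
        new_pos = min(max(pos + action, 0), GOAL)
        reward = 1.0 if new_pos == GOAL else 0.0
        # Nudge the value estimate toward reward + best future value.
        best_next = (0.0 if new_pos == GOAL
                     else max(q[(new_pos, a)] for a in ACTIONS))
        q[(pos, action)] += 0.5 * (reward + 0.9 * best_next - q[(pos, action)])
        pos = new_pos

# After training, the learned values steer steadily toward the goal.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(4)])
# typically [1, 1, 1, 1]: "step right" everywhere
```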

To be sure, we are not talking about just a handful of iterations. Complex goal states require tons and tons of data for the AI algorithm to learn successfully. This brings us back to where we began: the increasing ICT usage which provides the data about everything that we do.

What, then, might the consequences be?

Put bluntly, AI will with some degree of certainty outperform humans in a steadily increasing number of tasks. Of course, this increases business revenue. It also frees up time for humans to do more advanced, meaningful or fun stuff. Yet at the same time it involves AI making decisions for us, and influencing our behavior. (See more about this under Work & Leisure.) Paraphrasing former Google employee Tristan Harris: AI does not need to match humans at their finest. Most of the time, matching humans at their worst is more than enough.

So how do we tackle this? Will AI simply outperform and outmaneuver us humans, and that will be the end of it?

Depending on who you ask, it might not be that simple. This is not least because what it means to be human may soon come to change drastically.


Biotechnologies


Have you ever wished that you were taller, smarter, or healthier? Before you know it, these things may no longer be a matter of genetic luck.

Biotechnology is the study of, and practices geared towards, developing products out of living systems and organisms. It is increasingly being employed in medicine, agriculture and industry. And its proponents have grand visions.


CRISPR is a family of DNA sequences found in the genomes of prokaryotic organisms. It plays an important role in these organisms’ immune defense, as it detects and destroys DNA from bacteriophages that have previously infected the organism. Cas9 is an enzyme which, when used together with CRISPR, may be used for gene editing. Acting as a tiny pair of molecular scissors, CRISPR/Cas9 can cut out specific pieces of DNA, allowing new pieces to be placed in the resulting gap. This in turn provides the organism in question with whatever trait is tied to the new DNA sequence.
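As a loose computational analogy (no more than that), the cut-and-replace operation can be pictured as find-and-replace on a string of DNA letters. The sequences below are invented, and real CRISPR targeting depends on much more, such as a nearby “PAM” motif and the cell’s own repair machinery:

```python
# A toy analogy of CRISPR/Cas9 as find-and-replace on a DNA string.
# Sequences are invented; real gene editing is vastly more involved.

genome = "ATGGCCTTAGGCTAGCAAATTGCC"
target = "TTAGGC"        # where the guide RNA directs Cas9 to cut
replacement = "TTCGGA"   # the sequence supplied for the repair

site = genome.find(target)            # locate the target sequence
if site != -1:
    edited = genome[:site] + replacement + genome[site + len(target):]
    print(edited)        # ATGGCCTTCGGATAGCAAATTGCC
```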

There is still a lot more to learn about the human genome. But as we learn more, CRISPR biotechnology means that the genome may be altered at our discretion. Certain features that are highly noticeable in a person may be tied to only one gene. Being “ginger”, light-skinned with red hair, is one such feature. Others, like height or intelligence, appear to result from a complex interplay of many genes.

This possibility of gene editing might be what allows us to keep up with the fast development of AI. Some futurists hold that the most likely scenario is one in which we merge with AI while at the same time enhancing our biological capacities beyond the merely human. Superintelligence, super-resilience, and super-longevity – all of these are real projects in the field of biotech.

Present-day biotech products:


• Genetically modified organisms (GMO)

  • Biomedical technologies

• Biodegradable materials

• Biofuels

• Directed use of microorganisms in manufacture

• Bioleaching (bacteria extracting metal from ores)

• Winemaking

• Cheesemaking

• Brewing

• Bioweapons

• Bioremediation


Biotech products of tomorrow:


• Human gene editing

• Life-extension

• Artificial biological intelligence

Biotech of today is not quite there yet, it is true. It has, however, already proven useful in combatting environmental disasters (e.g., using bioremediation to clean up a chemical leak). To be sure, it may also pose an environmental danger (e.g., if genetically modified organisms upset ecosystems).

Such is the peculiarity of new technologies: they present both threat and opportunity. In the case of biotech, mistakes might seriously muck up the biological world. And yet, this has nothing on the risks associated with the fourth and last of the fields of development we will be looking into. Make a mistake in nanotech, and the very fabric of material reality may cease to be what it was.


Nanotechnologies


Nanotechnology is the study of how materials function on the infinitesimally small level. It is concerned with creating materials and devices on the nanoscale, the level of atoms and molecules – devices small enough to enter our bloodstreams, for instance.

To get an idea of how small the nanoscale is, we might note that one nanometer equals one billionth of a meter, that is, something like what the diameter of a marble is to the diameter of planet Earth.
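The arithmetic behind this analogy is easy to check; the marble and Earth diameters below are rough approximations:

```python
# Checking the marble/Earth analogy for the nanoscale.
nanometer = 1e-9          # meters
marble = 0.013            # a marble is roughly 1.3 cm across
earth = 12_742_000        # Earth's diameter in meters (approx.)

print(nanometer / 1.0)    # 1e-09 : a nanometer compared to a meter
print(marble / earth)     # ~1e-09: a marble compared to the Earth
```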

What nanotech does is manipulate materials on the nanoscale, changing their structure and therefore also their function. Opaque materials, copper for instance, can be made transparent; insoluble materials, like gold, can be made soluble; and stable materials, like aluminum, can be made combustible. The fabric of reality is altered, opening up a world of possibility.

Some examples of the application of nanotechnology today are tennis, golf and bowling balls made more durable, cars manufactured with less material and needing less fuel, and trousers and socks made to last longer and keep people cool in warm weather.

A few years from now, new deployments of nanotech could manifest dramatically. Nanorobotics could send infinitesimally tiny robots surging through the air – as well as through any other material, including our bodies. Matter itself could become programmable matter, with materials changing properties at the flick of a finger. These advances would have drastic implications for every human endeavor.

Imagine objects appearing out of thin air, perhaps at a mere wave of your hand. Hungry? Have an apple. This certainly seems like magic to us. We may be tempted to say that it is impossible. But taking a moment to think about it, we know for a fact that similar replication is already part of our lives. Somehow, the average woman can put together food (junk food, even) to make a baby over the course of nine months. Nanotech aims to understand this process in minute detail, and to develop it further. This could make for endless possibilities.

But as we said before, the risks associated with nanotech are huge, too. The perhaps most graphic dystopian scenario is the one usually referred to as gray goo – a situation in which nanobots bent on replicating themselves turn all materials on earth (as well as in the solar system and so on) into more nanobots. This might of course seem far-fetched. More tangible are the toxicity hazards of materials that have been manipulated on the nanoscale. It appears that even very small adjustments, on a very, very small scale, might have great consequences that cannot easily be foreseen.

Present-day nanotech products:


• Cars needing less fuel

  • Cars manufactured with less material

  • Solar cells needing less silicon

• Display technology

• Pharmaceuticals and polymers

  • More precise and durable golf, tennis and bowling balls

• Ever tinier semiconductors

• Gecko tape

• Food packaging

• Disinfectants

• Sunscreen

• Cosmetics

• Clothes lasting longer and keeping cool in heat

• Furniture varnishes

• Bandages made to heal faster

• Biomedical applications such as tissue engineering, drug delivery, antibacterials and biosensors


Nanotech products of tomorrow:


• Nanorobotics

• Molecular nanotechnology

• Productive nanosystems (producing the parts for other nanosystems)

• Programmable matter



How Fast Might Things Go?  


One final word should be said about the prospect of current technological advances vastly overshooting even our wildest expectations.

The scientific community is, roughly, divided into skeptics, moderates and futurists with regard to the speed at which technological advances will occur, and into pessimists, moderates and optimists with regard to the benefit or harm that those advances will bring about.

Overall, it seems that there might be a slight tilt towards futurism and optimism in the scientific community as a whole. The quintessential futurist and optimist finds expression in renowned inventor Ray Kurzweil, who claims that human-level AI will arrive around 2029 and that a technological Singularity – in which human beings merge with AI that has by then achieved superintelligence – will occur around 2045. The skeptics, on the other hand, derogatorily call this vision “intelligent design for people with an IQ of 140” and maintain that highly advanced AI of the kind Kurzweil prophesies will take much longer to develop. Some of them, like the late Hubert Dreyfus, maintained that AI will never be capable of so-called “general” intelligence in the manner of humans, let alone so-called “super” intelligence, measured in thousands of IQ points.

Even so, tech essayist Tim Urban has suggested that weighing skeptical and futuristic assessments together, the scientific community as a whole appears to be expecting highly advanced AI – capable of general or perhaps even “super” intelligence – around the year 2060.

Where will we be then? Let us figure it out.


Read more:

Work & Leisure
Knowledge & Education
Politics & Crises
Romance & Family Life
Home


Sources for the above:

Official documents


EU Commission. Communications COM(2016) 381, COM(2018) 237, COM(2018) 795, COM(2019) 168, COM(2020) 64, COM(2020) 65 White Paper, COM(2021) 118, COM(2021) 205; Expert reports “The Future of Work? Work of the Future!” (2019), AIHLEG “Ethics Guidelines for Trustworthy AI” (2019)
EU Parliament. Draft report 2015/2103 (INL), Res 2015/2103 (INL), Res 2020/20 -12, -14 and -15 (INL)


Literature


Castells, Manuel. The Information Age Trilogy I: The Rise of the Network Society (2nd ed, Wiley-Blackwell 2010)
Castells, Manuel. The Information Age Trilogy II: The Power of Identity (2nd ed, Wiley-Blackwell 2010)
Castells, Manuel. The Information Age Trilogy III: End of Millennium (2nd ed, Wiley-Blackwell 2010)
Giddens, Anthony. Sociology (6th ed, Polity 2009)
Harari, Yuval Noah. 21 Lessons for the 21st Century (Spiegel and Grau 2018)


Online resources (visited April 2021)


Artificial Intelligence News. https://artificialintelligence-news.com/
MIT Technology Review. https://www.technologyreview.com/
State of AI Conference. https://www.stateof.ai/
Wait But Why. The AI Revolution (Part 1): The Road to Superintelligence https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
The AI Revolution (Part 2): Immortality and Extinction https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
Wikipedia. Entries on “Artificial intelligence”, “Automation”, “Biotechnology”, “Camera”, “Cognitive science”, “Computer”, “Computer science”, “CRISPR”, “CRISPR gene editing”, “Data”, “Digital Revolution”, “Digitization”, “Genetically modified organism”, “Information and communications technology”, “Information technology”, “Internet”, “Life-extension”, “Machine learning”, “Nanotechnology”, “Nanorobotics”, “Radio”, “Robotics”, “Satellite”, “Self-driving car”, “Smartphone”, “Telegraph”, “Telephone”, “Television”, “Turing machine”.
