Radius is a tech blog exploring the social effects of 21st century tech development. We aim to provide followers with a steady flow of relevant news in the field. We also do exclusive, in-depth interviews with people in tech, business and academia.
On the Radius website you find information on the most significant technologies of today and how they relate to each other. You also find information on vital areas of social impact: Work & Leisure, Knowledge & Education, Politics & Crises, and Romance & Family Life.
Radius gains from your interaction with its content in two ways: By advancing our perspective on technological development, and by forming a network of actors who are interested in the topic. We are highly interested in your personal or professional experience and opinion, so please do not hesitate to contact us.
Do families still play an important role in today’s society?
In most Western countries, divorce rates rose steadily over the second half of the 20th century. Today, the only people unlikely to divorce are those with high levels of education and income. The data on children of divorced parents are clear: in aggregate, they fare worse.
On the other hand, the fragmentation of the family, and of the patriarchal norms that underlie it, has opened up freedoms vastly greater than before. People are free to form identities and pursue life goals that would have interfered with traditional family roles, giving rise to new innovations and cultural expressions.
What appears to have happened is that the family is no longer considered necessary as a source of financial and emotional safety. Rather, it has come to be seen as a hindrance to freedom.
Technology lies at the root of these developments. First, by making possible the kind of work and abundance which, in industrial and post-industrial societies, renders the safety functions of the family redundant. Second, by providing contraceptives, making the policing of female sexuality redundant. And third, by providing brand new opportunities for romance and the management of family life.
Here at Radius, we will be chiefly concerned with the third kind of development: the most recent technologies of romance and family life.
Tinder logo.
Artwork by Simon Stålenhag, from his book Things from the Flood (2016).
Tinder, sex robots, prenatal diagnostics, gene editing and parental surveillance apps are all transforming the ways in which dating, sex, child-bearing and family life function. In the dance between safety and freedom, between the will to express oneself individually and children’s need for stability and emotional support, all of these technologies have the potential both for great improvement and for major disaster. Which of the two takes precedence is up to us to decide.
Literature
Beck, Ulrich. Risk Society (Sage 1992)
Castells, Manuel. The Information Age Trilogy II: The Power of Identity (2nd ed, Wiley-Blackwell 2010)
Giddens, Anthony. Sociology (6th ed, Polity 2009)
Online resources (visited June 2021)
Wikipedia. Entries on “CRISPR gene editing”, “Divorce”, “Effects of divorce”, “Marriage”, “Parental controls”, “Prenatal testing”, “Sex robot”.
Artificial intelligence (AI) will have an enormous impact on the way people live and work in the coming decades. This reasoning is the basis of the European strategy on AI, which was launched in April 2018 and has been confirmed since.
So begins one of the most recent communications on AI by the EU Commission, COM(2021) 205, produced in April 2021. It continues:
The potential benefits of AI for our societies are manifold, from less pollution to fewer traffic deaths, from improved medical care and enhanced opportunities for persons with disabilities and older persons to better education and more ways to engage citizens in democratic processes, from swifter adjudication to a more effective fight against terrorism and crime, online and offline, as well as enhancing cybersecurity. AI has demonstrated its potential by contributing to the fight against COVID-19 [bold lettering in the original], helping to predict the geographical spread of the disease, diagnose the infection through computed tomography scans and develop the first vaccines and drugs against the virus … At the same time, the use of AI also carries certain risks, such as potentially exposing people, including children, to significant mistakes that may undermine fundamental rights and safety, as well as to our democratic processes.
While ahead in terms of legislation, and perhaps also policy, the EU is lagging behind the US and East Asia in terms of actually developing the new technology. The US has long been the leader in technology development, and prefers to let the market, and the academic research associated with it, work its magic rather than legislate. China has long been pursuing a conscious strategy of claiming large market shares in new technological fields, seeing a window of opportunity as the playing field shifts. Of the nine or so companies driving worldwide AI development – Google, Amazon, Apple, Facebook, Microsoft, IBM, Baidu, Tencent and Alibaba – none are European.
The three pillars of the European brand of Trustworthy AI.
But there may still be time for the EU to catch up. The total amount of data in the world is projected to reach around 80 zettabytes in 2021 (it is a lot). By 2025, the total could be 180 zettabytes – more than double. As we explored under The Technologies, vast amounts of data are the key to developing truly powerful AI. The EU, the US and China are all competing for a piece of this steadily growing cake.
The struggle stands between these three giants. But there is no shortage of other players – a couple of years ago, Russia’s Vladimir Putin welcomed the prospect of developing AI and robotics weapon systems. This would, he suggested, make warfare a question of one side’s robots beating the other side’s robots, after which it would be game over. To this must be added the prospect of war changing in more subtle ways, as new techniques of cyberwarfare and information warfare are pursued, especially by those players who wish to change the status quo.
Amid the race towards developing the new technology – for economic as well as military purposes – at least three major crises appear to loom on our horizon. In the 21st century, we run the risk of environmental disaster, nuclear war, and societal disruption due to rapid technological advances. If we get any one of them wrong, getting the other two right will be of little comfort. At Radius, we nonetheless focus on the risk of technological disruption. With a bit of luck, getting this right can help us solve the other two.
How are the nations of the world rising to meet these challenges? To begin with, we should note that modern technological development has certainly contributed to the integration of world markets, peoples and governments. (This we usually call “globalization”.) But there are distinct developments in the opposite direction, towards compartmentalization, as well.
As the Cold War ended around 1990 with the collapse of the Soviet Union, what Francis Fukuyama has called “the End of History” began. The US rolled out its free trade policies around the world, and it was predicted that the whole world would soon be one great liberal democratic market economy.
Thirty years later, things no longer appear that simple. Hence, we may well talk about “the Return of History”.
Beginning sometime around the financial crisis of 2008, US policing of the world at large began to wane (or at least become less forceful). So, too, did belief in the liberal narrative. A nationalist reaction against the globalization that accelerated in the 1990s can now be seen in most Western countries. In Russia, China and Singapore, just to name a few, a distinct alternative to democracy has proven resilient.
Tackling the three issues of environmental catastrophe, nuclear war and technological disruption, it seems far from clear that the nation states of the world will be working as one. Alongside the issues at hand are political intrigues and local interests, and a reluctance to enter into global cooperation with the UN, the IMF and the US, which are taken to be wolves in sheep’s clothing. Against the tendency towards integration, states are maintaining nationalist, cultural and religious identities that are at odds with the liberal narrative.
This conflict between global networks of power and capital on the one hand, and local identities on the other, is present within societies as well. Sociologist Manuel Castells has dubbed this a conflict between the Net (faceless, globally networked processes) and the Self (a meaningful local identity). The new technologies are not only providing global networks with more powerful tools of communication and control. They are also creating the impetus for locally based identities to act as a counterpower, formed either in defiance of the power of global networks or around a project seeking to shape that power. Examples range from the Bible Belt conservatives of the US to Al Qaeda and ISIS, from the Catalonian nationalist movement to Extinction Rebellion. And these counterpowers make creative use of the new technology as well.
Artwork by Simon Stålenhag, from his book The Labyrinth (2020).
We should note that a failure to integrate the states of the world need not be decisive. The same goes for integrating different identities within societies. Rather than putting all the eggs in one basket, as the saying goes, a certain degree of discord and competition might help ensure that a multitude of solutions are tried out. It is a fine balancing act: avoiding war and otherwise excessive conflict, while at the same time resisting the temptation to make everything uniform and subject to the one true order.
In an important sense, the liberal narrative fails to provide that intense sense of absolute meaning and purpose that other creeds tend to offer. It asks a lot of its proponents, perhaps too much. The individual is left on her own, forced to amass the courage and self-sufficiency required to navigate through an absurd world. This appears to have become more apparent in the past 30 years. Economic brute force or superstitious categorical demands on action tend to win out more often than not. Nevertheless, the heart of the liberal narrative is not tied to the trajectory of any given system of meaning. It only insists on the mediation between ethical enterprises, never a final judgment. As such it is likely to prove resilient as the power and the counterpowers of the online and offline worlds continue to clash.
If we are to maintain a tentative belief in liberal democracy, understanding the implications of The Technologies of today is key to knowing how to navigate the landscape ahead. Even though global politics may appear to be distant and unimportant for one’s individual life, seeing the bigger picture of global cooperation and competition is important for recognizing the difference one may make in the everyday.
EU Commission. Communications COM(2016) 381, COM(2018) 795, COM(2021) 205; Expert report “The Future of Work? Work of the Future!” (2019)
EU Parliament. Draft report 2015/2103 (INL), Res 2015/2103 (INL); IPOL Study “European Civil Law Rules in Robotics” (2016), STOA Policy Briefing “Legal and ethical reflections concerning robotics” (2016)
Literature
Beck, Ulrich. Risk Society (Sage 1992)
Beck, Ulrich. “The Reinvention of Politics: Towards a Theory of Reflexive Modernization”, in Reflexive Modernization (Polity 1994)
Castells, Manuel. The Information Age Trilogy I: The Rise of the Network Society (2nd ed, Wiley-Blackwell 2010)
Castells, Manuel. The Information Age Trilogy II: The Power of Identity (2nd ed, Wiley-Blackwell 2010)
Castells, Manuel. The Information Age Trilogy III: End of Millennium (2nd ed, Wiley-Blackwell 2010)
Giddens, Anthony. “Living in a Post-Traditional Society”, in Reflexive Modernization (Polity 1994)
Harari, Yuval Noah. 21 Lessons for the 21st Century (Spiegel and Grau 2018)
Whatever your answer to this question, you will likely find something by searching the web. Depending on the topic, you might in fact find more information than you could ever go through in a lifetime.
The problem, then, is not to find information. The problem is to orient oneself.
Knowledge, understood as the capacity to understand, remember and effectively communicate or utilize information, is of central importance to all human activity. Questions of knowledge and truth have been at the forefront of public debates forever. But with the advent of the internet, social media and AI geared towards influencing our behavior, these questions have been given a new dimension.
Claims to truth tend to be extremely important for us – especially if they happen to coincide with what we already believe. Why is this? In part, it appears to have to do with our need to orient ourselves in a sea of information. In order to tell useful information from mere white noise, we need a frame of reference. Without such a frame of reference, we quickly run into a problem yet more acute than not knowing: we do not know how to act.
Optimally, we would like to start from a solid foundation – from the bedrock, so to speak, certain truths that cannot be put into question. Using these truths as a measuring rod, we could then tell fake news from real ones. Science might be held up as an example to follow: A steadily growing body of knowledge produced by use of a rigorous method.
On the other hand, not all things that we want to figure out as true or false appear as clear-cut as the truths dealt with in science. Whereas science is highly successful at telling material things apart, and at constructing mechanical or digital solutions to the problems we run into, it cannot define those problems for us or, strictly speaking, weigh different problems against one another. Put somewhat differently, science offers no account of how to value things. There thus seems to be cause for skepticism toward the scientific outlook and endeavor as well. Whenever someone claims that their actions follow from bedrock truth, someone else may very well be able to dig beneath that bedrock truth and throw new light on things.
There is a great temptation for individuals to simply allow themselves to be swallowed up by an identity, with its accompanying narrative, and to allow for this identity and narrative (through the people they associate with and the social media feed which perpetuates it) to tell true from false. To some extent, it may even be inevitable. But understanding the dynamics by which the internet, social media and AI of today function is the first step towards thinking more critically about one’s situation.
Scene from the documentary The Social Dilemma (2020).
The myriad perspectives that the internet presents us with mean that orientation has perhaps never been more difficult. But this difficulty need not be a bad thing. For one thing, it requires each individual to reflect critically not just on the information they are presented with, but also on their own actions.
Under the topic of The Technologies, we explored how information and communications technology (ICT) has been transforming our economies, politics and societies at large. In short, the increase of ICT capacity has become a goal in itself, alongside economic profit. Never have we had access to so much information, and yet there is much more to come. But the issue of how to value this information, how to decide which way it will inform our actions, remains an open question. Markets will provide their answers, as will totalitarian regimes and national and global policies at large.
At Radius we believe that you as an individual have the capacity to value the same information for yourself – and to act accordingly. Tentatively, we hold that the key lies in developing critical thinking alongside a capacity for informed action.
Critical thinking is a normative discipline, and as such, merely stating its central ideas will not provide a solution to the problem of critically evaluating information. This said, becoming acquainted with the discipline might still be of some use to you. The fundamentals of critical thinking may be said to be the following:
(1) Critical thinking operates on the assumption that logic provides a means of navigating the world. By reasoning rather than daydreaming, i.e., actively linking thoughts together rather than just letting them float free, we increase our chances of avoiding trouble and coming across great opportunity.
(2) Reasoning is externalized through the formulation of arguments. Put somewhat differently, the argument is the form of rational thinking. By pitting different arguments, and different speakers, against one another we can often collectively increase our understanding of complex issues in ways that are otherwise inaccessible to us.
(3) All arguments consist of premises lending support to at least one conclusion. The premises purport to contain information about the state of things; if they are true and are connected to one another and to the conclusion in a logically strong manner, they provide sufficient support for the conclusion. (A worked example follows after this list.)
(4) In order to assess an argument, we need to develop a set of three skills. The ability to interpret statements is needed to understand what a speaker intends to say. The ability to assess truth claims is needed to judge whether the premises of an argument are acceptable or not. The ability to think logically, finally, is needed to work out the relevance and adequacy of premises in relation to each other and to the conclusion. A useful shortcut to acquiring a good part of these skills is learning about common fallacies, i.e., kinds of arguments which may sound convincing but which are almost always wrong.
(5) When comparing different arguments, we need to always employ the principle of charity. This means interpreting the statements of a speaker in the most charitable way possible. In this way we dramatically increase our chances of obtaining and sharing useful information, as we minimize the risk of getting bogged down in futile misunderstandings.
(6) In order to correctly assess arguments in today’s world, an understanding of how the internet, AI, media (including social media) and the scientific community function is required. Here, other parts of the Radius platform might be of use to you.
(7) Finally, it is important to remember that acting in the world requires more than a knack for critical thinking. Having desires, letting your thoughts run free, going with a hunch – these things form an important part of life, too. You need to develop as a full person, not just a brain in a vat. Only in this way can your critical thinking amount to informed action.
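As a minimal illustration of point (3) – premises lending support to a conclusion – consider the classic argument form known as modus ponens (our example):

```latex
\begin{array}{l}
P_1:\ \text{If it has rained, the street is wet.} \\
P_2:\ \text{It has rained.} \\
\hline
C:\ \text{Therefore, the street is wet.}
\end{array}
```

If both premises are true and the form is valid, as it is here, the conclusion cannot fail to be true. Many common fallacies amount to arguments that mimic a valid form like this one without actually instantiating it.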
Historian Yuval Noah Harari, in his popular science book 21 Lessons for the 21st Century, gives similar but slightly less abstract advice. First, he urges everyone to try harder to tell reality from fiction, not expecting everything to come out perfectly clear, but nonetheless i) devoting time and energy to uncovering our own biases, and ii) checking our sources of information. Second, he advises against reading news for free (if something in the network society is free, it is because someone gains in other ways from your consumption of it).(1) Third, concerning questions of special importance to you, always read the relevant scientific literature.
It is easy to become resentful towards all attempts at making sense of the world. And yet, if we are to be able to orient ourselves, we need to continuously posit some things as true and act accordingly. In this way we tread the knife’s edge between absolute skepticism and absolute dogmatism.
This problem becomes acute in the schools of the 21st century. How do we teach children to become critically minded and self-sufficient, without at the same time making them disoriented and resentful? How can children be provided with useful tools of thinking and learning, without being made into mindless puppets? We will have to find out.
(1) Being a clever reader, you do right to wonder what Radius gains from providing you with news and information for free. We gain in two ways: By advancing our perspective on current tech development, and by forming a network of actors interested in this topic. We are highly interested in your personal or professional experience and opinion, so please do not hesitate to get in contact with us.
Education
Concerning the overarching problem of critical thinking and orientation in the world of internet, social media and AI, Harari gives the following hands-on advice to high school children:
i) Do not put excessive trust in adults, since they are probably not better equipped to deal with the ongoing radical change than you are,
ii) Do not trust in technology, since it might be helpful, but at the same time follows a logic of its own which might turn you into a pawn in its game,
iii) Do not trust in yourself, since the idea of “yourself” is little more than a reflection of state propaganda, ideological brainwashing and commercial advertising, but
iv) Know Thyself [my paraphrase], as the inscription on the Temple of Apollo at Delphi reads.
In order to maintain control over your own life, and in order to make an impression of your own on the future of life, you need to understand your own operating system. You need to understand who you are and what you want from life. This means questioning and getting rid of unnecessary illusions. It is not the same as trusting yourself, since it requires taking a more critical outlook on your biases and influences.
Exactly what methods will turn out to be successful in achieving a high-quality education for the future remains unclear. With a bit of luck, the application of the new technology to educational environments might provide some benefits. The COVID-19 pandemic meant a trial by fire for these kinds of activities – suddenly, millions of school children were forced to stay home and attend their classes online.
Prominent voices have suggested that, aiming to improve the critical thinking of students, education should be steered away from dispensing raw facts, which students may easily find on the internet anyway, towards teaching more advanced critical thinking and reasoning skills. Against this it could be argued that at least some raw facts, designated as true, are probably required if students are to be able to orient themselves.
To be sure, the education system already has another major challenge on its hands. With the ongoing changes in work tasks (see more about this under Work & Leisure) education will have to be fitted to the future of work.
Complex social and creative tasks, such as high-quality personal services, project development and existential guidance – these are the human work tasks of the 21st century. Constructing curricula to prepare students for these tasks will prove a challenge.
Concerning this problem, Harari advocates “the four C’s” of critical thinking, communication, collaboration and creativity. He holds that schools should generally ascribe less importance to technical abilities and instead emphasize general abilities. Harari underlines that the workers of tomorrow do not only need to come up with new ideas and invent stuff – above all, they need to be able to reinvent themselves over and over again. Similar ideas have also been put forward by Alibaba founder Jack Ma.
So what is actually being done? In the EU at least, considerations analogous to the ones outlined above seem to have gained some traction.
In a report on the future of work presented to the European Commission in May 2019, the most important thing for education was said to be the provision of the skills necessary to take on different, perpetually unknown activities – i.e., the same as Harari and others had suggested. Constructing curricula geared to meet the requirements of the emerging digital world would mean reviewing core subjects. It would also mean setting up an ecosystem of lifelong learning. In the face of a constant stream of new requirements on the job market, opportunity for lifelong education might help workers keep up.
Importantly, the report held that “the integration of computing into the school curriculum must not come at the expense of arts and humanities [my bold lettering], which hone the creative contextual and analytical skills that will probably become more, not less, important in a world shaped by AI.”
In a communication in August 2020, released together with its Digital Education Action Plan 2021-2027, the European Commission stated that “education and training are key for personal fulfilment, social cohesion, economic growth and innovation … the provision of digital skills for all during the digital and green transitions is of strategic importance for the EU”.
The plan itself appears to focus on two key points: i) the deployment of the new technology in schools with the aim of improving the quality of education, and ii) giving all students a basic set of digital competences, required to “live, work, learn and thrive in a world increasingly mediated by digital technologies”.
As one of its guiding principles, the plan states that digital skills are essential for life in a world such as this, going on to state that “information overload and the lack of effective ways to verify information make it all the more necessary for individuals to be able to critically approach, assess and filter information and be more resilient against manipulation”.
On the whole, these suggestions – pertaining to well-rounded skill sets, lifelong learning, the deployment of technology in education, and critical thinking – could well be pointing in the right direction.
Who knows, for sure anyway? One day we just might.
EU Commission. Communications COM(2020) 624 and COM(2021) 205; Expert report “The Future of Work? Work of the Future!” (2019); Digital Education Action Plan 2021-2027
Literature
Castells, Manuel. Communication Power (2nd ed, Oxford University Press 2013)
Castells, Manuel. The Information Age Trilogy I: The Rise of the Network Society (2nd ed, Wiley-Blackwell 2010)
Harari, Yuval Noah. 21 Lessons for the 21st Century (Spiegel and Grau 2018)
Hughes, William; Lavery, Jonathan; Doran, Katheryn. Critical Thinking (6th ed, Broadview Press 2010)
Work is a central human activity. In all cultures it is the basis for the economy, and thus at the core of social structure. It plays a major part in the structuring of people’s lives, in identity formation, life experience and relationships.
In modern capitalist economies, work is characterized by a highly complex and diverse division of labor, distributed globally. The last few decades – beginning with the economic reforms of the 1970s – have seen a fast development towards increasing specialization and flexibility of companies, and also a growing rift between those who produce great economic value and those who have much less to offer to the market.
This rift is made wider by the technological developments that have taken place over the same period. Manual labor has long been gradually replaced by automated machinery. And now, increasingly, even relatively complicated mental work is being replaced by automation in the form of AI. (See more about this under The Technologies.)
As of right now, it is very hard to predict exactly what the future developments will look like. Predictions range from humans being almost entirely replaced by AI, to no humans being replaced because new kinds of jobs will emerge as technology improves. Moderate predictions land somewhere in the middle.(1) It seems highly likely that a large number of jobs which have been considered somewhat advanced – like accountant, paralegal and insurance clerk – will be replaced by AI. Other jobs, especially ones which require social skills and creativity, will be harder for AI to replace, and will thus most likely be around longer.
These are the human work tasks of the 21st century: Complex social and creative tasks, such as high-quality personal services, project development and existential guidance.
As has been hinted above, the central conflict is the one between human capabilities and AI capabilities. Given that human capabilities are fixed while AI is constantly improving, the solution seems to lie in mitigating the potentially negative effects of replacing humans on the job market.(2)
Many such solutions center on an expansion of the social support systems already in place in most developed countries. The most well-known suggestion is perhaps that of Universal Basic Income (UBI): the practice of placing taxes on the individuals and companies that own the new AI and robotics, and then handing the proceeds out as income to the majority of people, who would have been rendered unemployed by technological advances. As historian Yuval Noah Harari has pointed out, this would in fact be the Communist vision come to life without violent revolution.
It might however be objected that this is too good to be true. Today, unemployment is one sure way towards disintegrating life satisfaction. How would human beings find meaning – or, if you like, distraction – in a world without work? Harari, and also physicist Max Tegmark, are optimistic on this point.
Harari notes the example of ultra-Orthodox Jews in Israel, who do not work and yet report the highest life satisfaction scores in the country.
Tegmark refers to studies in positive psychology, which have found that work provides a number of positive effects (such as a social network, a healthy lifestyle, respect and self-confidence, and a sense of meaning), and notes that all of these effects can, in principle, be achieved outside of work as well. He gives as examples sports, hobbies, studies, and social interaction with family, friends, teams, clubs, social groups, schools, religious and humanist organizations, political movements and other institutions. Quite a long list!
(1) Leaving out the most extreme predictions, in a report on the future of work presented to the European Commission in May 2019, titled “The Future of Work? Work of the Future!”, Michel Servoz referred to five studies conducted between 2013 and 2018 according to which between 14 % and 47 % of jobs would disappear due to automation, after potential job creation effects had been taken into account.
(2) Under The Technologies, we explore an alternative option: Improving humans through the use of AI and biotech. Of course, the two solutions need not be mutually exclusive.
Leisure Time
To be sure, we do not seem to have any shortage of leisure time activities. Sports, hiking, cooking, drawing, Pessoan daydreaming, reading, writing, thinking, and meeting with people in the physical world, just to name a few, are activities that generally take place during leisure time and are enjoyed by billions.
In the last twenty years, all of this activity has been deeply influenced by the emergence of the internet and social media. No matter what your interests in the real world are, going online (even very briefly, just to look something up) will have drastic implications. The amount of knowledge and inspiration available is mind-boggling. Finding like-minded people is just one click away.
But as this transformation of leisure time has progressed, a market for data has emerged. Gathering information about users, and using this data to change behaviors in the real world (e.g., getting people to buy certain products or vote a certain way), has become one of the most salient features of the first decades of our century. At the core, we once again find the conflict between human capabilities – largely instinctual and easily manipulated on the affective level, as it turns out – and the capabilities of AI, improving as the amount of collected data grows.
We do not yet have a very deep understanding of what the psychological consequences of living highly virtual and digitalized lives are. Certain studies have suggested that new generations of pre-teens and teens have worse mental health than previous generations did, and that this might have some connection with social media use. Without a doubt, constantly interacting with a network of billions of people and trillions of bits of information puts a strain on a species of social information gatherers like ourselves. It may be that we need to develop new cultural strategies for how to optimally interact with the technology, rather than leave it to the algorithms of large tech companies to program our behavior for us.
As we move further into this century’s 20s, the issue of surveillance and control through the use of AI is a major one. At the time of writing this, no straightforward solutions appear available. To be sure, the states of the world are taking different approaches. The issue relates to the classic conflict between freedom and security. On the one hand, giving up freedom might make us more secure. But on the other hand, it puts us at the mercy of whoever is providing security.
Artwork by Simon Stålenhag, from his book The Electric State (2017).
And the problem runs deeper than that. As we mentioned above, AI allows for influencing the behavior of individuals. This means that the very foundation of liberal democracy – the idea that the will of the single individual (whatever that is) should be considered as valuable in itself, and equal to the will of everyone else – is weakened considerably.
The implications for everyday life of the new technology are far-reaching. Not least, as has been suggested by sociologist Manuel Castells, a transformation of our perception of space and time appears to have taken place.
Places are nowadays rarely significant on account of what they look like to the naked eye, or their relative geographical position. Rather, what is significant about a place is how it relates to global flows of information and capital. What cannot be communicated through a Facebook post may still feel meaningful, but it will not inform your friends and acquaintances of your current status the way a picture or a few lines of text will. London’s business district has infinitely more in common with the business districts of Paris, Tokyo or the Pearl River Delta megalopolis than with the countryside just outside it.
Time used to be something we experienced as everyday events taking seconds, minutes, days, months or years to unfold. Nowadays, the same events may happen more or less instantaneously. Want to talk to someone who is not around? They will have your text on their phone instantly. Want to see what a new housing project will look like once it is done? Step inside the immersive simulation. Other developments perturb clock time and biological time in similar ways. Don’t want to become pregnant at 20, but still have sex? Contraceptives are the way to go. Still want to become pregnant at 45? Assisted conception is your friend.
Not least, our perception of reality itself is being transformed by the new technology. Going online, we potentially come in contact with every cultural expression there is. This can be a major challenge to assimilate, since our sense of identity is tied up with a much more limited set of cultural expressions. As our lives mingle with the Net in this way, everything takes on a multiplicity of new dimensions.
EU Commission. Communications COM(2016) 381, COM(2018) 237, COM(2018) 795, COM(2019) 168, COM(2020) 64, COM(2020) 65 White Paper, COM(2021) 118, COM(2021) 205; Expert reports “The Future of Work? Work of the Future!” (2019), AI HLEG “Ethics Guidelines for Trustworthy AI” (2019)
EU Parliament. Draft report 2015/2103 (INL), Res 2015/2103 (INL), Res 2020/2012, 2020/2014 and 2020/2015 (INL)
Literature
Castells, Manuel. Communication Power (2nd ed, Oxford University Press 2013)
Castells, Manuel. The Information Age Trilogy I: The Rise of the Network Society (2nd ed, Wiley-Blackwell 2010)
Castells, Manuel. The Information Age Trilogy II: The Power of Identity (2nd ed, Wiley-Blackwell 2010)
Castells, Manuel. The Information Age Trilogy III: End of Millennium (2nd ed, Wiley-Blackwell 2010)
Harari, Yuval Noah. 21 Lessons for the 21st Century (Spiegel and Grau 2018)
Tegmark, Max. Life 3.0 (Knopf 2017)
Online resources (visited April 2021)
Artificial Intelligence News. artificialintelligence-news.com
MIT Technology Review. https://www.technologyreview.com/
State of AI Conference. https://www.stateof.ai/
The Social Dilemma. thesocialdilemma.com
Wikipedia. Entries on “Artificial intelligence”, “Social media”.
Do you remember what life before your first smartphone was like?
Technological development in the past few hundred years has been incredibly swift. Zooming in on the last 30 years or so, it has been swifter still. And looking at just the past few years (or why not months?) changes are happening so fast that new technology barely has time to be implemented before it is obsolete.
But what is all the fuss about here? What is new about this technological development? We will get more fancy machines and gadgets, sure. But we will nonetheless go on operating them as we have operated our cars, phones and computers in the past, right?
There is a major difference between the technologies emerging today and the technologies of the past. Whereas before, pretty much only human physical labor was being replaced, today, human mental labor is increasingly being outmatched by artificial intelligence (AI).
And it is not just that AI takes over human tasks and carries them out as before. Rather, AI and the host of technologies associated with it have capabilities that will vastly transform our societies. Much of the new technology brings with it changes that profoundly affect our economies and politics, our desires and behaviors, our sense of time and space, and our identities.
What will life be like when many humans will not be needed for work? How will an economy largely run by AI develop? What political developments will result from AI use? What will merging our brains with AI be like?
“The Course 1”, artwork by Simon Stålenhag (2015).
Questions like these may be skipping too far ahead. Let us instead start by getting acquainted with the current situation. What follows is a brief overview of the state of present-day technology, as it informs the investigation into social implications conducted elsewhere on the Radius platform.
To gain a useful overview of today’s technological developments, we might divide them into roughly the following four categories:
Information and communications technologies
Cognitive technologies
Biotechnologies
Nanotechnologies
In the following, we will explore what these different fields encompass, what implications they might have, and how they relate to one another. Simplifying somewhat, we might first say this:
(1) Information and communications technologies underlie all the other developments. This is because increasing capacity to gather and digitize information (i.e., turn information into digits which may be handled by computers), and to communicate the resulting data effectively, enables technological innovation as such to work at a faster and faster pace. Once you have gathered some information and possess a means to communicate it with others, gathering and communicating more information in this way becomes easier.
(2) Whenever a cognitive task involved in technological innovation has been sufficiently grasped and may be represented mathematically (in the form of an algorithm), the task may be automated using cognitive technologies, i.e., products stemming from the use of AI. The process of automating tasks may in turn enable innovation which would have been impossible without use of automation.
(3) Limitations to the scope of innovation present themselves in the form of the laws of physics and all things that follow necessarily from those laws. (A lot of things do.) Two major examples are the limitations of biology – the lifespan, fertility, proneness to disease, genetic variation and so on of humans and other living things – and the limited properties of physical materials – weakness, scarcity, weight and so on. Concerning biological limitations, biotechnologies are emerging to modify these and to open up new avenues of possibility for humankind. Concerning the limitations of the laws of physics and material properties, nanotechnologies are emerging to try to alter the physical world on the nanoscale.
What is “the nanoscale”, you ask? That is easy – one nanometer is to a meter what the diameter of a marble is to the diameter of planet Earth.
Let us dig into it.
Information and Communications Technologies (ICT)
Developments in information and communications technology (ICT) underlie the exponential development in all other technological fields. First, we might note that they follow a long tradition:
The written word, ……… invented c. 3200 BC,
the printed word, ………. 1454 AD,
the telegraph, …………… 1830s,
phone, ……………………… 1876,
camera, ……………………. 1888,
radio, ……………………….. 1895,
radar, ……………………….. 1904,
television, …………………. 1927,
together with developments in transportation all did their part in improving information storage and communication.
That said, these precursors pale in comparison with the capacity and implications of what is going on today. Information (i) being stored and communicated using computers (ii), most often via the internet (iii) – this is today’s ICT in a nutshell.
(i) Numbers may be used to express most anything, such as an imagined picture of the world or a sort of translation of regular language. Another way of putting this is that digits may contain information about the world. Since numbers may be calculated in a (normatively) logical manner, anything that computes numbers (i.e., a computer) can arrive at the logical implications of the information in question. If the information is well-ordered and relevant for the computer’s aims, this is a big deal.
At first the only computer of interest for us humans was – well, us. But since the invention of the synthetic computer a revolution in ICT has taken place. Of course, we can as humans still process and communicate information in all the ways that are not a matter of digital computations, much like we did before. But with the advent of synthetic computers the scope and character of these activities have been drastically altered.
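As a small, self-contained illustration of what “turning information into digits” means (our example, in Python), here is how a piece of text becomes numbers, and how those numbers become patterns of 1s and 0s:

```python
# Characters -> numbers -> binary patterns: digitization in miniature.

text = "Hi"
numbers = [ord(ch) for ch in text]           # each character maps to a number
bits = [format(n, "08b") for n in numbers]   # each number as an 8-bit pattern

print(numbers)   # [72, 105]
print(bits)      # ['01001000', '01101001']
```

Once text, images or sound have been recast as numbers in this way, a computer can store them, send them, and calculate with them.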
(ii) The principle governing the modern computer was first proposed by Alan Turing in a 1936 paper. Turing’s idea was simple: to build a machine capable of computing anything computable, by executing instructions stored on a tape. About a decade later, machines like this were starting to be built. And over the course of the next few decades, they were set to improve – as demonstrated, probably, by the device on which you are reading this. At bottom, all computers have two main parts: processor parts, which perform the calculations (operating on “bits” of information, binary variables – 1 or 0 – physically represented by switches turning on and off), and memory parts, which store information for future calculation (the amount of stored information being measured in “bytes”, usually 8 bits lumped together). To these parts are attached an input function and an output function. It may seem baffling that advanced calculations can be performed using only 1s, 0s and physical parts in this way, but with the improvements in mathematics that took place in the late 1800s and early 1900s, this is really the case.
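To make Turing’s idea a little more concrete, here is a minimal sketch in Python (our toy illustration, not anything from Turing’s paper): a “machine” that walks along a tape of symbols and rewrites them according to a fixed rule, in this case adding one to a binary number.

```python
# Toy tape machine: increment a binary number written on a tape,
# e.g. "1011" (eleven) becomes "1100" (twelve).

def increment_binary(tape):
    cells = list(tape)
    head = len(cells) - 1           # start at the rightmost cell
    while head >= 0:
        if cells[head] == "0":      # 0 plus carry -> 1, and we are done
            cells[head] = "1"
            return "".join(cells)
        cells[head] = "0"           # 1 plus carry -> 0, carry moves left
        head -= 1
    return "1" + "".join(cells)     # carry ran off the tape: extend it

print(increment_binary("1011"))     # prints 1100
```

All the machine ever does is read a cell, write a cell, and move its head – and yet, given enough tape and the right instruction tables, machines of this kind can compute anything computable.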
The actual workings of a computer in the physical world may, in highly simplified terms, be explained as follows: 1) electricity from an industrial power source is supplied to the physical system capable of representing mathematical calculations, which then 2) runs its calculations based on its physical and (hopefully) logical constitution, together with input received and mediated by electrical impulses (or, in the case of fiber optics, pulses of light) running through its circuits and producing physical effects on its processor and memory parts, and then, finally, 3) communicates the resultant calculations to a physical system which in turn produces an effect in the world (such as a line of text or an image on a lighted glass screen, or a robotic movement).
(iii) The immediate foundations of today’s ICT were laid down beginning in the late 1960s. Interest in computers brought one key invention after the other, such as microprocessors, microcomputers, operating systems, digital switches, the satellite (in 1957) and optical fiber. This in turn provided incentive for even more information to be digitized. It would still be a few decades until computers and the internet diffused on a wide scale. But already in the 1970s and 80s, adventuresome companies were beginning to implement digital technologies. This process, which is still unfolding today, is known as the digital revolution. Its gradual but fairly swift progress is well demonstrated by the coming into existence of its second most central feature (after the computer): the internet.
The internet was first deployed in 1969 by the US Defense Department, as a defensive measure in the event of Soviet nuclear attacks on communication infrastructure. Originally, it could only be accessed by a chosen few, mostly people in the Defense Department and at universities. Those adept enough to navigate an early computer were not too many. And even if you could get access, there was not really any content or interaction to be had. But gradually – as personal computers diffused, the bandwidth of telecommunications grew, and user-friendly software was developed – social demand for digital networking began to rise. The World Wide Web server and browser were launched in 1990, perhaps marking the definitive beginning of a new era. Now, computers process information not only on their own but also together, in a vast web of digitized information.
The internet today is thus a global network of interconnected computers, coming together to create a global multimedia library (“cyberspace”), navigated by use of web browser software. A tiny percentage of the information made available on the internet every day gains the attention of billions of people, and among the heaps of other information available, people have the means to dig up highly specific information about almost anything. When necessary, parts of the web may be walled off and reserved for a smaller audience. (Unless someone figures out a way to scale the wall, that is.) This virtual landscape is used by companies as a means to make profit, by governments as a means to exercise power, and by people as part of life.
Important ICTs:
• Computers
• The internet
• Satellites
• Smartphones
• Television
• Radio
• Telephones
• Digital cameras
• Digital audio recording devices
• Quantum computers
It has been argued, most prominently by sociologist Manuel Castells, that these developments in ICT have played a major part in producing a new economic and social paradigm. Castells calls this the informationalist paradigm. His observation is that accumulation of knowledge and higher levels of complexity in information processing – i.e., continuous technological development – has become a goal in itself, complementing the usual orientation towards economic growth.
This, in turn, has given rise to what Castells calls the network society and network economy. As information processing has become more advanced, the flexible, decentralized organizational form of the network is increasingly outcompeting organizations that operate along traditional, hierarchical lines. Simply put, the benefits to communication and information processing that traditional organizations provide are increasingly made redundant by ICT advances. This provides incentive both for increased networking and for continued technological development.
Furthermore, Castells argues that the increasing use of ICT in our everyday lives contributes to cultural changes. This happens as ICT changes our perception of time and space, and as we come in contact with a multitude of cultural expressions in the virtual world which often stand in opposition to our own sense of identity. More on this in other parts of the Radius platform.
As the companies, governments and peoples of the world come to use ICT more and more, massive amounts of data are being accumulated. Right at the start we mentioned artificial intelligence (AI) as the dividing line between the technologies of old and today’s situation, as regards social implication. AI runs on data. As we communicate, share, and store information in our day-to-day lives, AI watches and learns.
But how? And why? This brings us neatly to our next topic.
Cognitive Technologies
AI is the name for intelligence demonstrated by machines, in that they perceive their environment and take actions that maximize their chances of successfully achieving their goals. Cognitive technologies are the products of the field of AI.
“Machine” and “environment” need not be taken too literally, however: there are AI programs that navigate cyberspace through the use of software, rather than physical spaces by use of robot parts. In the case of OpenAI’s ChatGPT, the machine is the ChatGPT model running on remote servers, accessed from your phone or computer, and the environment is the textual input provided by a human user.
Also, we should stop for a second and take note of the phrase “achieving their goals”. This should not be taken to mean literally the same thing as when a human being achieves a goal. The way we usually talk about humans having goals, we assume that the individual’s will plays some part in the equation. In the case of AI, on the other hand, the goal is set by a programmer. The AI will attempt to “achieve its goal” – but only in the sense of its programming being run until the goal state is achieved. To once again take the case of ChatGPT, its goal is to produce a string of words that is perceived as an appropriate response to the input given by the human user.
The AI of a given cognitive technology uses signal processing techniques to receive data input. It then processes this input data, using its programming and vast amounts of (hopefully relevant) sample data. The relevance of the data is decided largely by the AI’s goal: some data has been identified as conducive to the AI reaching its goal (“You are getting there, almost there now!”) and some as detrimental to the AI’s goal (“No more of this!”). This has happened through reinforcement training: The programmer has “taught” the AI by indicating to it whether the data is helping it reach its goal. Deciding on an output which the AI deems will optimize its chances of reaching the goal-state, the AI finally produces changes in its environment. This can be done, for instance, by producing text or speech, or by moving a robotic arm.
Important cognitive technologies:
• Speech recognition
• Object recognition
• Human–computer interaction
• Dialogue generation
• Narrative generation
• Machine learning (including deep learning, neural networks)
• Industrial robots
• Social robots
• Self-driving cars
Let us take a simple example. We have a cognitive technology, called Greet-O, which is programmed to recognize common verbal greetings and respond to them. A woman is standing in front of a microphone attached to Greet-O and says: “Hello”. The microphone delivers this sound to Greet-O’s AI programming as input data. The AI algorithm processes the sound to decide whether it is a greeting or not. By comparing the sound to huge amounts of previous data, that is, other sounds expressing either greetings or not greetings, which the AI has been trained to recognize by reinforcement training, the AI decides that the sound that this woman made was a greeting. Following its programming, this decision produces a data output. The output comes in the form of a pre-recorded “Greetings to you”, actuated through a speaker, together with a wave of Greet-O’s robotic arm.
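For the curious, the decision step might be caricatured in a few lines of Python. This is our toy sketch of the Greet-O example, with the speech processing reduced to simple text matching – a real system would compare the audio against its training data using a learned model:

```python
# Toy Greet-O: input (text standing in for sound) -> decision -> output.

KNOWN_GREETINGS = {"hello", "hi", "hey", "good morning", "good evening"}

def is_greeting(utterance):
    # Stand-in for the trained model's decision step.
    return utterance.lower().strip(" .!?") in KNOWN_GREETINGS

def greet_o(heard):
    if is_greeting(heard):                # process the input data
        print("Greetings to you")         # output: pre-recorded reply
        print("*waves robotic arm*")      # output: robotic movement
    else:
        print("*remains silent*")

greet_o("Hello")    # -> Greetings to you / *waves robotic arm*
```

The interesting part of a real Greet-O is, of course, everything this sketch leaves out: how the system learns, from thousands of labeled sound samples, what should count as a greeting in the first place.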
Whenever a certain task can be carried out by AI processing input data to make reliable output data, this is referred to as automation. Certain tasks are well-suited for automation. Others are not. Whenever something is highly repetitive and requires semi-high mental activity, for instance going through thousands of Excel files searching for certain numbers in certain boxes, automation is perfect. Even tasks that are highly mathematically advanced can be easily automated, as long as the data is kept fairly simple. But the more complex the behavior required, especially if it involves social behavior, the more difficult it is to automate using AI. Tasking an AI with achieving, for instance, the goal state “Child X raised to be maximally successful within the value framework of this particular society” might be near-impossible. That is, if the goal can even be formulated in a manner that is precise enough.
Artwork by Simon Stålenhag, from his book Europa Mekano (in development).
What it comes down to, simply put, is the amount and complexity of input and output data required to achieve the goal state in question. It is safe to say that no programmer in the world could provide the sample data training needed to achieve a goal state like the one above.
This, however, is where it gets interesting.
By training on sample data, the algorithms that constitute AI may “learn” to make predictions or decisions without being explicitly programmed to do so. All they do is aim for their goal state, e.g., generate revenue, make a cup of coffee, raise a child with traits X, Y and Z, or generate maximally positive polling results. The different input and output data that the AI comes across are at first valued at random. Only once the AI has gradually acquired enough experience, through trial and error, to decide what the significance of certain data is in relation to the goal state, does it begin to reach the goal state. Machine learning is the study of computer algorithms that improve automatically in this way. The people involved in machine learning research are confident that this technique will open the door to AI goal states and behavior of a complexity far beyond human understanding.
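The trial-and-error loop itself can be shown in miniature. Below is our sketch (a classic perceptron, one of the simplest learning algorithms, not any particular modern system): the program starts with random weights, guesses, is told how wrong it was, and nudges itself toward the goal – here, learning the logical AND of two inputs:

```python
# Minimal learning by trial and error: a perceptron learns logical AND.
import random

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2 = random.uniform(-1, 1), random.uniform(-1, 1)  # random initial "valuations"
b = random.uniform(-1, 1)

for _ in range(100):                       # many rounds of trial and error
    for (x1, x2), target in samples:
        guess = 1 if w1*x1 + w2*x2 + b > 0 else 0
        error = target - guess             # feedback: how wrong was the guess?
        w1 += 0.1 * error * x1             # nudge the weights toward the goal
        w2 += 0.1 * error * x2
        b  += 0.1 * error

for (x1, x2), target in samples:
    print((x1, x2), "->", 1 if w1*x1 + w2*x2 + b > 0 else 0, "expected", target)
```

Scale the same idea up from three numbers to billions, and from four samples to the data trails of entire populations, and you have the outline of the machine learning discussed here.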
To be sure, we are not talking just a handful of iterations. Complex goal states require tons and tons of data for the AI algorithm to learn successfully. This brings us back to where we began: The increasing ICT usage which provides the data about everything that we do.
What, then, might the consequences be?
Put bluntly, AI will with some degree of certainty outperform humans in a steadily increasing number of tasks. Of course, this increases business revenue. It also frees up time for humans to do more advanced, meaningful or fun stuff. Yet, at the same time it includes making decisions for us, and influencing our behavior. (See more about this under Work & Leisure.) Paraphrasing former Google employee Tristan Harris, AI does not need to match humans at their finest. Most of the time, matching humans at their worst is more than enough.
So how do we tackle this? AI will outperform and outmaneuver us humans, and that will be the end of it?
Depending on who you ask, it might not be that simple. This is not least because what it means to be human may soon come to change drastically.
Biotechnologies
Have you ever wished that you were taller, smarter, or healthier? Before you know it, these things may no longer be a matter of genetic luck.
Biotechnology is the study of, and practices geared towards, developing products out of living systems and organisms. It is increasingly being employed in medicine, agriculture and industry. And its proponents have grand visions.
CRISPR is a family of DNA sequences found in the genomes of prokaryotic organisms, where it plays an important role in immune defense: it detects and destroys DNA from bacteriophages that have previously infected the organism. Cas9 is an enzyme which, when used together with CRISPR, may be used for gene editing. Acting as a tiny pair of molecular scissors, CRISPR/Cas9 can cut specific pieces of DNA out of a sequence and splice new pieces in. This in turn provides the organism in question with whatever trait is tied to the new DNA sequence.
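As a loose programming analogy (entirely ours – the biochemistry is vastly more involved), gene editing with CRISPR/Cas9 works somewhat like “find and replace” on a very long string: a guide sequence tells Cas9 where to cut, and a new sequence is inserted at that position:

```python
# Toy analogy only: a genome as text, edited by find-and-replace.

genome = "TTGACCGATTACAGGCTTAA"
target = "GATTACA"      # the sequence the guide RNA homes in on
repair = "GATTTCA"      # the sequence supplied to replace it

edited = genome.replace(target, repair, 1)
print(edited)           # TTGACCGATTTCAGGCTTAA
```

What makes the real technology remarkable is that this kind of edit can be performed inside living cells, on a “string” of some three billion characters.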
There is still a lot to learn about the human genome. But as we learn more, CRISPR biotechnology means that the genome may be altered at our discretion. Certain highly noticeable features may be tied to a single gene; being "ginger", light-skinned with red hair, is one such feature. Others, like height or intelligence, appear to result from a complex interplay of many genes.
This possibility of gene editing might be what allows us to keep up with the fast development of AI. Some futurists hold that the most likely scenario is one in which we merge with AI while at the same time enhancing our biological capacities beyond the merely human. Superintelligence, super-resilience and super-longevity are all real projects in the field of biotech.
Present-day biotech products:
• Genetically modified organisms (GMO)
• Biomedical technologies
• Biodegradable materials
• Biofuels
• Directed use of microorganisms in manufacture
• Bioleaching (bacteria extracting metal from ores)
• Winemaking
• Cheesemaking
• Brewing
• Bioweapons
• Bioremediation
Biotech products of tomorrow:
• Human gene editing
• Life-extension
• Artificial biological intelligence
Biotech of today is not quite there yet, it is true. It has, however, already proven useful in combating environmental disasters (e.g., bioremediation used to clean up a chemical leak). To be sure, it may also pose an environmental danger (e.g., genetically modified organisms upsetting ecosystems).
Such is the peculiarity of new technologies: they present themselves as both threat and opportunity. In the case of biotech, mistakes might seriously muck up the biological world. And yet, this has nothing on the risks associated with the fourth and last of the fields of development we will be looking into. Make a mistake in nanotech, and the very fabric of material reality may cease to be what it was.
Nanotechnologies
Nanotechnology is the study of how materials function on the infinitesimally small level. It is concerned with creating materials and devices on the nanoscale, the level of atoms and molecules – devices small enough to enter our bloodstreams, for instance.
To get an idea of how small the nanoscale is: one nanometer is one billionth of a meter. That is roughly the ratio of a marble's diameter to the diameter of planet Earth.
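The analogy survives a quick back-of-the-envelope check. Earth's diameter is about 12,742 km; the marble diameter is our assumption.

```python
# Comparing the two ratios: nanometer-to-meter and marble-to-Earth.
nanometer = 1e-9            # meters
earth_diameter = 1.2742e7   # meters (~12,742 km)
marble_diameter = 0.013     # meters (assumed ~1.3 cm)

print(nanometer / 1.0)                   # 1e-09
print(marble_diameter / earth_diameter)  # ~1.02e-09, same order of magnitude
```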
What nanotech does is manipulate structures on the nanoscale, changing a material's structure and thereby also its function. Opaque materials, copper for instance, can be made transparent; insoluble materials, like gold, can be made soluble; and stable materials, like aluminum, can be made combustible. The fabric of reality is altered, opening up a world of possibility.
Some examples of nanotechnology applied today are tennis, golf and bowling balls made more durable, cars manufactured with less material and needing less fuel, and trousers and socks made to last longer and keep people cool in warm weather.
A few years from now, new deployments of nanotech could manifest dramatically. Nanorobotics could send infinitesimally tiny robots surging through the air, as well as through any other material, including our bodies. Matter itself could become programmable, changing its properties at the flick of a finger. These advances would have drastic implications for every human endeavor.
Imagine objects appearing out of thin air, perhaps at a mere wave of your hand. Hungry? Have an apple. This certainly sounds like magic, and we may be tempted to call it impossible. But think about it for a moment: similar replication is already part of our lives. Somehow, the average woman can assemble food (junk food, even) into a baby over the course of nine months. Nanotech aims to understand this kind of molecular assembly in minute detail, and to develop it further. That could make for endless possibilities.
But as we said before, the risks associated with nanotech are huge, too. Perhaps the most graphic dystopian scenario is the one usually referred to as gray goo: a situation in which nanobots bent on replicating themselves turn all materials on Earth (and in the solar system, and so on) into more nanobots. This might of course seem far-fetched. More tangible are the toxicity hazards of materials that have been manipulated on the nanoscale. It appears that even very small adjustments, on a very, very small scale, might have great consequences that cannot easily be foreseen.
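What makes gray goo so graphic is the arithmetic of exponential growth. The figures below are pure assumptions on our part, a femtogram-sized nanobot and a doubling every 100 seconds, but they show how few doublings the scenario would actually require.

```python
# How many doublings until self-replicating nanobots match Earth's mass?
# Nanobot mass and doubling time are illustrative assumptions.
import math

earth_mass = 5.97e24    # kg
nanobot_mass = 1e-15    # kg (assumed: about a femtogram)
doubling_time = 100     # seconds (assumed)

doublings = math.log2(earth_mass / nanobot_mass)
print(f"{doublings:.0f} doublings")                     # ~132
print(f"{doublings * doubling_time / 3600:.1f} hours")  # ~3.7 hours
```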
Present-day nanotech products:
• Cars needing less fuel
• Cars being manufactured with less material
• Solar cells needing less silicon
• Display technology
• Pharmaceuticals and polymers
• More precise and more durable golf, tennis and bowling balls
• Ever tinier semiconductors
• Gecko tape
• Food packaging
• Disinfectants
• Sunscreen
• Cosmetics
• Clothes lasting longer and keeping cool in heat
• Furniture varnishes
• Bandages that make wounds heal faster
• Biomedical applications such as tissue engineering, drug delivery, antibacterials and biosensors
Nanotech products of tomorrow:
• Nanorobotics
• Molecular nanotechnology
• Productive nanosystems (producing the parts for other nanosystems)
• Programmable matter
How Fast Might Things Go?
One final word should be said about the prospect of current technological advances overshooting even our wildest expectations.
The scientific community is, roughly, divided into sceptics, moderates and futurists with regard to how fast technological advance will occur, and into pessimists, moderates and optimists with regard to the benefit or harm it will bring about.
Overall, there seems to be a slight tilt towards futurism and optimism in the scientific community as a whole. The quintessential futurist and optimist is renowned inventor Ray Kurzweil, who claims that AI will reach human-level intelligence around 2029, and that a technological Singularity, in which human beings merge with AI that has by then achieved superintelligence, will occur by 2045. The sceptics, on the other hand, derogatorily call this vision "intelligent design for people with an IQ of 140" and maintain that the highly advanced AI Kurzweil prophesies will take much longer to develop. Some of them, like the late Hubert Dreyfus, held that AI will never be capable of so-called "general" intelligence in the manner of humans, let alone "super" intelligence measured in thousands of IQ points.
Even so, tech essayist Tim Urban has suggested that, weighing sceptical and futurist assessments together, the scientific community as a whole appears to expect highly advanced AI, capable of general or perhaps even "super" intelligence, around the year 2060.
Official documents
EU Commission. Communications COM(2016) 381, COM(2018) 237, COM(2018) 795, COM(2019) 168, COM(2020) 64, COM(2020) 65 (White Paper), COM(2021) 118, COM(2021) 205; expert reports "The Future of Work? Work of the Future!" (2019) and AI HLEG, "Ethics Guidelines for Trustworthy AI" (2019)
EU Parliament. Draft report 2015/2103(INL), Res 2015/2103(INL), Res 2020/2012(INL), Res 2020/2014(INL) and Res 2020/2015(INL)
Literature
Castells, Manuel. The Information Age Trilogy I: The Rise of the Network Society (2nd ed, Wiley-Blackwell 2010)
Castells, Manuel. The Information Age Trilogy II: The Power of Identity (2nd ed, Wiley-Blackwell 2010)
Castells, Manuel. The Information Age Trilogy III: End of Millennium (2nd ed, Wiley-Blackwell 2010)
Giddens, Anthony. Sociology (6th ed, Polity 2009)
Harari, Yuval Noah. 21 Lessons for the 21st Century (Spiegel and Grau 2018)
Online resources (visited April 2021)
Artificial Intelligence News. https://artificialintelligence-news.com/
MIT Technology Review. https://www.technologyreview.com/
State of AI Conference. https://www.stateof.ai/
Wait But Why. The AI Revolution (Part 1): The Road to Superintelligence. https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
Wait But Why. The AI Revolution (Part 2): Immortality and Extinction. https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
Wikipedia. Entries on "Artificial intelligence", "Automation", "Biotechnology", "Camera", "Cognitive science", "Computer", "Computer science", "CRISPR", "CRISPR gene editing", "Data", "Digital Revolution", "Digitization", "Genetically modified organism", "Information and communications technology", "Information technology", "Internet", "Life-extension", "Machine learning", "Nanotechnology", "Nanorobotics", "Radio", "Robotics", "Satellite", "Self-driving car", "Smartphone", "Telegraph", "Telephone", "Television", "Turing machine".