12 March 2020
Technology lights up our rooms, keeps our homes warm, takes us quickly from A to B and brings the world into our smartphones. And yet, it often also has serious consequences for both people and the environment. #explore spoke to Armin Grunwald, director of the Institute for Technology Assessment and Systems Analysis in Karlsruhe, about how to divine the possible impact of technologies in advance and why we often meet technical developments with hopes or fears.
#explore: Mr Grunwald, what are the essential aims of assessing the consequences of technology?
Armin Grunwald: The phrase “assessing the consequences of technology” is a bit clunky, but it basically describes what we do: we assess the consequences of technology – and that means consequences which don’t even exist yet. If they already existed, we’d be able to measure them, for example by using empirical tools to identify changes in the environment or the labour market. This is why we don’t try to predict what will happen in the future. Instead, we describe the kinds of “futures” that are possible, so that society, politicians and the economic sector can talk to each other about them now: is this something we want, or not? What can we do to avoid risks and amplify opportunities?
Who are your investigations aimed at: politicians or the public at large?
As a scientific institute, one of the key addressees of our work is always the scientific community itself. But the consultancy work associated with this is very wide-ranging: for us in the Institute, political consultancy work is our highest priority – prominently represented by our work with the German parliament over the last 30 years in the office for assessing the consequences of technology. But we also work with a lot of ministries and the EU Commission. In the past 15 years, we’ve also established and developed a social consultancy branch. Our job here is to encourage citizens to form opinions and to discuss questions and possible solutions with them. In this process, we also work at the nuts-and-bolts level: in a district of Karlsruhe, for instance, we’ve set up a real-world laboratory. Here, we team up with locals to run projects on increasing sustainability, for instance in transport. We also offer advice on the technological developments pursued by researchers at the Karlsruhe Institute of Technology (KIT) or by third parties. The aim here is to ensure that technological developments are guided in as beneficial a direction as possible, right from the outset, so that people don’t have to battle retrospectively with consequences which could have been prevented.
The ITAS at Karlstrasse 11 in Karlsruhe. (© Bernardo Cienfuegos/ITAS)
Almost no-one would have anticipated, for example, how quickly the Internet would change our daily lives. So how is it possible at all to assess the possible consequences of a given technology before it’s even been developed and disseminated?
This varies enormously and depends to a critical extent on the maturity and context of the technology in question. If, for example, someone is developing a power station which might not go on stream for another ten years, we can already assess very accurately its emission characteristics, the CO2 avoidance costs it will incur and its competition with other projects. This is because we know the current energy system so well. But if we’re dealing with a system which is just starting to boom and develops its own internal dynamic within a short space of time, this approach quickly comes up against its limitations. In such a case, all we can do is try to get some idea of the consequences, both positive and negative, as early as possible. At the start of the millennium, we prepared a study for the German parliament on the subject of political communication on the Internet. At that time, of course, we weren’t in a position to predict shitstorms, hate speech and fake news. But, even then, we expressed our view that the high utopian hopes that the Internet would automatically lead to more democracy should be treated with scepticism.

A special case is the hype around particular technologies which, depending on whom you’re talking to, will either save the world or bring about its downfall. Twenty years ago, this was nanotechnology, later succeeded by the idea of boosting human performance by means of so-called human enhancement; these days, it’s Artificial Intelligence. When it comes to stories about the future that range from the paradisiacal to the apocalyptic, we’re no longer concerned with the question of who is right and what that says about how things will be in 30 years. What we ask instead is what leads people to make such claims – whether they’re plausible, unfounded or just plain stupid – and what fears, worries, diagnoses, perhaps actual knowledge or even vested interests lie behind them. In a further step, it’s about establishing which decisions are in the pipeline, for instance to manage the development of AI right now. Which security standards, redundancies and ethical guidelines do we need to prevent certain systems from getting out of hand?
"When it comes to stories about the future that range from the paradisiacal to the apocalyptic, we’re no longer concerned with the question of who is right and what that says about how things will be in 30 years."
Why do we tend to impose our expectations, be they positive or negative, on technology?
Negative expectations result from historical experience. What we encounter time and again is that technology either doesn’t deliver what was promised or gives rise to unintended consequences. Climate change, for instance, arises not from a defective system but from one that is perfectly functional – and which, stupidly enough, also happens to produce these serious environmental consequences. On the other hand, we have idealised notions of flawlessly functioning technology; this applies especially to people who don’t know much about technology. So then, if our car won’t start – which hardly ever happens these days – we get cross. And when we see a robot bus bumping up against its own limitations, we feel disappointed. This curious ambivalence in our relationship with technology always culminates publicly in the question of whether technology is a curse or a blessing. That’s an utterly absurd question, as technology is usually both.
Do we simply know too little to really be able to assess technologies?
Well, it goes without saying that not every citizen can offer a detailed assessment of questions relating to the energy transition or digitalisation. But that is also completely unnecessary. For this purpose, society has developed a division of labour between experts, supervisory authorities and the public. What holds it all together is the trust citizens place in the bodies in question. Now, some experts expect people to believe them simply because they are experts. That is far too simple a view: this trust must, of course, be earned. It’s for this reason that our studies for the German parliament include commentary reports: the findings of one expert are reviewed by another expert with a different perspective, so that they can be placed in a more nuanced context.
Exactly how do you proceed when you want to assess the possible consequences of self-driving cars, for example?
We always start by drafting a state-of-the-art report on the technology in question. To do this, we collaborate with technical experts from the relevant discipline, who might be IT specialists or the automotive technology people here at KIT. We then go on to find out what the researchers are saying about the future: What’s in the making in the labs? When are the technologies expected to be mature enough to go to market? We know, of course, that self-assessments always err on the side of optimism: prototypes need to be developed and approval procedures completed, and all this takes time. Here, too, we involve experts who can assess the state of development. In a way, then, we act among other things as a kind of information broker, bringing together different levels of knowledge. Our true expertise lies in assessing this knowledge about the future in terms of its validity, limits and possible one-sidedness by testing its robustness using particular methods. On this basis, we get to the big question of what all this means for us today. Is there, for instance, a need for more regulation of autonomous vehicles? Are there critical gaps in the research that need closing to make the systems both safe and socially acceptable? But what we need if we’re going to ask these basic questions is a sound understanding of how the coevolution of technology and society works, where the risks and sensitive areas lie, and which factors can promote or inhibit innovation processes.
What other key issues are you currently working on?
Well, of course, there’s the whole area of digitalisation, ranging from machine learning, cyber security and privacy in the use of big data to nursing robots and autonomous vehicles. Other big issues for us are the changes in the jobs market and the energy transition. We’re also working particularly intensively in the medical and genetic fields – especially on the new technical possibilities for changing the DNA building blocks in the genome to “improve” animals and plants, for example. What they call gene editing is now also coming closer to use in human beings. We’ve already seen the case of the Chinese doctor He Jiankui, who is alleged to have genetically modified twins.
"What they call gene editing is coming closer to use in human beings."
You first studied physics and maths before moving on to philosophy. As someone who assesses the consequences of technology, do you need a basic understanding of technical and scientific issues as well as of ethical questions?
My predecessor was an economist, but my successor could be an engineer. The fact that I’m both head of the Institute and a philosopher is just a matter of chance. But it’s certainly true that a basic attitude of openness and broad-based knowledge of a whole range of areas of life are important conditions for assessing the consequences of technology. I didn’t plan to get involved in this line of work – I just gradually moved towards it. Many of my colleagues in Karlsruhe and Berlin have similarly convoluted educational paths behind them.
Time travel stories in science fiction mostly take a deterministic view of the future: if you change one of the links in the chain of causality, the entire story will change accordingly. How would you describe your view of the future?
You often come across deterministic views, especially when it comes to digitalisation: the idea that we need to get ourselves ready for digitalisation, as if it were an autonomous process. But how digitalisation happens will be determined by people, and people could also choose to do it differently. In this sense, the future is a blank sheet that we’re just starting to fill in. And our job as assessors of the consequences of technology is to draw lines of possibility on this sheet that lead into the future. We have something to say about the conditions, implications, costs, opportunities and risks of these possibilities. We don’t say that possibility 3 b) is the best way, for instance, to advance the energy transition. After all, there can be no such thing as an objectively “optimal” solution in a democratic society: it will look different to parties, groups and people with different values, interests and positions. So our mandate is to offer and contextualise knowledge. Decisions about which of these possibilities might be best are a matter for politicians and society in general.
ABOUT
© KIT
Armin Grunwald is director of the Institute for Technology Assessment and Systems Analysis (ITAS) at the Karlsruhe Institute of Technology (KIT). As head of the German parliament’s office for assessing the consequences of technology, the physicist, mathematician and philosopher advises politicians on the possible consequences of technologies.