Once primarily a communication network, the Internet can now link a variety of physical devices and everyday objects, from cars and appliances to crucial medical equipment. Known as “the Internet of Things,” this system blurs distinctions between the virtual and the real, and, as DeNardis argues in this groundbreaking study, confers something tantamount to political power on whoever controls it. Showing how countries can use this cyber infrastructure to reach across physical boundaries, DeNardis, a professor in American University’s School of Communication and one of Slate’s 2016 Most Influential People in the Internet, lays out the threats it poses and offers policy prescriptions to protect our future. DeNardis is in conversation with Shane Harris, national security reporter for The Washington Post.
Moderator: I thought I’d say, okay, I see Senator Pressler getting up. Did you have a question, Senator?

Sen. Pressler: Thank you for a very wonderful book. A lot of this, in local politics, is much more serious. For example, if someone is running for the school board in a little town in Missouri or South Dakota, something appears on the internet, or Facebook, or especially YouTube, that’s a half-truth about them, and then it’s magnified. There’s no local media to go to to get a correction, there’s no way to get it taken down, and nobody pays much attention to it except the people who are targeted with it. What can we do about getting people to run for the school board, or for county commissioner, in this atmosphere?

DeNardis: Well, first let me say thank you for authoring the Telecom Act of 1996. [Applause] That was really major for the way that the Internet has unfolded, with intermediary liability and things like that. But it’s really interesting that you are asking a question that is local: how do we get people interested in running for school boards around this particular issue? I think the way this is happening is in the most personal realms. I’ll answer it this way. People ask me, what are the potential harms of the Internet of Things, especially around young people? What is the big deal? Well, you have probably heard of the hacker who screamed at a baby through a baby monitor, and the parents were horrified to come in and find that someone was screaming at and monitoring their baby. Or the discriminatory practices around insurance, around employment, around racial issues, in how data from the Internet of Things is used. These personal harms are happening even though no catastrophic event has occurred. These personal things, I think, with the help of the media exposing them, along with just the educational awareness of not only the future risks but the situation we find ourselves in now, will motivate people to get involved, and starting local is definitely part of that. But whether it’s local, whether it’s a big, sweeping U.S. measure like the Telecom Act of 1996, whether it’s action in transnational organizations like the Internet Corporation for Assigned Names and Numbers or in standard-setting organizations, or whether it’s at the UN level, I think the big takeaway is that we have to, as a society, view cybersecurity as the great human rights issue of our time.

Moderator: And frankly, a big part of educating people is writing books, and everyone here is enlightened by this conversation with you. We’re grateful; you’re doing a public service by explaining these important things to people in a way that they can understand, and making these issues not so intimidating and overwhelming. I think everyone can agree why Laura is so highly regarded as someone who is shaping the Internet, and, I think, probably for good. So please, let’s give her a round of applause. [Applause] Thank you all for being such a great audience. Laura will be signing books if you’d like to talk to her more. Thanks for coming.
Big Tech’s Harvest of Sorrow?
At the same time that science and technology have vastly improved human lives, they have also given certain visionaries the means to transform entire societies from above. Ominously, what was true of Soviet central planners is true of Big Tech today: namely, the assumption that society can be improved through pure “rationality.”

CAMBRIDGE – Digital technology has transformed how we communicate, commute, shop, learn, and entertain ourselves. Soon enough, technologies such as artificial intelligence (AI), Big Data, and the Internet of Things (IoT) could remake health care, energy, transportation, agriculture, the public sector, the natural environment, and even our minds and bodies.
Applying science to social problems has brought huge dividends in the past. Long before the invention of the silicon chip, medical and technological innovations had already made our lives far more comfortable – and longer. But history is also replete with disasters caused by the power of science and the zeal to improve the human condition.
For example, efforts to boost agricultural yields through scientific or technological augmentation in the context of collectivization in the Soviet Union or Tanzania backfired spectacularly. Sometimes, plans to remake cities through modern urban planning all but destroyed them. The political scientist James Scott has dubbed such efforts to transform others’ lives through science instances of “high modernism.”
An ideology as dangerous as it is dogmatically overconfident, high modernism refuses to recognize that many human practices and behaviors have an inherent logic that is adapted to the complex environment in which they have evolved. When high modernists dismiss such practices in order to institute a more scientific and rational approach, they almost always fail.
Frontier technologies such as AI, Big Data, and IoT are often presented as panaceas for optimizing work, recreation, communication, and health care. The conceit is that we have little to learn from ordinary people and the adaptations they have developed within different social contexts.
The problem is that an unconditional belief that “AI can do everything better,” to take one example, creates a power imbalance between those developing AI technologies and those whose lives will be transformed by them. The latter essentially have no say in how these applications will be designed and deployed.
The current problems afflicting social media are a perfect example of what can happen when uniform rules are imposed with no regard for social context and evolved behaviors. The rich and variegated patterns of communication that exist off-line have been replaced by scripted, standardized, and limited modes of communication on platforms such as Facebook and Twitter. As a result, the nuances of face-to-face communication, and of news mediated by trusted outlets, have been obliterated. Efforts to “connect the world” with technology have created a morass of propaganda, disinformation, hate speech, and bullying.
But this characteristically high-modernist path is not preordained. Instead of ignoring social context, those developing new technologies could actually learn something from the experiences and concerns of real people. The technologies themselves could be adaptive rather than hubristic, designed to empower society rather than silence it.
Two forces are likely to push new technologies in this direction. The first is the market, which may act as a barrier against misguided top-down schemes. Once Soviet planners decided to collectivize agriculture, Ukrainian villagers could do little to stop them. Mass starvation ensued. Not so with today’s digital technologies, the success of which will depend on decisions made by billions of consumers and millions of businesses around the world (with the possible exception of those in China).
That said, the power of the market constraint should not be exaggerated. There is no guarantee that the market will select the right technologies for widespread adoption, nor will it internalize the negative effects of some new applications. The fact that Facebook exists and collects information about its 2.5 billion active users in a market environment does not mean we can trust how it will use that data. The market certainly doesn’t guarantee that there won’t be unforeseen consequences from Facebook’s business model and underlying technologies.
For the market constraint to work, it must be bolstered by a second, more powerful check: democratic politics. Every state has a proper role to play in regulating economic activity and the use and spread of new technologies. Democratic politics often drives the demand for such regulation. It is also the best defense against the capture of state policies by rent-seeking businesses attempting to raise their market shares or profits.
Democracy also provides the best mechanism for airing diverse viewpoints and organizing resistance to costly or dangerous high-modernist schemes. By speaking out, we can slow down or even prevent the most pernicious applications of surveillance, monitoring, and digital manipulation. A democratic voice is precisely what was denied to Ukrainian and Tanzanian villagers confronted with collectivization schemes.
But regular elections are not sufficient to prevent Big Tech from creating a high-modernist nightmare. Insofar as new technologies can thwart free speech and political compromise and deepen concentrations of power in government or the private sector, they can frustrate the workings of democratic politics itself, creating a vicious circle. If the tech world chooses the high-modernist path, it may ultimately damage our only reliable defense against its hubris: democratic oversight of how new technologies are developed and deployed. We as consumers, workers, and citizens should all be more cognizant of the threat, for we are the only ones who can stop it.
Historically, high-modernist schemes have been most damaging in the hands of an authoritarian state seeking to transform a prostrate, weak society. In the case of Soviet collectivization, state authoritarianism originated from the self-proclaimed “leading role” of the Communist Party, and pursued its schemes in the absence of any organizations that could effectively resist them or provide protection to peasants crushed by them.
Yet authoritarianism is not solely the preserve of states. It can also originate from any claim to unbridled superior knowledge or ability. Consider contemporary efforts by corporations, entrepreneurs, and others who want to improve our world through digital technologies. Recent innovations have vastly increased productivity in manufacturing, improved communication, and enriched the lives of billions of people. But they could easily devolve into a high-modernist fiasco.