Laura DeNardis, “The Internet in Everything”

Once primarily a communication network, the Internet can now link a variety of physical devices and everyday objects, from cars and appliances to crucial medical equipment. Known as “the Internet of Things,” this system blurs distinctions between the virtual and the real, and, as DeNardis argues in this groundbreaking study, confers something tantamount to political power on whoever controls it. Showing how countries can use this cyber infrastructure to reach across physical boundaries, DeNardis, a professor in American University’s School of Communication and one of Slate’s 2016 Most Influential People in the Internet, lays out the threats it poses and offers policy prescriptions to protect our future. DeNardis is in conversation with Shane Harris, national security reporter for The Washington Post.

Moderator: I thought I’d wrap up, but I see Senator Pressler getting up. Did you have a question, Senator?

Senator Pressler: Thank you for a very wonderful book. A lot of this is much more serious in local politics. For example, someone is running for the school board in a little town in Missouri, or in South Dakota, or someplace, and something appears on the internet, or Facebook, or especially YouTube, that’s a half-truth about them. Then it’s magnified, and there’s no local media to go to to get a correction, no way to get it taken down, and nobody pays much attention to it except the people who are targeted with it. What can we do about getting people to run for the school board, or for county commissioner, in this atmosphere?

Laura DeNardis: Well, first let me say thank you for authoring the Telecommunications Act of 1996.

[Applause]

That was really a major moment for the way the Internet has unfolded, intermediary liability and things like that. But it’s really interesting that you are asking a question that is local: how do we get people interested in running for school boards around this particular issue? I think the way this is happening is in the most personal realms. I’ll answer it this way. People ask me, what are the potential harms of the Internet of Things, especially around young people? What’s the big deal? Well, you’ve probably heard of the hacker who screamed at a baby through a baby monitor, and the parents were horrified to come in and find that someone was screaming at and monitoring their baby. Or the discriminatory practices, around insurance, around employment, around racial issues, in how data from the Internet of Things is being used. These personal things that happen, even though no catastrophic event has occurred, are, I think, what will motivate people, with the help of the media exposing them. I feel that just the educational awareness, not only of the future risks but of the situation we find ourselves in now, will motivate people to get involved, and starting local is definitely part of that. But whether it’s local, whether it’s a big sweeping U.S. measure like the Telecommunications Act of 1996, whether it’s action in transnational organizations like the Internet Corporation for Assigned Names and Numbers or in standard-setting organizations, or whether it’s at the UN level, I think the big takeaway is that we have to, as a society, view cybersecurity as the great human rights issue of our time.

Moderator: And frankly, a big part of educating people is writing books. Everyone here has been enlightened by this conversation with you, and we’re grateful. You’re doing a public service by explaining these important things to people in a way that they can understand, and making these issues not so intimidating and overwhelming. I think everyone can agree why Laura is so highly regarded as someone who is shaping the Internet, and I think probably for good. So please, let’s give her a round of applause.

[Applause]

Thank you all for being such a great audience. Laura will be signing books if you’d like to talk to her more. Thanks for coming.

Big Tech’s Harvest of Sorrow?

At the same time that science and technology have vastly improved human lives, they have also given certain visionaries the means to transform entire societies from above. Ominously, what was true of Soviet central planners is true of Big Tech today: namely, the assumption that society can be improved through pure “rationality.”

CAMBRIDGE – Digital technology has transformed how we communicate, commute, shop, learn, and entertain ourselves. Soon enough, technologies such as artificial intelligence (AI), Big Data, and the Internet of Things (IoT) could remake health care, energy, transportation, agriculture, the public sector, the natural environment, and even our minds and bodies.

Applying science to social problems has brought huge dividends in the past. Long before the invention of the silicon chip, medical and technological innovations had already made our lives far more comfortable – and longer. But history is also replete with disasters caused by the power of science and the zeal to improve the human condition.

For example, efforts to boost agricultural yields through scientific or technological augmentation in the context of collectivization in the Soviet Union or Tanzania backfired spectacularly. Sometimes, plans to remake cities through modern urban planning all but destroyed them. The political scientist James Scott has dubbed such efforts to transform others’ lives through science instances of “high modernism.”

An ideology as dangerous as it is dogmatically overconfident, high modernism refuses to recognize that many human practices and behaviors have an inherent logic that is adapted to the complex environment in which they have evolved. When high modernists dismiss such practices in order to institute a more scientific and rational approach, they almost always fail.

Frontier technologies such as AI, Big Data, and IoT are often presented as panaceas for optimizing work, recreation, communication, and health care. The conceit is that we have little to learn from ordinary people and the adaptations they have developed within different social contexts.

The problem is that an unconditional belief that “AI can do everything better,” to take one example, creates a power imbalance between those developing AI technologies and those whose lives will be transformed by them. The latter essentially have no say in how these applications will be designed and deployed.

The current problems afflicting social media are a perfect example of what can happen when uniform rules are imposed with no regard for social context and evolved behaviors. The rich and variegated patterns of communication that exist off-line have been replaced by scripted, standardized, and limited modes of communication on platforms such as Facebook and Twitter. As a result, the nuances of face-to-face communication, and of news mediated by trusted outlets, have been obliterated. Efforts to “connect the world” with technology have created a morass of propaganda, disinformation, hate speech, and bullying.

But this characteristically high-modernist path is not preordained. Instead of ignoring social context, those developing new technologies could actually learn something from the experiences and concerns of real people. The technologies themselves could be adaptive rather than hubristic, designed to empower society rather than silence it.

Two forces are likely to push new technologies in this direction. The first is the market, which may act as a barrier against misguided top-down schemes. Once Soviet planners decided to collectivize agriculture, Ukrainian villagers could do little to stop them. Mass starvation ensued. Not so with today’s digital technologies, the success of which will depend on decisions made by billions of consumers and millions of businesses around the world (with the possible exception of those in China).

That said, the power of the market constraint should not be exaggerated. There is no guarantee that the market will select the right technologies for widespread adoption, nor is there any guarantee that it will internalize the negative effects of some new applications. The fact that Facebook exists and collects information about its 2.5 billion active users in a market environment does not mean we can trust how it will use that data. The market certainly doesn’t guarantee that there won’t be unforeseen consequences from Facebook’s business model and underlying technologies.

For the market constraint to work, it must be bolstered by a second, more powerful check: democratic politics. Every state has a proper role to play in regulating economic activity and the use and spread of new technologies. Democratic politics often drives the demand for such regulation. It is also the best defense against the capture of state policies by rent-seeking businesses attempting to raise their market shares or profits.

Democracy also provides the best mechanism for airing diverse viewpoints and organizing resistance to costly or dangerous high-modernist schemes. By speaking out, we can slow down or even prevent the most pernicious applications of surveillance, monitoring, and digital manipulation. A democratic voice is precisely what was denied to Ukrainian and Tanzanian villagers confronted with collectivization schemes.

But regular elections are not sufficient to prevent Big Tech from creating a high-modernist nightmare. Insofar as new technologies can thwart free speech and political compromise and deepen concentrations of power in government or the private sector, they can frustrate the workings of democratic politics itself, creating a vicious circle. If the tech world chooses the high-modernist path, it may ultimately damage our only reliable defense against its hubris: democratic oversight of how new technologies are developed and deployed. We as consumers, workers, and citizens should all be more cognizant of the threat, for we are the only ones who can stop it.

Historically, high-modernist schemes have been most damaging in the hands of an authoritarian state seeking to transform a prostrate, weak society. In the case of Soviet collectivization, the state’s authoritarianism stemmed from the Communist Party’s self-proclaimed “leading role,” and the state pursued its schemes in the absence of any organizations that could effectively resist them or protect the peasants crushed by them.

Yet authoritarianism is not solely the preserve of states. It can also originate from any claim to unbridled superior knowledge or ability. Consider contemporary efforts by corporations, entrepreneurs, and others who want to improve our world through digital technologies. Recent innovations have vastly increased productivity in manufacturing, improved communication, and enriched the lives of billions of people. But they could easily devolve into a high-modernist fiasco.