Ted Nelson’s Philosophy of Hypertext

Who invented the Internet?

Every American history buff knows who invented the airplane, the telephone, and the light bulb. Americans like to think that their culture incentivizes and celebrates the inventors and creative geniuses who have propelled the world forward. Yet when it comes to the piece of technology that has most influenced the world in the past 20 years, few Americans can answer the basic question: “Who invented the internet?”

Oh, some people will sarcastically say “I thought Al Gore invented the internet,” not realizing that the political smear they are repeating obscures Gore’s very real role in converting the military’s ARPANET into the publicly available system we today call the internet.

It is said that success has a hundred fathers, but failure is an orphan.  The history of the internet is a variation on this theme – the internet’s size and complexity required the work of many, but the one man who most foresaw its potential now regards its design as a failure.  This is not to say that the internet we have today is not impressive or powerful, but as Ted Nelson writes in his 2002 Ph.D. thesis “Philosophy of Hypertext,” today’s internet is tied to bad philosophy – backward ways of thinking that limit its potential.

Paul Saffo: Interview at Ted Nelson Book Launch

Who is Ted Nelson?

Ted Nelson is the person who most fully envisioned what we today think of as the internet.  His insight was the result of an epiphany he had while a graduate student at Harvard in 1960.  This epiphany stemmed partly from his individual genius, and partly from his background.  I leave it as an open invitation to scholars to explain the full source of Nelson’s genius; but what is easy to see is that Ted’s background as the son of Emmy Award-winning television director Ralph Nelson and Academy Award-winning actress Celeste Holm gave him an advantage in envisioning our electronic future.  When he was young, Nelson had the opportunity to shadow his father on the NBC and CBS television sets for shows that his father directed.1  The camera and television monitor were part of his growing up, and he directed his own 33-minute film while a senior at Swarthmore College. When he arrived at Harvard for graduate school, it seemed obvious to him that the computer monitors he saw in the Harvard computer lab were not just textual displays for computer output, but could evolve to display movies and all manner of electronic media.

Nelson would go on to coin the word “hypertext”, which he says came to him in 1962 as a way to describe electronic text that can be read non-sequentially using hyperlinks — those typically blue links on the internet that a user can click to access a connected page.  In 1974, Nelson self-published two classic books in a single volume — Computer Lib/Dream Machines — which had an enormous impact on the early computer industry.  Before Steve Jobs and Bill Gates started Apple and Microsoft, Nelson argued that the future of computers lay in personal computers (not mainframes) and an interconnected network of computers that would convey a “docuverse” of hypertext media.  He called this system “Xanadu®”, after the capital of Kublai Khan’s dynasty in Samuel Taylor Coleridge‘s poem about a vision in a dream, titled Kubla Khan.  To Nelson, the Xanadu® Project represents the correct way to do hypertext, even as the web has eclipsed it in the public mind.

Where did the Internet/Web come from?

Those readers with some technical knowledge will notice that up until this point, I’ve been conflating two terms — the “internet”, and the “world wide web”.  You see, when most people think about the “internet”, they usually mean the “world wide web” — the part of the internet that they can access through a web browser like Internet Explorer, Safari, Firefox, or Chrome.  To get a sense of what the internet is, consider how it developed and what it was before Tim Berners-Lee released the web in 1991.

ARPANET history: December 1969

The internet started in the late 1960s as ARPANET, a military program to design a way of communicating that could survive a nuclear war.  A series of protocols was invented to allow messages to be broken into smaller chunks (packets) and sent across a network.  In the event that one link in the network was destroyed, the system would find a way to route around the outage.  While this was an important breakthrough, it was innovation at the very lowest level, for it did not say anything about how the contents of the messages would be structured. This changed with Tim Berners-Lee’s invention of the web (published on the internet in the summer of 1991).

Tim Berners-Lee was a physicist at CERN, the European Particle Physics Laboratory, who wanted a simple way to publish reports to share with other scientists.  His goal was not to lay the hypertext foundation of the internet; he wanted to get something done, so he took what Ted Nelson had proposed for a hypertext system and radically simplified it to create HTML (HyperText Markup Language — the language web pages are written in).  “Simplified” sounds good.  Nelson might say that the web “dumbed it down” to the point of triviality.

Hierarchical Folders in Windows 98

Before the web, internet servers were (and still are) organized as hierarchical sets of folders, similar to what Windows and Mac users see when they browse their own hard drives. As anyone who has ever tried to locate a file in a hierarchy many folders deep knows, the organization of folders in a hierarchy can be unintuitive, even arbitrary. As a demonstration of how abysmal hierarchy can be as an organizing principle, consider that it is often easier to find an internet document on Google than it is to find a file within a series of folders on a corporate computer network, because the way the hierarchical folders are organized gets in the way.2

HTML provides a document format for the web so that pages can be linked together, without having to manually browse through sets of folders. In essence, what Tim Berners-Lee did was to build a web of connections on top of an existing hierarchical foundation.
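The contrast between a hierarchy and a web of links can be sketched in a few lines of code. This is my own illustration, not how any real server works; the folder names, file name, and page names are all hypothetical. A file in a hierarchy has exactly one path, while a page in a link graph can be reached from many directions:

```python
# A hierarchy: each document lives at exactly one path of folder names.
hierarchy = {
    "reports": {
        "1991": {"www-proposal.txt": "Information Management: A Proposal"},
    },
}

def lookup(tree, path):
    """Follow one fixed path of folder names down to a document."""
    node = tree
    for part in path:
        node = node[part]  # a wrong guess at any level raises KeyError
    return node

# A web of links layered on top: many pages may point at the same document.
links = {
    "home.html": ["www-proposal.txt", "faq.html"],
    "faq.html": ["www-proposal.txt"],
}

# The hierarchy demands you know the one correct path...
print(lookup(hierarchy, ["reports", "1991", "www-proposal.txt"]))
# ...while the link graph lets you ask: which pages reach this document?
print([page for page, out in links.items() if "www-proposal.txt" in out])
```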

Ted’s 3 Epiphanies


1. To understand Nelson’s innovation, it is interesting to see how a unique and seemingly unlikely combination of events shaped the course of his life, as well as all three of his epiphanies, for these epiphanies shape his philosophy of hypertext.  The first epiphany came when he was very young — perhaps five years old.3  Nelson was in a rowboat with his grandparents when he paused to observe how the water was flowing around his hand as he trailed it in the water.  Though many other boys his age have also trailed their hands in the water, few have paused to think about all the individual atoms circulating around their fingers.  Nelson experienced a moment of “awe” at the amazing complexity of the world, where even the smallest interaction between a boy’s hand and water is indescribably complex. Nelson would carry this concept of interconnected complexity with him and develop it further when he went to college.

2. Nelson’s second epiphany occurred while in his third year at Swarthmore College, writing a paper for his philosophy class.  Nelson concedes that the paper was somewhat inspired and sweeping, but not fully baked. What made it so ambitious was how it broke from traditional Aristotelean philosophy.

One of Nelson’s most important conceptual steps was to argue that the use of categories (and later by implication, the use of “folders”) was an oversimplification.  Items could be placed in more than one category and categories could intersect with each other. Furthermore, in documents there may be no beginning and no end, but linear conventions such as “beginning” and “end” are forced upon us by the medium of paper.

Nelson would describe the writing of his 1958 Schematics paper4, at the age of 21, this way:

The experience of writing it was one of the most intense I have ever experienced, in an exalted state of excitement and inspiration.

The same epiphany I had experienced at the age of five, of the immensity and indescribability of the world, came to me again, but this time with regard to realizing how models and language and thought worked, a way of approaching the great complexity I had envisioned long before.

Ted Nelson's Kite-shaped Magazine, titled: 'Nothing'

Ted’s Kite-Shaped Magazine: “Nothing”
(larger version)

On his own initiative, Nelson would go on to complete a number of projects demonstrating his creativity, his versatility, and an unusual ability to conceptualize design in multiple spatial dimensions.  While at Swarthmore, Ted created a kite-shaped magazine that had to be rotated to be read5; he wrote the world’s first rock musical6; and in his senior year he directed a 33-minute film called “The Epiphany of Slocum Furlow”, in which he was able to visualize how the film would fit together when edited, a skill that was completely foreign to his fellow students. Though it is unclear whether he ever got college credit for all his projects, the experience of directing his own film gave Nelson the belief that he had the innate talent to direct movies, a skill related to, but different from, his father’s career in network television.

But by Nelson’s own admission:

My grades were fairly poor. I had gone for breadth, not depth, and I thought it was my own business to judge my achievement, not anybody else’s. What mattered to me was studying what I chose, to the degree I chose, and pursuing the excitement of new ideas and projects.


How Ted was at the Right Place at the Right Time

What is remarkable then is the path Nelson next took to put him in position to be admitted to Harvard, where he would work with computers in the fall of 1960.

In June of 1959, Nelson met with Ralph (his biological father) and was offered a job as an actor, although Ralph made the offer in such a critical way as to make Ted feel attacked. Ted turned down the offer and was forced to find summer work on his own as a typist in a securities cage on Wall Street.

In the fall of 1959, Nelson began his first year of grad school at the University of Chicago, studying sociology. In Possiplex, Nelson recounts how awful the year was:

I constantly felt my father’s curse like a sunlamp close to the back of my neck. I thought of suicide all the time, but I knew what that would do to my grandparents, and so I kept on.

Miller Analogies example

Bach : Composing :: Monet :

a. painting
b. composing
c. writing
d. orating

Chicago’s curriculum shunned theory, which was Nelson’s interest, so he decided he needed to transfer somewhere else: “I took a test called the Miller Analogies, and, amazingly, that got me a fellowship to Harvard for the following year.”8

3. At Harvard, at the age of 23, Nelson took a course called “Computers for the Social Sciences” and experienced his third epiphany:

The explosive moment came when I saw that you could hook graphical displays to computers. At once- over a few weeks- I saw that this would be the future of humanity: working at screens, able to read and write and publish from ever-expanding new electronic repositories.

Ted would take the concept of indescribable interconnectivity and complexity and try to develop a piece of software that embodied this philosophy.

Ted’s Philosophy: “Intertwingularity”

In 2013, when Ted’s friend Douglas Engelbart died, Engelbart received press attention for, among other things, having invented the computer mouse. Paul Saffo says9 that Engelbart’s and Nelson’s contributions are much broader than people appreciate: like the proverbial elephant, the public grasps one part and mistakenly thinks it has understood the whole.

Ted’s contribution of hypertext can’t be understood without understanding his philosophy of “intertwingularity“.

The world is not hierarchical; it’s intertwingled.


View “Intertwingled” Conference at Chapman University

In 1960, Ted saw an opportunity to rethink the role of paper media, because for him paper was an obstacle to expressing his thoughts. Unlike traditional paper-based writing, Nelson believes that ideas are not naturally organized as a linear narrative under a series of hierarchical headings or “folders”. Rather, ideas are “intertwingled”, and any description of a topic will follow a web of connections between related topics.

As an example of intertwingularity in “Philosophy of Hypertext”, Nelson cites the “Battle of the Alamo”, listing three pages of interconnected items. If you’re reading this on the web, you can explore how hypertext enables the linked entities to assert their own connections. (Links are almost expected now, but they were a foreign concept as recently as 20 years ago.)

  • The Battle of the Alamo was fought on 6 March 1836, by about 189 Americans defending a compound in San Antonio, Texas. All the defenders were massacred by a force of Mexican soldiers.
  • The defenders of the Alamo are great heroes in American history, and included three famous men- William Barret Travis (who was in command), Davy Crockett (woodsman and former Congressman), and Jim Bowie (for whom the big Bowie knife was named).

  • The Battle of the Alamo was possibly the pivotal event in the War of Texas Independence. By slowing down the movement of the Mexican army of cadets under Santa Anna, it gave the Texians time to prepare.
  • San Jacinto was near the city which is now called Houston, in honor of the general who won the battle.
  • The song most people remember as “There was something in the air that night, the stars were bright, Fernando” – a Mexican folk song made popular most recently by the group ABBA – was from the Mexican side of that war.

  • Davy Crockett, a folk hero in his own lifetime, became a folk hero again circa 1953, when the Disney movie “Davy Crockett” became a big hit, and millions of children wanted raccoon-skin hats.
  • During a period of exile, Santa Anna stayed on Staten Island in New York with a photographer named Adams. Adams saw Santa Anna chewing chicle rubber, became interested in its commercial possibilities, and after several unsuccessful experiments created chewing gum, which was wildly successful.
  • “Black Jack” chewing gum, created by Adams himself shortly after the Santa Anna stay, remained on sale for a hundred years.
  • Emily Morgan is revered as a treasured heroine of the state of Texas for her legendary role in defeating Santa Anna. According to the story, on the eve of the battle she seduced Santa Anna and next morning delayed his readiness- so that the battle for Texas independence was won with hardly a shot fired.
  • The song “Yellow Rose of Texas” is said to celebrate the achievement of Emily Morgan, who was “mulatto” (and thus, in the terminology of earlier days, “yellow”).

As you can see, these items don’t fit into a single paper-based linear narrative but flow in many directions. If you purchase a copy of “Philosophy of Hypertext”, you can also get the diagram Ted uses to conceptualize how the items are connected.10 In my excerpt, I’ve been able to add HTML hyperlinks, but in a paper version one would have to refer to each with a footnote.

As Ted says, paper is a prison and:

Footnotes are small initiatives to reach out of some arrangement of content, but they are extremely limited, like hands reaching out of jailhouse windows, constrained by a different plan.

Problems to be Solved

Xanadu Design:
2-way links, Parallel documents

Nelson saw a need to tackle several problems related to hypertext:

  • In Nelson’s design, all hypertext links are 2-way, meaning that the reader can see which pages link to the current document, not just the outgoing links from the current page
  • Documents change; and Ted was searching for a way to make it possible to view any version of a page
  • Ted wanted to be able to view the citing work in parallel with the original, so that he could compare them. On the web now, it is possible to open a new window, but on a long page the reader has to manually find the quoted location and manually compare the citation to the original.
  • Paper readers can annotate their own paper copy. Ted wants hypertext readers to be able to make annotations and third-party links as easily as paper marginal notes. The system should then allow readers to share their annotations and links.
  • In 1960, Ted owned copyrights for his earlier work, and he believed it essential that a hypertext system support a way for authors to be automatically paid for both their original publication and its reuse by others

Built-in Copyright from the Start

To the publishing industry, the idea of an internet with a built-in copyright meter must sound like a dream, but in fact concern for copyright was built into Nelson’s original vision for Xanadu®. Every letter of every word would be metered, and a royalty would be instantly paid to the copyright holder.11 As of this writing, the articles I’m citing are not available in Xanadu® format, but if they were, the price would be proportional to the price of a paperback book: if a book containing 80,000 words costs $16.00, a 100-word excerpt would cost $0.02. This price would be paid not by the writer making the excerpt, but by the reader whose computer receives a pointer to the cited text and downloads the actual original text.
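The arithmetic above can be sketched as a toy royalty meter. This is my own illustration of the pricing example, not Nelson’s actual Xanadu® mechanism; the function name and the rounding to whole cents are assumptions:

```python
def excerpt_price(book_price, book_words, excerpt_words):
    """Price an excerpt at the same per-word rate as the whole book."""
    per_word = book_price / book_words          # $16.00 / 80,000 = $0.0002/word
    return round(per_word * excerpt_words, 2)   # round to whole cents

# The article's example: a 100-word excerpt of an 80,000-word, $16.00 book.
print(excerpt_price(16.00, 80_000, 100))  # 0.02
```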

Comparing versions with a Diff Tool

Another important feature for Nelson was “versioning.” Nelson had thought about the publication process enough to work through all the problem areas. One potential problem area occurs when a work links to a source that then changes.

Suppose I write my original version of this article using a figure of 80,000 words. You link to my article and comment about the figure, and then I change the figure to 90,000. What should the reader see? On the web, the reader sees only the most recent version; the old version is overwritten.

Nelson expects the hypertext system to keep track of both versions, display the originally linked version, but allow the reader to find the most recent one. He also wants a world in which pages never go missing or get silently altered, a real problem for future historians, given that every year approximately 5% of all pages disappear to “linkrot”.
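The versioning behavior Nelson expects can be sketched in a few lines. This is a minimal illustration under my own assumptions (class and method names are invented), not Xanadu®’s real data structure: old versions are never overwritten, a link cites a specific version, and the latest version remains discoverable from it:

```python
class VersionedDocument:
    """Append-only document store: publishing never overwrites history."""

    def __init__(self):
        self.versions = []

    def publish(self, text):
        self.versions.append(text)
        return len(self.versions) - 1  # version number for links to cite

    def get(self, version=None):
        """Return the cited version, or the latest if none is given."""
        return self.versions[-1 if version is None else version]

doc = VersionedDocument()
v0 = doc.publish("a book of 80,000 words")   # the version another article links to
doc.publish("a book of 90,000 words")        # my later revision

print(doc.get(v0))   # the reader can still see what the link originally cited
print(doc.get())     # ...and can also find the most recent revision
```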

Design Approach: “Substruction”

In his “Philosophy of Hypertext”, Nelson talks about the concept of “substruction” — the process of finding unifying design patterns that bring together diverse goals to achieve a simpler design than what would otherwise be possible if these goals were pursued independently.

As an example of substruction, he cites the example of the early computer game “Pac-Man“. Two goals of the design were to create a “maze game” and a “chase game“. The question is how to fit them together into a seamless whole. The concept that the video game designer came up with was to create dots to be eaten, along with a special type of dot that allows the Pac-Man to switch modes and chase the ghosts. Starting from a blank slate, the way to unify the modes isn’t obvious, but in hindsight the special dot seems like a natural, or obvious element.

As he was designing his hypertext system, Nelson was committed to following good design, guided by heroes like Frank Lloyd Wright and Buckminster Fuller:

Following Bucky Fuller’s point of view, I believed that everything would be far better if only it could be redesigned completely.

What Nelson sought was a way to bring all these design goals together. An elegant way to:

  • Link Documents together using 2-way links
  • Keep track of different versions
  • Track copyright usage
  • Support annotations and third-party links as easily as paper marginal notes
  • Display quoted information side by side

Ted developed the basic design using a technique he called “transclusion”, along with an addressing scheme that allows any selection of text (or multimedia) to be specified from any version.
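The core idea of transclusion can be sketched as follows. This is a deliberately simplified illustration of the concept, not Xanadu®’s actual design; the source id, offsets, and sample text are all hypothetical. The key property is that a document copies nothing: it holds references into a store, so every quotation stays connected to its source:

```python
# An append-only store of source content, keyed by (hypothetical) version ids.
sources = {
    "origins:v1": "Hypertext means non-sequential writing with free user movement.",
}

# A document is a sequence of spans. A transcluded span names
# (source id, start offset, end offset) instead of copying the text.
article = [
    ("literal", "Nelson wrote that "),
    ("transclude", ("origins:v1", 0, 30)),
]

def render(spans, store):
    """Assemble the document, fetching transcluded text from the originals."""
    out = []
    for kind, payload in spans:
        if kind == "literal":
            out.append(payload)
        else:
            src, start, end = payload
            out.append(store[src][start:end])  # pulled live from the source
    return "".join(out)

print(render(article, sources))
```

Because the article stores only the reference, a viewer can always follow the span back to its origin, which is what makes visible connections and automatic royalty metering even conceivable.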

Implementing the design took decades of labor by various programmers, most notably Roger Gregory, who headed the effort up through 1988, when Ted made what he regards as his biggest mistake. Nelson had gone without financing for Project Xanadu® that entire time, until 1988, when Autodesk (makers of AutoCAD) decided to finance an effort to commercialize Ted’s Xanadu® system. Papers were signed and a team was formed with the goal of finishing a codebase largely written by Roger Gregory.

I was told on the phone — I forget by whom — that my good friend Roger Gregory, who was in charge at XOC down south in Palo Alto, was throwing things and acting crazy. I heard that ‘everybody was ready to leave,’ possibly quit within a day or so.

What Nelson failed to realize was that Roger was upset because the other programmers wanted to scrap all the existing code and start over from scratch. Roger’s goal had been to finish the existing code base and publish within a year. The decision to start again from scratch left the project in a vulnerable position four years later, when it was still incomplete and Autodesk, having run into financial trouble of its own, cut funding to Xanadu®.

The Web Appears

By 1991, when Tim Berners-Lee’s world wide web came out, Ted and the Xanadu® programmers were surprised at its popularity. Berners-Lee’s system had none of the advanced features that Ted believed were essential for a hypertext system. What’s more, it imitated paper under glass. There was no ability to take marginal notes. There was no version control. You could only follow links in one direction. You could not compare documents side by side. Finally, there was no copyright mechanism to compensate authors.

Alternate Model: The Movie Industry’s EDL Model

The vast majority of people using the world wide web consider its imitation of paper to be a success, not imagining that there could be any other way, but if one takes a look at the movie business, it is possible to see an alternate model at work.

In print, content is divided into pages, and whenever content is referenced, a copy is made of the original. Ted came from a movie background, so he instead developed a referential model in which producers would note the starting and ending locations of the clips they wanted to include in the final cut.12

Each movie is a selection of a series of clips from many takes. The producer selects the scenes, noting the starting and ending location, and the ________ translates those instructions into a movie by retrieving the selected frames. In a similar way, Nelson envisions Xanadu® allowing authors to transclude content – referencing text and multimedia from across the internet and combining them so that all the citations remain intact and all connections are made visible.
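The movie-industry model described above can be sketched as a toy edit decision list. This is my own simplified illustration: real EDL formats (such as CMX 3600, mentioned in footnote 12 as an industry development) record timecodes and reels, not Python tuples, and the take names and frame labels here are invented:

```python
# Source takes: each is just a sequence of frames (represented as labels).
takes = {
    "take1": ["t1f0", "t1f1", "t1f2", "t1f3"],
    "take2": ["t2f0", "t2f1", "t2f2"],
}

# The edit decision list: each entry notes a source and the start/end
# frames to include. No frames are copied; the list is pure reference.
edl = [
    ("take2", 0, 2),   # open with the first two frames of take2
    ("take1", 1, 4),   # then frames 1-3 of take1
]

def conform(edl, takes):
    """Assemble the final cut by retrieving the referenced frames."""
    cut = []
    for source, start, end in edl:
        cut.extend(takes[source][start:end])
    return cut

print(conform(edl, takes))
```

The final cut exists only as instructions over the originals, which is exactly the relationship Nelson wants between a hypertext document and the media it transcludes.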

Permanent Addresses for All Content

Street Address

This is possible because in Xanadu®, each letter of each word in each document has its own address, whereas on the web the only types of content that have their own address are pages (each page as a whole) and images.

By way of analogy, imagine if the postal system only used addresses composed of towns, states, and zip codes. The mail carrier in each town might be able to manually locate the recipient, but without the street name and number it wouldn’t be nearly as easy. Mail couldn’t be routed automatically to its final destination, and there would be ambiguity whenever multiple people shared the same name.

Tumbler Example

Chapters in each book
could be similarly numbered,
so that Chapter Five in Volume Three
would be represented as 3.5.

This could be extended arbitrarily,
so that 3.5.10.6 represents
Volume Three, Chapter Five,
Section Ten, Paragraph Six.

By using an addressing system called “Tumblers“, Nelson’s system is able to keep track of different versions, as well as allow any selection of text or multimedia to be cited at a granular level.
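The prefix property of such dotted addresses is easy to sketch. This is a rough illustration of the idea only: Nelson’s real tumblers also encode servers, users, documents, and versions, and the parsing below is my own invention:

```python
def parse(tumbler):
    """Turn a dotted address like '3.5.10.6' into a tuple of integers."""
    return tuple(int(part) for part in tumbler.split("."))

def contains(outer, inner):
    """An address contains another if it is a prefix of it:
    chapter 3.5 contains paragraph 3.5.10.6."""
    o, i = parse(outer), parse(inner)
    return i[:len(o)] == o

print(contains("3.5", "3.5.10.6"))  # True: the chapter contains the paragraph
print(contains("3.5", "3.6.1"))    # False: different chapter
```

Because containment is just prefix comparison, a system can resolve a citation to any granularity (volume, chapter, paragraph, or a single letter) with the same simple rule.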

I’ve been discussing the big picture, but if you dig into the technical details available in Ted’s book “Literary Machines”, you can see how, as with the Pac-Man design, Xanadu® is able to achieve multiple design goals through substruction.

What does Xanadu® look like?

Xanadu® has never been deployed to a public network, but there are demonstrations that run on a single computer. As a demo of what this looks like, consider Ted’s demonstration of how “Origins”, a piece by Moe Juste, interacts with several sources:

Getting web browsers like Chrome or Internet Explorer to support viewing multiple sources in parallel is a major challenge. Ted’s goal is to release his own editor and browser (called Possiplex). In the meantime, to demonstrate what Xanadu® could look like, one programmer, Nicholas Levin, was able to make a demo work in the browser. It loads slowly and has some special keyboard controls, but it does allow the reader to peruse a text in parallel with its sources.

Instructions on using the Xanadu Web Demo

  • To see the related source, click on it
  • To go back to the prior source, click on it

View Demo (Wait. It loads slowly)

Where do we go from here?

Conclusion: incomplete

Will the web stay hierarchical, paper-imitating, non-versioned, using hierarchical embedded markup forever .. (need to finish)

Are we ready for the idea that everything is intertwingled?

Do enough people want complex intertwingled media?
The popularity of portable devices like phones rewards simplicity and shallowness.

It is rare for a person to write an extended article like this, with citations
technology is political

Ted was sophisticated; he read the New Yorker. One criticism is that he didn’t have a target user in mind. Perhaps Ted overestimated the public’s taste and capabilities. It may seem that way in a world of 140-character tweets and Facebook status messages, when flow is prized over stock, but stock will eventually make a comeback.

Can the web be good enough? Windows XP. Boeing 747.

Perhaps content creators will push for a better system, that doesn’t require signins for every site or setting up as an “app”.

Like a database without transactions

Ecclesiastes 9:11
Again I saw that under the sun the race is not to the swift, nor the battle to the strong, nor bread to the wise, nor riches to the intelligent, nor favor to the skillful; but time and chance happen to them all.

Footnote about having college education by age ___. Dropping out of school in 7th? grade.

There is an annotation effort underway. If it becomes common, it could change the web..

Would be equivalent to creating another “app” ecosystem

Check to see how Philosophy concludes

Talk about how Xerox/PARC had an interest in continuing paper

About the Neotext Quote-Context plugin used in this article

In this piece, I’ve used a WordPress plugin I wrote called Neotext to evoke the type of two-way intertextuality Ted has proposed. It’s very primitive in its design, but I hope it inspires readers to imagine what this article could look like if it were in Xanadu.


  1. Possiplex, page 68

  2. Thank goodness Microsoft and Apple have a “search” feature or locating files would be very difficult

  3. Possiplex, page 35

  4. One of the special things about Ted’s Ph.D. thesis, “Philosophy of Hypertext”, is that it includes an extended writeup of the Schematics paper, as well as a scan of the original

  5. Possiplex, page 71: See Photo | Excerpt: Chapter 4, 1957: A Kite-Shaped Nothing

  6. Possiplex, page 73

  7. Possiplex, page 85

  8. This is even more remarkable when you read that Ted dropped out of school in grade 7?. But in ____, Nelson states that he received a college-level education from his Grandfather by the age of ___

  9. View the first video in this article: Paul Saffo Interview at Ted Nelson Book Launch.

  10. Philosophy of Hypertext, page 18, 21

  11. This would be a one-time fee, a purchase, not pay-per-view

  12. The Edit Decision List (EDL) technology was developed by the movie business independently of Nelson

How Bad Data-Driven Decision-Making Led to the Mistake of “New Coke”

The Testing Threat

In the 1980s, Coca-Cola executives were shocked to learn that what Pepsi advertisements said was true — in a random taste test, people preferred Pepsi over Coke.  Coca-Cola executives responded with a massive retooling effort, resulting in a product dubbed “New Coke”.

“New Coke” turned out to be a major flop.  What we know in hindsight is that the way taste tests are done is biased — in small amounts (sips) people prefer the sweeter drink, but in larger amounts (a 12-ounce can) they prefer the original Coca-Cola formula.

The Imperfect Metric

This is a phenomenon that happens all the time — an effort is made to quantify success.  The metric chosen is imperfect, yet people exert a lot of effort to maximize or minimize the metric, even when the flaws in the metric are known.  I’ve talked to students who don’t understand the concepts they are studying, but simply memorize the “correct answers” because they know that is how they will be evaluated.  Teachers teach to the test; and students study to the test.

Music: Data Driven by Shazam

In a similar way, the author of an Atlantic article describes a smartphone app called Shazam.  A “Shazam” is the equivalent of a Google search for music; but the music industry treats search traffic for a song as if it were the same as a Facebook “like”.

So, what meaning does the Shazam metric really convey?  Quality? Novelty?  Attention?

The music industry has made “Shazam” the new “test”, and by “teaching to the test” the direction of the music industry has shifted.  The industry is now more data-driven, but the result is more repetitious music with predictable chord progressions — a sort of “comfort food”.

So, like “New Coke”, does our crude big-data analysis result in better music, or are we just making it simpler and “sweeter”?

Open Source Media

A lot of people think about software when they hear the words “open source,” but I’d like to extend the concept to “media”.  By that I mean books, tv, magazines, radio, etc.


The basic idea is simple — suppose you’re reading a book about Jack Kennedy that makes an interesting claim and then cites its source with a footnote to an “NBC Interview with Jack Kennedy: Chet Huntley and David Brinkley in the Oval Office in the White House, Sept 9, 1963.”

One of my first questions would be: “Can I get a transcript of the interview?”  A second would be: “Can I get a recording of the whole interview?”  Without the first, I can’t verify what the president said.  Without the second, I can’t contextualize what was said.

Two related questions this raises are: “What are the ground rules for the interview;” and “How much editing was done to produce the final product?”

It’s interesting that in Brinkley’s interview, the president was given a number of “mulligans,” although he appears not to have seen the questions ahead of time.

One of the commenters noted:

The media and politicos are in cahoots, rehearsing the interview.

Ground Rules for Interviewing

So I’ve been thinking: “What are fair ground rules for an interview?”  Here’s a few ideas:

  1. The full recording, including out-takes, should be available for the historical record.  (How soon is that?)
  2. Should anything be left out of the transcript?  Inevitably I think the answer will have to be yes, unless you get rid of all “off-the-record” interviews.  I also think the appropriateness of off-the-record remarks varies according to the degree of power that the interviewee has.  The secrets of the powerful often warrant less protection than the secrets of the weak.
  3. It may take time to gain the trust of the interviewee; and in real-life, the interviewer only begins recording when trust has been established and the interviewee is ready.

The Complete Record

I’ve sometimes wondered, what would happen if journalists tried to put everything on the record.  They would record their telephone calls asking for the interview. They would share all their email correspondence.  They would begin recording as they approached the office or home of the interviewee and then just keep filming until after they left.  And they would publish the entire contents of this “record” with every interview they did.  This is now feasible on the web, whereas it was impractical in the television or print-only world.

Now of course most people wouldn’t care to watch the whole thing; but a few would; and they might post notable things for the inspection of a wider group.  Is this what we want?

Paris Review Style Interview

An alternate model is employed by the literary journal “The Paris Review.”  Its editors like to select their favorite authors to interview; and they give the authors full license to edit their answers.1

Naturally, the authors are used to choosing their words carefully; and this approach allows them to extend such care to the interview.  It allows the author to say exactly what they want, potentially resulting in more clarity, or alternatively less accountability.

Speaking about interviewing authors, David Fenza says:

A good literary interview is not faithful to the actual spoken event.  The transcript of the actual spoken interview should only serve as a draft of a dialogue that will, eventually, present the writer as completely and succinctly as possible.  A good literary interview is improvisational, but it’s also revisionary.  Writers are creatures who succeed through revision; they are most themselves when they revise; and this should carry over into the interview.2

When to allow a “Paris-Review” style interview depends on the type of interview desired. In any case, the ground rules should be disclosed.

If a President is given chances to “edit” their answers, there should be some indication of this when the interview is published.  But no matter how the interview is edited or revised, can the full historical record be preserved?

It is common to see something like “This is an edited and condensed version of the interview.” It would be interesting to see some sort of statistical disclosure about how much of the included text was changed; and how much was excluded.

Death Therapy

erasing death

Often when I hear of someone given multiple life sentences as punishment for a crime, I ask myself what the difference is between one life sentence and five life sentences.  I understand there is symbolism in the additional sentences; and in some cases people with a “life sentence” may be released early.1 In any case, I ask myself whether there would actually be a way for multiple life sentences to be carried out.  Perhaps the sentence could mandate killing the criminal and then resuscitating them, only to kill them again and resuscitate them again.  The process could be repeated enough times that the criminal could serve 5 life sentences in the course of a month.  (No, I’m not a lawyer.)

A new book by a doctor who specializes in resuscitation suggests that there is a common experience of dying that is consistent across cultures.  To be considered “dead”, one’s heart has to stop beating.  This stops brain activity, but it does not mean that the brain cells have died.  In fact, it is possible for a body to be chilled, and the person to be resuscitated several hours later.

Upon regaining consciousness, many patients have no memory; but others report seeing a bright light and feeling a very loving presence.  They recall having their life reviewed with them and feeling pain as they recall times when they caused others pain.  Some patients report being inspired to try to do better with their new lives.

So perhaps instead of giving our prisoners a lethal injection, we could give them “death therapy”.

I can imagine that, were this resuscitation perfected so that the risk of patients staying dead were reduced, many wealthy people would pay for such an experience.

  1. Wikipedia: “[B]ack-to-back life sentences are two or more consecutive life sentences given to a felon. This penalty is typically used to prevent the felon from ever getting released from prison. .. this is effective because the defendant may be awarded parole after 25 years when he or she is eligible ..”