Science Fiction as Foresight: An Interview with Vernor Vinge
Vernor Vinge & Jim Euchner
Research-Technology Management
Volume 60, 2017 - Issue 1: Innovation Management—The State of the Art
Pages 11-17 | Published online: 19 Jan 2017
Vernor Vinge is a science fiction writer whose work includes A Fire Upon the Deep, Rainbows End, and “Fast Times at Fairmont High,” among others. He is also the author, in 1993, of “The Coming Technological Singularity.” In this interview, he discusses some of the concepts in his books that presaged the world we are now experiencing, as well as how he does what he does.
JIM EUCHNER [JE]:
It’s such a pleasure to talk with you. I want to start by thanking you for your books. I find them original, prescient, and a pleasure to read.
VERNOR VINGE [VV]:
Thank you. I appreciate it. That is one of the biggest pluses of writing: when people like it, especially people who are doing things in the real world.
JE:
People in corporations need to be able to see a little further into the future, I think. Science fiction can help.
I was just reviewing some of your writing, and the number of things that you anticipated is stunning: the first real description of what cyberspace might mean; concepts like Second Life and virtual reality; the deep web, which people are just now starting to understand; and the surveillance society. I think your books are very thought provoking. They’re standing the test of time, which is rare for your genre.
VV:
I’ve been pleased, in particular, that “Fast Times at Fairmont High” and Rainbows End have stood up as well as they have. I can point to other books I wrote that didn’t. I had a story about computer animation that, in many ways, is as foresightful as the other stories, but at the same time, it has some awful, awful gaps in it.
JE:
I guess that’s an occupational hazard.
VV:
Right. Mixed success is much more the usual track record.
JE:
I’d like to discuss some of the themes of your books, and then talk to you about how you do it. I’m interested in what might translate to people who are trying to use foresight inside companies. I’ll reference a number of your books, but I’d like to start with True Names, which explored what you called “the Other Plane,” and what later became known as cyberspace. One of the concepts that’s interesting to me there is the idea of deeper levels of secrecy and privacy in our lives. The issues with identity and surveillance are really in the news now. How do you think about some of them today?
VV:
They’re as unnerving as ever. I am more concerned generally about security; as we get to the Internet of Things, the issues of cybersecurity become the same as the issues of infrastructure security. They just merge. That is something that people are not looking at as seriously as they should.
There are different levels of risk that go beyond personal privacy. First of all, there is the risk that security people talk about a lot today, and that is simply the ability to hack things and break them. As we get the Internet of Things—and in many ways, it is already here—you can break very important and serious things that could cause a lot of people to die. The step beyond that is the possibility that state actors, or even entities with less capability than state actors, could use the infrastructure to do bad things—even things that leave the infrastructure in place but result in military success.
JE:
That’s something you talked about in some depth in Rainbows End.
VV:
In Rainbows End, I considered a peculiar threat, a form of mind control. This was the “You Gotta Believe Me” technology, or YGBM. Short of that, and not magically, there’s the possibility of taking the large systems that people are using and trusting more and more and turning them against people. For instance, you could imagine nightmare scenarios in which somebody decides that everybody that was left-handed shouldn’t be allowed to exist.
JE:
And how would it play out?
VV:
By taking advantage of the effector aspect of the Internet, which is becoming more and more visible with the Internet of Things, to have artifacts take aggressive action against people the bad guys don’t like.
Vernor Vinge’s science fiction, which includes three Hugo Award–winning novels, offers a prescient vision of some of the issues we’re grappling with today.
At this stage, we mainly think of the Internet of Things as a collection of devices with a low level of active control. But as zillions of microprocessors that are networked together acquire, in addition to sensory modes, effector modes, you can imagine some horrifying science fiction. It’s a concern about moving beyond just exploiting bugs and loopholes to crash things to actually taking advantage of the abilities that the devices have to change things in the environment.
I’ve often said that a thoroughgoing Internet of Things would bring to physical reality the instability that we presently associate with financial markets.
JE:
That’s interesting. Can you say more about that?
VV:
Well, right now when you look at the world around you, you have expectations. If you’re holding a pencil and you let go of the pencil, you have a very good idea as to what is going to happen to it. But if that pencil is part of an Internet of Things that can act on the environment, then the result of dropping the pencil is largely a software issue. Politicians are sometimes accused of trying to redefine reality; now they can actually undertake to do so! And criminals can seek to undermine it.
A consequence would be jurisdictions that want to opt out of certain aspects of the legislated reality. You could get something that I don’t think is speculated about too much, namely serious virtual partitions. Right now there’s only a partial example of this, the Great Firewall of China.
JE:
Let me connect this back to another of your concepts from Rainbows End, which is the idea of citizen scientists and citizen journalists. They embody a sort of hopeful view against a power grab like this: that people will emerge and take the risks to prevent the central control of journalism or science.
VV:
It is a very optimistic and hopeful thing. Before Wikipedia, I would have doubted the possibility. The idea that it only takes a small number of griefers to break things made me skeptical. But the vast majority of people are good, at least in the sense that they want good outcomes for people in general. And now they are empowered by the Internet.
The outcome is still not certain, but I think that there are great possibilities for good in this. I wrote a short nonfiction essay about an extreme version based on a talk I gave at Foo Camp called “The Disaster Stack.” The subtitle is “Crowdsourcing Disaster Defense.”
JE:
One of the issues you have explored is mind control. The idea of the YGBM was built on the idea of some connection between human biology and the computing infrastructure. Is that your view of augmented reality or something deeper?
VV:
That was a magical MacGuffin in the story. You had to do something at the biological level to a person to make them react with credulity to some trigger pattern. Then you put that trigger pattern in a website or e-mail or video and every infected person who sees it has to believe your message.
There’s good reason to believe that something like this is very hard to do. We have millions of years of evolution that give us resistance against it. In normal environments, probably the most that can be done are the sorts of things very charismatic leaders can already do.
JE:
There’s an interface between the human as human and the machine as machine. You explore it with virtual reality and augmented reality. It has both a positive side—it’s the people who retain some degree of control of their world—but there is the risk that it could slide into something like YGBM.
VV:
In fact, in Rainbows End, the villain was a person who was so shaken by the threat of technology that at one point he says to himself, “There needs to be adult supervision here.” And he considers himself to be that adult. I can imagine a person becoming so concerned by the risks of technology that they feel that technology itself has to be taken over—which, I would say, is an impossible goal.
JE:
I would like to explore with you one of your most startling observations. It’s in one of your stories in which people are casually talking about how they hadn’t lost a city in five years, and so things must be getting safer. It’s very dystopic. And yet there are optimistic views in your writing, too. How do you think technology will play out from a “credible freedom” perspective, which is a term you’ve used?
VV:
I think it is reasonable to be optimistic about the future as a whole. The up sides of the technology we have are enormous. There are possibilities for total failures that are credible—where life on earth is destroyed and things like that—but the alternatives are there, and they are every bit as compelling as those the most extremely optimistic philosophers in history have ever described. In fact, some of the things technology can support are so good that part of what bothers people is the idea of finally being confronted with the reality of getting what we have wanted since the beginning of time. When you finally have the possibility of doing some of these things, you have to look those possibilities in the face and ask, “What did we mean when we said we wanted that?”
JE:
Can you give an example?
VV:
A prime example is living forever. If you really mean forever, it raises fundamental questions about what it means to be.
JE:
Yes, it does. And what forever means. There’s the myth of the Sibyl, who was granted eternal life but not eternal youth. At the end, all she wanted was to die.
VV:
That is certainly in the category of examples that illustrate that people haven’t properly articulated what they want. Answering the question about what we really want when we have the potential to get it becomes a very hard problem. In fact, some of the very interesting work that’s being done now concerning superhumanly intelligent AI presents the possibility that it’s dangerous just because humans don’t know what they want (see, for example, Nick Bostrom’s book Superintelligence). Even if you grant humans the ability to make machines do exactly what the humans want, you have a gap: the humans don’t know what they want.
JE:
When people talk about the Singularity, living forever is part of the aspiration, as well as computers that are superintelligent and that in some ways supplant humans. That’s something you explored in A Fire Upon the Deep. Can you talk a little bit about that?
VV:
I think there are a number of different ways that technology can get to superintelligence. Some of them may be a lot safer and more congenial to humanity and life in general than others. One of the paths to the singularity is a transition period where it’s human-computer combinations that are superhumanly intelligent. This is often called intelligence amplification (IA).
David Brin has an interesting take on IA: computers would do what they’re good at, namely very fast reasoning about things in the real world, and humans would do what they’re good at, namely wanting things. (The question of wanting the right things still remains, alas.)
Ultimately, I would judge that when it comes to intelligence, biology doesn’t have legs. Eventually, intelligence running on nonbiological substrates will have more power. Perhaps we can continue to participate as uploads. Or perhaps the transition is something like what Danny Hillis and Hans Moravec have imagined: this next generation of intelligence are our children—our “mind children” was Moravec’s term for it. Like most children, they will not end up doing exactly what their parents wanted, and the parents may be fearful of the children part of the time. But very often, the children do much better than their parents, rising off the efforts that their parents made. That’s a credible and certainly an optimistic way of looking at the future.
JE:
An interesting thing for me in your books is the variety of characters that you develop. In A Fire Upon the Deep, there is a character, Wickwrackrum, who is the integration of several subcharacters. He is not only an aggregation of their physical parts but also of their integrated personalities. The concept you considered was the separation and reintegration of personalities, as beings like Wickwrackrum break apart and recombine. I guess that will be one of the challenges for human/computer combinations. People will have to integrate their personalities with whatever the intelligence is they’re working with. It will be interesting to see how that changes us beyond just making us smart enough to look things up quickly.
VV:
I think we’re rushing headlong into an era where issues about personality and personality integration within the mind and across minds are going to be practical issues.
JE:
It’s not easy even to think about.
VV:
True! It addresses fundamental issues about what we absolutely believe in.
JE:
We also have a strong idea of our identity as integrated individuals. To give that up would be very difficult.
VV:
That fundamental philosophical point—“I think, therefore I am”—is the basic premise that some philosophers use to prove that you really know something. It’s been one of the strongest arguments of AI skeptics. They argue that even if a machine and its program did some wonderful thing, and even if it did so better than any human, it would not be truly intelligent because the machine didn’t know that it had done it.
JE:
That’s imposing our definition of what it is to be intelligent on machines.
VV:
Right. Its only “virtue” is that it’s totally undisprovable.
JE:
One of the things you explore in your writing is the pairing of a relentless evil with some good impulse, an antithesis that emerges and has some chance of preventing the dominance of the evil. For example, in A Fire Upon the Deep, when the intelligence is awakened and then starts to grow, there’s a weak inkling in a few of the intelligences that are awakened that “We shouldn’t be here, but perhaps we’re enough.”
VV:
There were elements of goodness there that were fighting against the big villain in A Fire Upon the Deep, which I kept mainly offstage because it was so big.
JE:
Do you see that as an essential thing, that as technologies evolve, evil will always couple and generate its opposite, or is that just a theme you used as a story artifice?
VV:
I do believe something like that. But I have also come to the unhappy conclusion that another way of saying this is that, with today’s technology, you’re playing for very high stakes.
I think that the three greatest human technological inventions are writing, money, and computers. Each of those has downsides that have been illustrated by history, and obviously, each of them has enormous upsides. Looking forward, I think that it’s a high-stakes game, and you don’t know how it’s going to play out, but it could turn out to be very good. It could lead to everything that philosophers have ever wanted for mankind. That uncertainty and duality I see persisting into the farthest future and at the largest scale. And the healthiest approach at every stage is realistic optimism.
JE:
As things get to be more and more complex, will that limit our ability to get them right? Is complexity the ultimate limiting force on technological evolution? Or do you think our tools might help us get beyond that?
VV:
The second. If we don’t get the Singularity, then technological progress will just sort of level off. The level of complexity will be such that we can’t manage it, and so we won’t. Things might just fumble on into an equilibrium, perhaps not a bad place, but a place where things do not get better. There’s a very interesting book that came out in 1969, by molecular biologist Gunther Stent, The Coming of the Golden Age: A View of the End of Progress.
JE:
That’s an interesting title.
VV:
Yes, especially from a molecular biologist. He argues that we’re entering an era where intellectual pursuits make bigger and bigger strides, and then turn turtle. You can tell because they become more ornate and more navel gazing; they focus on looking at the past and putting together the parts in different ways. I’m very big on scenarios, and I think he does a wonderful job with that particular scenario.
It’s a scenario he thinks is going to happen, but in the last chapter, he also says he could be wrong if we get faster-than-light space travel or we figure out how to make superhumanly intelligent brains.
JE:
At least the second seems plausible at this point.
VV:
I thought his observation was admirable because I think that’s what everybody should do: no matter how committed they are to a vision, they should make a list of deal breakers.
JE:
Let me use that to segue into how you do what you do. How do you go about getting the science pretty much right? You have a technical background, but how did you manage to see all the way from green screens to the deep web, for example?
VV:
I have people that I talk to about technology. I have friends who are electrical engineers who I was talking to at the right time about miniaturization and networks. I think that made it a lot easier to get things right about computer stuff. One of the nice things about being a science fiction writer is that you end up getting to talk to people who really know things about things. That goes in the other direction, too; I like to say that writing, more than any other present-day art form, co-opts the mind of the user as a display device.
JE:
Can you say more about that?
VV:
Many people who are very smart and technologically oriented enjoy reading science fiction. If you can engage their emotions in the story, you get something that the science fiction people call the “willing suspension of disbelief.” In other words, readers are willing to play ball with what you’re proposing. And once you have them emotionally, if they see something that is garbage, that just doesn’t make sense or is technologically wrong, they will spin up their high IQ and figure out an explanation for why it might be true after all.
The mind of the reader is a sort of display device. Readers are looking at what you write, and they are applying their intelligence to figure out a way for it to make sense. Occasionally, a writer will run into a very smart reader, someone much smarter than the writer, who will say, “Oh, I read your story, and it was so cool. I saw on page 55 that you understood about blah, blah, blah.” And the writer can smile benignly and say, “Well, yes, I’m glad you were able to see that.”
The flip side of this can be ugly. That’s when you have a very smart reader who’s reading along, and you lose them; they become emotionally disengaged from the story. When you lose someone and they turn their IQ against you, they can find what’s wrong with everything; no matter how well you thought something out, they can figure out a reason why it’s baloney.
JE:
Do you continue to reach out to different communities? You have a lot of access now, because of your work. But seeing the future takes more than having access to smart people; you have to actually listen and understand them and then project into the future.
VV:
I’m not that sociable, so I have not exploited the access as much as I should. But for Rainbows End, since I’m very weak on bioscience topics, I found molecular biologists who very kindly were willing to talk to me. That’s probably the closest I’ve come to actually seeking science people out and saying, “Hi, I write science fiction. Can we talk about this or that?” But that does happen. Writers who are more sociable than I, I think, very consciously do that.
And there are people in research areas who are very much into science fiction and find the time to talk to writers, even in the midst of a research schedule.
JE:
Once you have the bits and pieces of understanding, what do you do to inspire your stories? Or do the stories have a life of their own?
VV:
As I was writing more and more science fiction, from the ’60s on, I would ask questions like that of writers I respected because I wanted to find out what the secret sauce was. One thing I learned was that there are a number of work styles. One that is fairly common applies to me. I have a general idea of what’s going on; I have particular cool ideas that I’ve had in the run-up to writing the story. But when I get to the sentence and paragraph level, it is amazing how often contingency rears its head. Some small question suddenly resolves or determines a major part of the story. The most spectacular example of that in my case was a story I wrote called “The Blabber.”
JE:
I have not read that one.
VV:
It’s about a solitary member of a Tines pack who is taken as a pet by an explorer. And no one knows about Tines or pack minds; they just have this one animal. It’s basically a story about a boy and his dog. And as I was writing the first page of the story, I had a big problem and a small problem. The big problem was that I didn’t know what the villains in the story were up to, and that was going to be the MacGuffin of the story. Why are the villains doing the things they are? I had to get that answer before I got done with the story.
But on the first page of the story, I ran across a very small problem: what was the sex of the pet dog, the pet creature, because I had to use a pronoun—am I going to call it he, she, or it? When I figured out the answer to that, I also knew what the villains were up to.
JE:
So partly, it’s making sure, as you trace this story, that you’re creating some sort of credible consistency within the area of suspended disbelief?
VV:
Right, exactly. What you just said is a very important thing. The other thing is to write something that somebody wants to read because they want to find out what happens next and because they are emotionally interested in the characters.
JE:
I know that some companies have experimented with using science fiction to help the executives in the company think into the future. They hire an author who writes a story that relates to their industry. Have you ever thought about how businesses might use science fiction to help them make better decisions?
VV:
Yes. I think it was the Global Business Network [GBN] that opened my eyes to the possibilities. When GBN did their scenario-planning sessions with industry clients, they often would invite a science fiction writer to participate. When they did that with me, they never asked me to do a story write-up. But during a scenario-planning session, it can be useful to have a loose cannon around. By loose cannon, I mean someone who doesn’t really know the real-world complications.
Suppose the client company is doing studies about video technology. The science fiction writer you’ve invited hasn’t participated in the years and years of effort it has taken to get the industry to where it is today; that writer has only a naïve idea, maybe a very vivid notion, but only a naïve idea about where things could go. As they listen to the discussion, they will occasionally pop up with things that are locally disruptive. I think GBN found that that could actually be helpful.
JE:
In your essay on the Singularity, you said you thought the Singularity would be no earlier than 2005 and no later than 2030. We’re now pretty close to the middle of that range. Do you still think that the Singularity will happen in that timeframe?
VV:
I grant, first of all, that it’s not certain; the Singularity might never happen. Certainly there are disasters that could postpone it a long way. But in the absence of those sorts of extreme situations, I would be surprised if it hasn’t happened by 2030. I think we are pretty much on schedule for it to happen. But if you were to ask people who are working in the field, most of them regard 2030 as overly optimistic; most actual researchers do not believe that it will happen that soon.
JE:
What we’re seeing in industry after industry, and I guess it will happen at an accelerated pace when the Singularity happens, is disruption all over the place.
VV:
Right. Disruption. And with it, real technological unemployment and things like that. These consequences can masquerade as other issues. It’s been sort of strange to me to watch things that seem to be caused by very large and rather impartial technological forces masquerading as all sorts of other things.
JE:
I agree with you. It’s easier to be angry at some concrete entity than at something diffuse like technology. I guess the Luddites went after the looms. Do you have thoughts about how the acceleration of technology will shift organizations? We talked about the fact that what it means to be an individual will have to shift in some ways. What will the implications be for organizations?
VV:
I do have opinions about that, but they are even more naïve than usual, and ill formed. I can run through some of the issues.
I think that we’re in a situation now where the corporate goal to promptly satisfy stockholders is shifting; there are companies that are investing in things that, in the old days, would have been regarded as irresponsible—planting feelers way out from the center of the core expertise of the company, things that may or may not be successful. It’s very possible that in this century, or in this decade, the sort of company that appears scatterbrained from the standpoint of classical stewardship is actually the company that is going to be the winner.
A related question is, what do you do when the humans are still on top, but there is no real employment except for a very small percentage of the most talented? And that small percentage is itself becoming smaller as the general level of automation gets higher? Hans Moravec had the notion that perhaps the machines would be given the rest of the universe, but there would be an agreement, perhaps out of some green sentiment (on the part of the AIs!), that the humans and their way of looking at things would be allowed to persist on the surface of the Earth. Anybody who wanted to emigrate into space would be allowed to do so either physically or as a mind upload, but if they did, they would be giving up the Earth-bound safety net provided to normal humans.
A different scenario: one could imagine that money would be given to the humans, even though they weren’t doing anything anymore. When economic capital wakens, that is, when it can be converted into thought and creativity more effective than any human labor, it might be that only capital pays taxes.
JE:
It’s interesting to explore that idea. At some level, I think people derive their meaning and their fulfillment from work. Even if people are not enslaved or impoverished, it’s not a very happy future if they have nothing meaningful to do except to want things.
VV:
I agree with you, and that’s why, for myself and probably for a lot of people, the intelligence amplification route, in which humans continue to participate, is preferred. Humans who like to do neat things can be players on larger and larger scales.
There’s an interesting book by Robin Hanson, an economist at George Mason University. The title of the book is The Age of Em: Work, Love, and Life when Robots Rule the Earth. It discusses what he calls the age of emulations and describes one path to the Singularity. But unlike other paths, the early stages of what he describes can be shown very concretely. Suppose that we don’t get superhuman intelligence, but we just have programs that are humanly intelligent, perhaps made from very good models of particular human personalities. This is not about immortality, although that comes along with it, depending on what you believe immortality is. Hanson’s idea is that if you had these human-level intelligences, but running on a machine, then running them faster ought to be doable. And if you could do that, you have what some might call “weak superhumanity.” Intellectually, such minds could do in a minute what you and I could do in a year—but if you have the time, they are knowable.
Society as a whole, the intellectual center of gravity, would move to these entities. A great strength of the book is that Hanson concentrates on the economics of the scenario. This is knowable economics because there’s nothing there that’s really superhumanly intelligent. He regards this as an era that will pass also. But for this period, you can do fairly conventional economics and say all sorts of things about what it would be like, if you also assume that the rule of law prevails, which he does.
JE:
That does sound very interesting. It sounds like it would be helpful for people who are trying to think forward, by giving them something to sink their teeth into.
How about your writing? Are you finding it harder to write science fiction because everything is changing so fast? I think you said at one point that “what was once unrecognizably strange is now everyday reality.”
VV:
I definitely do find that. One thing I like to say is that we science fiction writers are the first occupational group to be impacted by the Singularity, even if it never happens. Because if we’re writing hard science fiction—science fiction that is supposed to follow technology honestly, with some magic thrown in maybe, but follow technology honestly—then if you believe in the Singularity, you run into a wall across the future. (And even if you don’t believe the Singularity will happen, you have readers who do, so you still must deal with the possibility if only to dismiss it.)
We science fiction writers have devised various ways of working around the problem. One is to have some great physical disaster that postpones the Singularity. I’ve done that. Another is to say, “Oh yeah, the Singularity can really happen, but not in this part of the universe,” which is what I did in the Zones of Thought stories.
JE:
When I was younger, I did stage magic, and that’s another thing that has bitten the dust. It’s still there, but a lot of the things that used to be sleight of hand or require a very specific, hard-to-understand apparatus, are now easy to imagine as a natural consequence of wireless connections and miniaturized computers. When people say they’ve figured a trick out, you’re reduced to saying, “Okay, I guess you’re right. It could be done that way.” It isn’t, but who cares. It doesn’t make for spectacular magic to watch. Magic is now something we experience every day.
VV:
That’s exactly right. Did you see the episode of The Big Bang Theory where Sheldon’s friends mystify him with a classic clairvoyant magic trick?
JE:
I did. It was very funny, wasn’t it?
VV:
At one point, Sheldon manages to simulate the trick with technology. And his two friends just demolish him with about 30 seconds of explanation of the sort that you just described.
JE:
And of course, there was a confederate; that’s how Sheldon’s friends did the original trick.
VV:
That was such a beautiful story on so many levels. The use of a secret confederate highlighted a fundamental gullibility of some tech-oriented people, me included. If you confront an engineer or a scientist with a trick like that, an ESP-type trick, they regard it as a challenge: what can explain this?
JE:
It’s similar to what you said about the mind as a display device. They’re trying to figure it out in the way that they figure things out.
VV:
I think it was James Randi who wrote an essay about a wealthy engineer who had donated money to fund an investigation of the paranormal. It was an honest investigation. Randi was called in as a consultant, and he realized the investigators were being flimflammed by some of their test subjects. And the idea that someone would cheat in a real search for truth was alien to these engineers.
JE:
That’s pretty telling. I’ve enjoyed this. We wandered far across the landscape, and I’m sure we could talk for another two hours.
VV:
I enjoyed the conversation very much. I appreciate your reaching out.
JE:
Thank you very much.