Edge 208, April 25, 2007
THE THIRD CULTURE
WHO SAYS WE KNOW: ON THE NEW POLITICS OF KNOWLEDGE
There are a lot of things that "everybody knows." Everybody knows that Everest is the tallest mountain on Earth, that 2+2=4, that most people have two eyes—and a lot of other things. If I were to go on, it would get tedious very fast, because, after all, these are things that everybody knows.
But there are also a lot of other things that "everybody knows," except that not everybody agrees that everybody knows them. For example, everybody knows not only that there has been significant global warming recently, but also that human beings caused this by burning fossil fuels. We know that evolution is as solidly proven as most of the rest of science, and that intelligent design isn't science at all; that Iraq never had any weapons of mass destruction (after they destroyed them); and that the U.S. government had nothing to do with the destruction of the World Trade Center. Except that, for each of these things "we all know," significant minorities insist that they're false.
Those dissenters, however, don't matter much when it comes to most journalism, reference, and education. Society forges ahead, reporting and teaching things without usually mentioning the dissenters, or only in a disparaging light. As a result, certain claims that some of us don't accept end up being background knowledge, as I'll call it. If you question such background knowledge, or even express some doubt about it, you'll look stupid, crazy, or immoral. Maybe all three.
To be able to determine society's background knowledge—to establish what "we all know"—is an awesome sort of power. This power can shape legislative agendas, steer the passions of crowds, educate whole generations, direct reading habits, and tar as radical or nutty whole groups of people who otherwise might seem perfectly normal. Exactly how this power is wielded and who wields it constitutes what we might call "the politics of knowledge." The politics of knowledge has changed tremendously over the years. In the Middle Ages, we were told what we knew by the Church; after the printing press and the Reformation, by state censors and the licensers of publishers; with the rise of liberalism in the 19th and 20th centuries, by publishers themselves, and later by broadcast media—in any case, by a small, elite group of professionals.
But we are now confronting a new politics of knowledge, with the rise of the Internet and particularly of the collaborative Web—the Blogosphere, Wikipedia, Digg, YouTube, and in short every website and type of aggregation that invites all comers to offer their knowledge and their opinions, and to rate content, products, places, and people. It is particularly the aggregation of public opinion that instituted this new politics of knowledge. In the 90s, lots of people posted essays on their personal home pages, put up fan websites, and otherwise "broadcasted themselves." But what might have been merely vain and silly a decade ago is now, thanks to aggregation of various sorts, a contribution to an online mass movement. The collected content and ratings resulting from our individual efforts give us a sort of collective authority that we did not have ten years ago.
So today, if you want to find out what "everybody knows," you aren't limited to looking at what The New York Times and Encyclopedia Britannica are taking for granted. You can turn to online sources that reflect a far broader spectrum of opinion than that of the aforementioned "small, elite group of professionals." Professionals are no longer needed for the bare purpose of the mass distribution of information and the shaping of opinion. The hegemony of the professional in determining our background knowledge is disappearing—a profound truth that not everyone has fully absorbed.
The votaries of Web 2.0, and especially the devout defenders of Wikipedia, know this truth very well indeed. In their view, Wikipedia represents the democratization of knowledge itself, on a global scale, something possible for the first time in human history. Wikipedia allows everyone equal authority in stating what is known about any given topic. Their new politics of knowledge is deeply, passionately egalitarian.
Today's Establishment is nervous about Web 2.0 and Establishment-bashers love it, and for the same reason: its egalitarianism about knowledge means that, with the chorus (or cacophony) of voices out there, there is so much dissent, about everything, that there is a lot less of what "we all know." Insofar as the unity of our culture depends on a large body of background knowledge, handing a megaphone to everyone has the effect of fracturing our culture.
I, at least, think it is wonderful that the power to declare what we all know is no longer exclusively in the hands of a professional elite. A giant, open, global conversation has just begun—one that will live on for the rest of human history—and its potential for good is tremendous. Perhaps our culture is fracturing, but we may choose to interpret that as the sign of a healthy liberal society, precisely because knowledge egalitarianism gives a voice to those minorities who think that what "we all know" is actually false. And—as one of the fathers of modern liberalism, John Stuart Mill, argued—an unfettered, vigorous exchange of opinion ought to improve our grasp of the truth.
This makes a nice story; but it's not the whole story.
As it turns out, our many Web 2.0 revolutionaries have been so thoroughly seized with the successes of strong collaboration that they are resistant to recognizing some hard truths. As wonderful as it might be that the hegemony of professionals over knowledge is lessening, there is a downside: our grasp of and respect for reliable information suffers. With the rejection of professionalism has come a widespread rejection of expertise—of the proper role in society of people who make it their life's work to know stuff. This, I maintain, is not a positive development; but it is also not a necessary one. We can imagine a Web 2.0 with experts. We can imagine an Internet that is still egalitarian, but which is more open and welcoming to specialists. The new politics of knowledge that I advocate would place experts at the head of the table, but—unlike the old order—gives the general public a place at the table as well.
We want our encyclopedias to be as reliable as possible. There's a good reason for this. Ideally, we'd like to be able to read an encyclopedia, believe what it says, and arrive at knowledge, not error. Now, according to one leading account of knowledge called "reliabilism," associated with philosophers like Alvin Goldman and Marshall Swain, knowledge is true belief that has been arrived at by a "reliable process" (say, getting a good look at something in good light) or through a "reliable indicator of truth" (say, proper use of a calculator).
Reliability is a comparative quality; something doesn't have to be perfectly reliable in order to be reliable. So, to say that an encyclopedia is reliable is to say that it contains an unusually high proportion of truth versus error, compared to various other publications. But it can still contain some error, and perhaps a high enough proportion of error that—as many have said recently—you should never use just one reference work if you want to be sure of something. Perhaps, if one could know that an encyclopedia were perfectly reliable, one could get knowledge just by reading, understanding, and believing it. What a wonderful world that would be. But I doubt both that there is a way of knowing that about an encyclopedia, and also that humanity will ever be blessed with such a reference work. Call such a thing a perfect encyclopedia. Well, there is no such thing as a perfect encyclopedia, and if there were, we'd never know if we were holding one.
Why not? Well, when we say that encyclopedias should state the truth, do we mean the truth itself, or what the best-informed people take to be the truth—or perhaps even what the general public takes to be the truth? I'd like to say "the truth itself," but we can't simply point to the truth in the way we can point to the North Star. Some philosophers, called pragmatists, have said there's no such thing as "the truth itself," and that we should just consider the truth to be whatever the experts opine in "the ideal limit of inquiry" (in the phrase of C. S. Peirce). While I am not a pragmatist in this philosophical sense, I do think that it is misleading to say simply that encyclopedias aim at the truth. We can't just leave it at that. Unfortunately, statements do not wear little labels reading "True!" and "False!" We need a criterion of encyclopedic truth—a method whereby we can determine whether a statement in an encyclopedia is true.
Let's suppose our criterion of encyclopedic truth is encoded in how encyclopedists decide whether to publish a statement. The method no doubt used by Encyclopedia Britannica and many other reference works goes something like this. If an expert article-writer states that p is true, and the editors find p plausible, and p gets past the fact-checkers (who consult other experts and expert-compiled resources), then p is true, at least as far as this encyclopedia is concerned.
The problem is that this is a highly fallible process. Sometimes, we discover that p is false. Sometimes, it's false because somebody made a typo or misinterpreted expert opinion; but sometimes it's false because, though faithful to expert opinion, expert opinion itself turned out to be false. Even if there were a beautifully reliable method of capturing expert opinion, that wouldn't be an infallible criterion of encyclopedic truth, because expert opinion is frequently wrong. Unfortunately, as a society, we usually can't do any better: if we learn that expert opinion is wrong, the corrected view becomes the new expert opinion. Besides, experts disagree about a lot of things. It is presumptuous, and a great disservice to readers, for editors to choose one expert to believe over another.
So we shouldn't say that encyclopedias aim to capture either the truth itself or any perfectly reliable indicator of truth. That is too much to hope for from an encyclopedia. Instead, consider: what do we most want, as responsible, independent-minded researchers, out of an encyclopedia? Primarily, I think most of us want mainstream expert opinion stated clearly and accurately; but we don't want to ignore minority and popular views, either, precisely because we know that experts are sometimes wrong, even systematically wrong. We want well-agreed facts to be stated as such, but beyond that, we want to be able to consider the whole dialectical enchilada, so that we can make up our own minds for ourselves.
Notice that the word "expert" is used in various ways. For instance, journalists, interviewers, and conference organizers—people trying to gather an audience, in other words—use "expert" to mean "a person we can pass off as someone who can speak with some authority on a subject." Also, we say the "local expert" on a subject is the person who knows most, among those in a group, about the subject. Neither of these is the really interesting sense of "expert."
We also speak of experts in the credentials sense, that is, any person who meets a (vague) standard of credentials, or evidence of having studied (or practiced) some matter, to whatever extent is thought needed for expertise—for example, as defined by professional organizations. And finally, surely we also speak of experts in a more objective sense: someone who really does have expert knowledge of a subject, whatever that amounts to. On my view, objective expertise amounts to something like this: if we rely on the expert's opinion in matters of their expertise, that really does increase the probability we have the truth.
The hope is that expertise in the credentials sense is a good but imperfect sign of expertise in the objective sense. Personally, I am not so cynical as to deny this. So, I believe that if someone meets a certain standard of credentials about some topic, then that person is probably more reliable on that topic than someone picked at random. Bear in mind, however, that "credentials" should be construed very broadly, and can mean much more than simply degrees and certifications.
Encyclopedias should represent expert opinion first and foremost, but also minority and popular views. Here, surely we are stuck with the credentials sense of "expert opinion." Just as statements do not bear labels announcing their truth values, people do not bear labels announcing their objective expertise. When decision makers have to decide whether a person really is, objectively, an expert, they have to use evidence that they can agree upon. But any such evidence can count as a "credential" in a broad sense. No doubt some wholly uncredentialed people have expertise in the objective sense—some autodidacts must fit the bill. Moreover, it's surely possible for other people to come to recognize such hidden expertise. But when groups must make decisions about who is an expert, they must have evidence; if evidence of expertise, or credentials, is lacking, the decision makers cannot be expected to acquaint themselves deeply with each person individually. And what if someone who is unquestionably an expert does interview, and declare to be an expert, a wholly uncredentialed autodidact? Then that opinion is the autodidact's first credential.
First, why open up encyclopedia projects to the general public? While the whole body of people called "experts" (in any very restrictive sense) is probably capable of writing about and representing the interests and views of the larger public, the trouble is that they won't actually want to do so, or they lack the time to do so, in as much detail as the public itself is capable of. It is difficult and tedious enough for experts to cover their own areas. While there are people with expertise about popular culture—from celebrity journalists, to video game designers, to master craft workers—there are far more people who can do a good job summarizing information about "popular" topics than there are experts about them. Similarly, there are usually a number of experts about theories that are far out of the mainstream—one thinks of people who have expert knowledge of astrology, or some kinds of alternative medicine—but again, the quantity of non-expert people able to write reasonably well about such theories is much greater.
I'll have no truck with the view that simply because something is out of the mainstream—unscientific, irrational, speculative, or politically incorrect—it therefore does not belong in an encyclopedia. Non-mainstream views need a full airing in an encyclopedia, despite the fact that "the best expert opinion" often holds them in contempt, if for no other reason than that we have better grounds on which to reject them. Moreover, as we are responsible for our own beliefs, and as the freedom to believe as we wish is essential to our dignity as human beings, encyclopedias do not have any business making decisions for us that we, who wish to remain as intellectually free as possible, would prefer to make ourselves.
There is another reason to engage the public: due to its sheer size, the public can also contribute enormous breadth and extra eyeballs for all sorts of the more usually "expert" topics, too. The general public may add a far greater assortment of topics and perspectives than one would get if one assigned only experts to write about only their areas of expertise. Moreover, the sheer quantity of eyeballs gazing at obvious mistakes means that such mistakes will be fixed more quickly and reliably than if one engages only experts working only on their areas of expertise. Finally, and perhaps most importantly, the inclusion of the general public in an encyclopedia project, and ensuring that all subjects are treated at once, will tend to reduce the insularity common to many specialized fields: the result is that the encyclopedia's readers will be subjected less to dogmatic presentations of wrongheaded intellectual fads.
Therefore, the assistance of the general public is needed in encyclopedia projects. Now let's turn to the other group: why are experts needed? Or perhaps a better question is: why is it important to ensure that experts are involved?
Experts, or specialists, possess unusual amounts of knowledge about particular topics. Because of their knowledge, they can often sum up what is known on a topic much more efficiently than a non-specialist can. Also, they often know things that virtually no non-specialist knows; and, due to their personal connections and their knowledge of the literature, they often can lay their hands on resources that extend their knowledge even further.
Another thing that experts can do, that few non-experts can, is write about their specializations in a style that is credible and professional-sounding. Frequently, students and dabblers possess an adequate understanding of a topic, but they are wholly incapable of saying much about it without revealing their inexpert knowledge, in one way or another—even if they are superb writers and even if what they say is correct, strictly speaking. This is a common problem on Wikipedia. Furthermore, while a great many specialists are terrible writers, some of the very best writers on any given topic are specialists about that topic. Many experts take great pride in their ability to write about their own fields for non-experts.
Finally, experts are—albeit fallibly—the best-suited to articulate what expert opinion is. It is for the most part experts who create the resources that fact-checkers use to check facts. This makes their direct input in an encyclopedia invaluable.
For these reasons, I believe experts should share the main responsibility of articulating what "we know" in encyclopedia projects; but they should share this responsibility with the general public. Involving both groups in a content production system has the best chance of faithfully representing the full spectrum of expression. To exclude the public is to put readers at the mercy of wrongheaded intellectual fads; and to exclude experts, or to fail to give them a special role in an encyclopedia project, is to risk getting expert opinion wrong.
The most massive encyclopedia in history—well, the most massive thing often called an encyclopedia—is Wikipedia. But Wikipedia has no special role for experts in its content production system. So, can it be relied upon to get mainstream expert opinion right?
Wikipedia's defenders are capable of arguing at great length that expert involvement is not necessary. They are entirely committed to what I call dabblerism, by which I mean the view that no one should have any special role or authority in a content creation system simply on account of their expertise. I apologize for the neologism, but there is no word meaning precisely this view. I did not want to use "amateurism," since that word is opposed to "professionalism," and the view I want to discuss attacks not the privileges of professionals, per se, but of experts. The issue here is not whether people should make money from their work, but whether their special knowledge should give them some special authority. To the latter, dabblerism says no.
Wikipedia's defenders have a great many arguments for dabblerism: non-experts can create great things; the "wisdom of crowds" makes deference to experts unnecessary; studies appear to confirm this in the case of Wikipedia; there is no prima facie reason to give experts any special role; it is only fair to judge people by what they do, and not by their credentials; and making a role for experts will actually ruin the collaborative process.
Not one of these arguments is any good.
First, it is absolutely true that dabbleristic (if you will), expert-spurning content creation systems can create amazing things. That's what Web 2.0 is all about. While many might sneer at these productions generally, Web 2.0 has created some quite useful and interesting websites. Wikipedia and YouTube aren't popular for nothing, and for many people they are endlessly fascinating.
This does not go the slightest way toward showing, however, that expert guidance is unnecessary, or that it would not be a positive addition to content creation systems, and particularly to encyclopedia projects. Many people have looked at Wikipedia's articles and concluded that they sure could use work by experts and real editors. It's one thing to say that Wikipedia is amazing and useful; it is quite another to say that we couldn't do better by adding a role for experts.
At this point, my opponent might pull out a very interesting and popular book called The Wisdom of Crowds by James Surowiecki, and say that it shows that Wikipedia has no need of expert reviewers. Surowiecki explains some fascinating phenomena, but nowhere does he say that Wikipedia doesn't need experts. And no surprise: by Surowiecki's own criteria, there's no reason to think that Wikipedia displays "the wisdom of crowds." Let me explain.
In the introduction of the book, Surowiecki describes an agricultural fair in England in 1906, at which all manner of people competed to guess the weight of an ox. There were many non-experts in the crowd, so the average of the guesses should have been ridiculously off; but in fact, while the ox actually weighed in at 1,198 pounds, the average of the guesses was 1,197 pounds. This, Surowiecki says, illustrates a widely-recurring phenomenon, in which ordinary folks in great numbers acting independently can display behavior that, in aggregate, is more "wise," or accurate, than the greatest expert among them.
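The arithmetic behind this phenomenon can be sketched in a few lines of Python. The noise model and crowd size below are illustrative assumptions, not the fair's actual data; the point is only that many independent, unbiased errors tend to cancel in the average:

```python
import random

random.seed(0)
TRUE_WEIGHT = 1198  # pounds: the ox's actual weight in the essay's example

# Hypothetical noise model: each of 800 fairgoers guesses independently,
# off by as much as 300 pounds in either direction but unbiased on average.
guesses = [TRUE_WEIGHT + random.uniform(-300, 300) for _ in range(800)]

crowd_estimate = sum(guesses) / len(guesses)
typical_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"crowd average is off by {abs(crowd_estimate - TRUE_WEIGHT):.1f} lb")
print(f"a typical individual is off by {typical_error:.1f} lb")
```

The independent errors largely cancel: the average lands far closer to the truth than a typical individual guess, mirroring the 1,197-versus-1,198 result Surowiecki reports.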
Of course, Surowiecki is no fool. His claim isn't that whatever data "crowds" produce are reliable, regardless of circumstances. Among other things, each member of a "crowd" needs to make decisions independently of the others; crowds whose members influence one another lose their aggregate accuracy.
But that's exactly what happens on wikis, and on Wikipedia. To be able to work together at all, consensus and compromise are the name of the game. As a result, the Wikipedian "crowd" can often agree upon some pretty ridiculous claims, which are very far from both expert opinion and from anything like an "average" of public opinion on a subject. I don't mean to say that the Wikipedia process is not robust and does not produce a lot of correct answers. It is and it does. But the process does not closely resemble the "wise crowd" phenomena that Surowiecki is explaining.
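The failure mode can also be sketched numerically: when each guess leans heavily on an earlier, vocal guess rather than being made independently, the average tracks the anchor, not the truth. The herding model below is purely hypothetical:

```python
import random

random.seed(1)
TRUE_WEIGHT = 1198  # pounds, as in the ox example

# Hypothetical herding model: a loud early guesser anchors the group at 950,
# and each later guesser defers 80% to that anchor instead of judging alone.
ANCHOR = 950
guesses = [0.8 * ANCHOR + 0.2 * (TRUE_WEIGHT + random.uniform(-300, 300))
           for _ in range(800)]

crowd_estimate = sum(guesses) / len(guesses)
print(f"herded crowd estimate: {crowd_estimate:.0f} lb (truth: {TRUE_WEIGHT} lb)")
```

With independence gone, size no longer helps: the 800-person average settles near the anchor (around 1,000 lb under these assumptions), nowhere near the truth.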
Besides, the standard examples demonstrating the strength of group guessing—say, that a classroom's average guess of the number of jelly beans in a jar is better than all individual guesses, or that experts cannot outperform financial markets—do not lend the slightest bit of support to the notion that experts and editors are not needed for publishing or content creation. There are objective facts about the number of jelly beans, or about market prices, that experts can be right or wrong about. But what facts are Wikipedians attempting to describe? Objective facts that you can point to like a stock price in a newspaper? Only rarely. The facts they want to amass are facts contained in the books and articles that, it so happens, they are so keen on citing. Who writes those books and articles? Experts, mostly. To say that expert guidance is not really needed in encyclopedia construction is like saying the opinion of the person who counted out the jelly beans before putting them in the jar is not really useful.
It's easy to be impressed with the apparent quality of Wikipedia articles. One must admit that some of the articles look very impressive, replete with multiple sections, surprising length, pictures, tables, a dry, authoritative-sounding style, and so forth. These are all good things (except for the style). But these same impressive-looking articles are all too frequently full of errors or half-truths, and—just as bad—poor writing and incoherent organization. (Jaron Lanier was eloquent on the latter points in his interesting Edge essay, "Digital Maoism.") In short, Wikipedia's dabblerism often unsurprisingly leads to amateurish results.
Some might point to Nature's December 2005 investigative report—often billed as a scientific study, though it was not peer-reviewed—that purported to show, of a set of 42 articles, that whereas the Britannica articles averaged around three errors or omissions, Wikipedia averaged around four. Wikipedia did remarkably well. But the article proved very little, as Britannica staff pointed out a few months later. There were many problems: the tiny sample size, the poor way the comparison articles were chosen and constructed, and the failure to quantify the degree of errors or the quality of writing. But the most significant problem, as I see it, was that the comparison articles were all chosen from scientific topics. Wikipedia can be expected to excel in scientific and technical topics, simply because there is relatively little disagreement about the facts in these disciplines. (Also because contributors to wikis tend to be technically-minded, but this probably matters less than that it's hard to get scientific facts wrong when you're simply copying them out of a book.) Other studies have appeared, but they provide nothing remotely resembling statistical confirmation that Wikipedia has anything like Britannica-levels of quality. One has to wonder what the results would have been if Nature had chosen 1,000 Britannica articles randomly, and then matched Wikipedia articles up with those.
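A rough back-of-envelope calculation shows how little a 42-article sample can resolve. The Nature report gave averages of roughly three and four errors per article but not the per-article spread, so the code below assumes, purely for illustration, Poisson-like counts (variance equal to the mean):

```python
import math

# Back-of-envelope only: assumes (hypothetically) Poisson-like per-article
# error counts, since the report's actual per-article spread is not used here.
n = 42
for name, mean_errors in [("Britannica", 3.0), ("Wikipedia", 4.0)]:
    se = math.sqrt(mean_errors / n)  # standard error of the mean
    lo, hi = mean_errors - 1.96 * se, mean_errors + 1.96 * se
    print(f"{name}: {mean_errors:.1f} errors/article, 95% CI ~ ({lo:.2f}, {hi:.2f})")
```

Under this assumption the two confidence intervals overlap (roughly 2.5–3.5 versus 3.4–4.6), one way of seeing how small a difference a 42-article sample can reliably detect.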
Let's set aside the question whether Wikipedia's quality does, at present, rival Britannica's. One might argue that, even if it doesn't, there is still no prima facie reason to give experts any special role in the project. To give authority to people simply on the basis of their expertise is—as Wikipedians often say—simply "credentialism," and no more rational than rejecting an application from a stellar programmer simply because he lacks a B.S. in Computer Science. People should be judged based on their demonstrated abilities, not degrees.
But I can agree with that. There is no reason whatsoever to insist on any simpleminded approach to identifying experts. Some of the finest programmers in the world lack any computer science degrees, and it would be silly to fail to recognize that fact. But there is no reason why a content creation system could not recognize as a "credential," or as proof of expertise, all manner of evidence, not just degrees.
Similarly, Wikipedians have a sort of moral argument for their dabblerism: they say, sometimes, that it is only fair to judge people based on what they do, not who they are. Meritocracy is the only fair way to justify differing levels of editorial authority in open projects; and a genuine meritocracy would assign authority not based on "credentials," but only based on what people have demonstrated they can do for the project. It is wrong and unfair to hand out authority based on credentials.
But, interestingly, Wikipedians can't help themselves to this argument. If they are fully committed to dabblerism, then they cannot justify different levels of editorial authority on any grounds. Dabblerism, as I said, is the view that no one should have any special role or authority in a content creation system simply on account of their expertise. But we can easily identify, as a kind of expertise, the proven ability to do excellent work. So dabblerism, as I defined it, is incompatible with meritocracy itself. There's another way to state this line of thought. Define "credential" as "evidence of expertise." If we reject the use of credentials, we reject all evidence of expertise; ergo, lacking any means of establishing who is an expert, we reject expertise itself. Meritocrats are necessarily expert-lovers.
I find the moral argument annoying for another reason, however. It implies that degrees, certificates, licenses, association memberships, papers, books, presentations, awards, and all other possible evidence of expertise—the whole gamut of "credentials"—just don't matter. They don't constitute good evidence of anything. But if they don't count as good evidence of expertise, why should the ability to do something on behalf of a mere Internet project count as good evidence? There is a bizarre reversal in the insular world of Wikipedia: mere quantity of work is a credential there, but not for academic tenure and advancement committees; meanwhile, degrees and peer-reviewed papers are credentials for tenure and advancement committees, but not for Wikipedia and its ilk. (Wikipedians will protest that quantity of work doesn't really matter. But, of course, it very much does.)
The last hope for rescuing dabblerism might come in the form of an argument that the use of experts will render the project less collaborative; it will "kill the goose that lays the golden eggs." Wiki-style collaboration requires that there be no differences in authority. According to this argument, we are committed to dabblerism if we want to enjoy the fruits of bottom-up collaboration.
But this is little better than an untested prejudice. The notion that experts cannot play a gentle guiding role in a genuinely bottom-up collaborative project seems to be plain old bigotry. No doubt this prejudice stems from a fear that experts will twist what should be an efficient process into the sort of slow, top-down, bureaucratic drudgery that they are used to. But this needn't be the case. Surely it isn't impossible for professors to exit the cathedral—to borrow Eric Raymond's metaphor in his essay "The Cathedral and the Bazaar"—and wander the bazaar, offering guidance and highlighting what is excellent. Will that necessarily make the bazaar less of a bazaar?
None of these arguments, dismissing special roles for experts in encyclopedia projects, is any good. The support for dabblerism—as I've defined the term—would appear irrational. Is it really?
Here's a little dilemma. Wikipedia pooh-poohs the need for expert guidance; but how, then, does it propose to establish its own reliability? It can do so either by reference to something external to itself or else something internal, such as a poll of its own contributors. If it chooses something external to itself—such as the oft-cited Nature report—then it is conceding the authority of experts. In that case, who is it who says "we know"? Experts, at least partially: their view is still treated as the touchstone of Wikipedia's reliability. And if it concedes the authority of experts that far, why not bring those experts on board in an official capacity, and do a better job?
If, on the other hand, Wikipedia proposes to establish its own reliability "internally," for example through polls of its contributors, or through sheer quantity of edits, it has a ridiculously untenable position. The position entails that the word of an enormous, undifferentiated, and largely anonymous crowd of people is to be trusted, or held reliable, for no other reason than that it is such a crowd. It is one thing to argue for "the wisdom of crowds" by reference to an objective benchmark. It is quite another thing to maintain that crowds are wise simply because they are crowds. That is a philosophical view, a variety of relativism, according to which the only truth there is, the only facts there are, are literally "socially constructed" by crowds like the contributors to Wikipedia.
It's this view that Stephen Colbert was able to mock so effectively and hilariously as "wikiality": reality is what the wiki says it is. Colbert has in effect added to what "we all know." By brilliantly skewering the notion that facts are whatever Wikipedians want them to be, Colbert has added to our culture's modest stock of background knowledge—about philosophy. Thanks to Colbert, we all know now that reality isn't created by a wiki. That's no mean feat for a humorist.
But nobody really believes that reality is constructed by Wikipedia. Instead, Wikipedians attempt to take my dilemma by the horns, supporting the credibility of Wikipedia's content through a combination of both external and internal means. They insist that footnotes suffice to support an article. If a fact has been supported by a footnote, then, apparently, it is credible. This, we might say, is an external means of fact-checking; but it is up to rank-and-file Wikipedians, not any fancy experts, to add and edit the footnotes, and so it's also an internal means of fact-checking. So, where's the dilemma?
The dilemma is easy to apply here, too. If Wikipedians actually believe that the credibility of articles is improved by citing things written by experts, will it not improve them even more if people like the experts cited are given a modest role in the project? And, on the other hand, if (somehow) it is not the fact that the cited references were created by experts that lends them credibility, one has to wonder what the references are for. They have a mysterious, talismanic value, apparently. It seems that we all know that footnotes make articles much more credible—but why? Whatever the reason, Wikipedians wouldn't want to say that it's because the people cited are credible authorities on their subjects.
The dilemma Wikipedia finds itself in, then, is that if it wants to establish its credibility by reference to expert opinion, then it has no reason not to invite experts to join in some advisory capacity. But this is completely intolerable for Wikipedians. Now, why is that?
Wikipedia is deeply egalitarian. One of its guiding principles is epistemic (knowledge) egalitarianism. According to epistemic egalitarianism, we are all fundamentally equal in our authority or rights to articulate what should pass for knowledge; the only grounds on which a claim can compete against other claims are to be found in the content of the claim itself, never in who makes it.
Notice that (on my account) this is a doctrine about rights or authority, not about ability; it would be simply absurd to say that we are equal in ability to declare what should pass for knowledge. Someone who has never had a course in physics is unlikely to be equal to a Nobel laureate in physics in his ability to declare what is known about physics. But epistemic egalitarianism would hold them equal in rights—for example, in the right to change a wiki page about a topic in physics—nonetheless.
Note also that epistemic egalitarianism doesn't declare we have the right to say what really is known—that too would be absurd—but only what passes for knowledge, or what is presented as known, for example through Wikipedia's mechanisms, or through a Blogosphere that operates much like a democratic popularity contest. In fact, Wikipedia is the perfect vehicle for epistemic egalitarianism, since it allows virtually everyone to edit virtually any page. Granted, Wikipedia's "administrators" have rights that others do not have; but it is perhaps as egalitarian as it's possible for a project of its scale to be.
It is precisely the fact that it speaks about our rights to declare what passes for knowledge that makes epistemic egalitarianism a doctrine about the politics of knowledge. So, who says "we know"? We all do.
Put that way, perhaps the appeal of the doctrine should be plain. I began this essay by saying that the power to declare society's background knowledge is awesome, and that many consequential decisions, including political decisions, are deeply influenced by that background knowledge. If the Internet now makes it possible for society's background knowledge to be shaped by a far broader, more open and inclusive group of people, that would seem to be a good thing. Indeed, perhaps it is only an accident of history, not any good reason, that placed the epistemic leadership of society almost exclusively in the hands of a fairly small class of professionals. But now, through another accident of history—the rise of the Internet—the general public may partake in the conversations that determine what "everybody knows." I think this is mostly a positive development.
No doubt the main philosophical reason for epistemic egalitarianism is, like the reason for egalitarianism generally, the now-common and overarching desire for fairness. The desire for fairness creates hostility toward any authority—and not just when authority uses its power to gain an unfair advantage, but toward authority as such. That is, the most radical egalitarians advocate that our situations be made as equal as possible, including in terms of authority. But, in our specialist-friendly modern society, expertise can confer much authority not available to non-experts. Perhaps the most important and fundamental authority experts have is the authority to declare what is known. This authority, then, should be placed in the hands of everyone equally, according to a thoroughgoing egalitarianism.
I support meritocracy: I think experts deserve a prominent voice in declaring what is known, because knowledge is their life. As fallible as they are, experts, as society has traditionally identified them, are more likely to be correct than non-experts, particularly when a large majority of independent experts about an issue are in broad agreement about it. In saying this, I am merely giving voice to an assumption that underlies many of our institutions and practices. Experts know particular topics particularly well. By paying closer attention to experts, we improve our chances of getting the truth; by ignoring them, we throw our chances to the wind. Thus, if we reduce experts to the level of the rest of us, even when they speak about their areas of knowledge, we reduce society's collective grasp of the truth.
It is no exaggeration to say that epistemic egalitarianism, as illustrated especially by Wikipedia, places Truth in the service of Equality. Ultimately, at the bottom of the debate, the deep modern commitment to specialization is in an epic struggle with an equally deep modern commitment to egalitarianism. It's Truth versus Equality, and as much as I love Equality, if it comes down to choosing, I'm on the side of Truth.
GEORGE DYSON [4.24.07]
"The day when an energetic journalist could gather together a few star contributors and a miscellany of compilers of very uneven quality to scribble him special articles, often tainted with propaganda and advertisement, and call it an Encyclopaedia, is past."
So proclaimed H. G. Wells, to the Royal Institution of Great Britain, on November 20th, 1936. With darkness descending across Europe, Wells called for "a World Encyclopaedia... carefully assembled with the approval of outstanding authorities in each subject... alive and growing and changing continually under revision, extension and replacement from the original thinkers in the world."
"How did I come to know what I know about the world and myself?" asked Wells. "What ought I to know? What would I like to know that I don't know? If I want to know about this or that, where can I get the clearest, best and latest information? And where did these other people about me get their ideas about things, which are sometimes so different from mine?"
Wells foresaw (70 years ahead of Web 2.0) that "the whole human memory can be, and probably in a short time will be, made accessible to every individual" and urged us to build a "universal organization and clarification of knowledge and ideas... a World Brain which will... have at once the concentration of a craniate animal and the diffused vitality of an amoeba."
But the wisdom of crowds did not look promising in 1936. Wells's World Encyclopedia—more Citizendium than Wikipedia—would be governed by an editorial board, with experts at the helm. Non-specialists need not apply. "In a burning hotel or cast away on a desert island they [non-specialists] would probably do quite as well. And yet collectively they would be ill-informed."
Gresham's Law (see Wikipedia) holds that bad money drives out good. Wales's Law (that's Jimmy Wales, founder—or is it co-founder—of Wikipedia) holds that bad information will be driven out by good. But we have ample evidence of the reverse. Wikipedia or Citizendium? Wells or Wales? The difference is in the metadata—and the meta-metadata that determines who has expertise over expertise. Endless regress, or an asymptotic approach to the limits of truth?
"Ten years ago I started a company based on the assumption that people are basically good," announced eBay founder Pierre Omidyar in 2004. "And now I have the data to prove it." Anyone who has been defrauded on eBay is entitled to disagree. But, overall, the evidence from eBay is that most people will deal fairly. Yet a few will always cheat. So it is with Truth.
The Wikipedia community seeks equality and truth. Can they make this work? Many see evidence of failure. I see evidence that it could work. The current ailments are ones that better layers of metadata—attributions, references, and differentiation of facts from beliefs—could largely cure.
It is not just what we know that's important, it is what we don't. We not only need an encyclopedia of knowledge, we need an encyclopedia of ignorance, too. If our ignorance is not mislabeled, cataloging it in one place can be a useful tool. Wikipedia is "the Encyclopedia for the rest of us"—thus its popular appeal, and its aggravation of some who are professionals at seeking truth.
"We are still too close to the beginning of the universe to be certain about its death," wrote J. D. Bernal in 1929. And we are too close to the beginning of Wikipedia (and Citizendium) to determine which—if either—is the path to truth.
JARON LANIER [4.22.07]
He's charitable in characterizing his opponents as "egalitarian." By way of analogy, a clunky communist economy that makes everyone equally poor is not egalitarian in any admirable way, and neither is a sloppy information architecture that gives everyone equal access to creating and receiving mediocre information.
My problem with the Wikipedia was not primarily with the questions of expertise or accuracy when I wrote "Digital Maoism." Instead I was worried about the reduced expectations people seemed to have of themselves in the context of "Web 2.0." Why tweak a wiki or add data to some other conglomerate site when you now have the ability to really write and be read? Why choose to become part of an anonymous mush when you can finally be known?
Since I wrote the essay, I have paid more attention to the question of quality on the Wikipedia, and I must say, it is worse than I thought based on my earlier experience. In the areas where I have detailed knowledge, such as regarding certain obscure musical instruments, the Wikipedia is not just unreliable but unreliable in an insidious way.
Entries are often just askew enough to screw someone up who might be trying to appreciate a recording better, or trying to get the background on an exotic instrument seen on stage. The Wikipedia has found a way to efficiently enable the fallacy of specious accuracy in text. (This fallacy used to be more familiar in the domain of numbers.) Numerous mistakes occur below the threshold of detail found in conventional encyclopedias or online sources, and so are hard to check, making "edit wars" excruciating. But at the same time the Wikipedia is made to appear vast and authoritative.
Another way to make the same point: The value of a good summary article is in the choice of what details to leave out. The Wikipedia is useless in this regard.
I know, I know, why don't I just go in to try to fix the problem entries I come upon? Because when you do that you have to engage in the aforementioned edit wars with anonymous people who are typically headstrong and have more time than I do to fight (but not enough time to do sufficiently thorough independent research, it seems.)
Well, ok, I just looked up one instrument (chosen at random by spinning a bottle in my instrument room); the not-at-all-obscure Chinese mouth organ "Sheng." As of this evening, the entry is typical for the Wikipedia. There is plenty of circumstantially selected, impressively detailed information, including names of some Europeans who brought shengs to the West in the 1700s and so on. But the overall effect is misleading. The emphasis is random.
For instance, the models and tunings of shengs listed are relevant in some recent contexts (when there have been Chinese instrument factories innovating to serve a modern and somewhat Western-influenced movement of music education and performance) but even within that framework, the details are hardly complete. The hot news among my Sheng-playing friends in the last few years has been the amazing innovation in models like the Hong Liang Zhao 38 key gaoyin, which are changing ideas about what can be played on the instrument.
An online exposition of modern keyed shengs ought to at least mention that the sheng world is caught up right now in a period of rapid transformation. Much more importantly, the very long history of the sheng, which includes many forms, tunings, and earlier influences on the West (going back to classical times) is not even suggested.
Of course once a Wikipedia inadequacy gets publicized, like the charming but incorrect claim that I'm a filmmaker, it gets fixed right away. It's like when a politician publicly helps a needy family now and then in front of the cameras, while leaving millions of other invisible families without health insurance. At some point you want to stop feeding such a politician families to use for publicity—and I feel the same way about trying to get faulty Wikipedia entries fixed by publicizing them.
For me, though, there is a more profound problem, and in this case my concerns are not entirely addressed by Sanger's project: Why recreate something which already exists, like an encyclopedia, when there are opportunities to create profoundly new things, like virtual simulations of the world, such as the Mirror Worlds proposed by David Gelernter? The same question can be asked about the open software community's obsessions with such things as UNIX and browsers. Maybe the human spirit isn't quite expansive enough to be revolutionary in the creative sense and the economic sense at the same time.
If there is a choice to be made, I am with Sanger. Economics and politics are only means to an end, so they shouldn't be prioritized over deeper, more beautiful stuff.
...the third culture is alive and in the heat of development. Books by Richard Dawkins, Daniel C. Dennett, Jared Diamond, Brian Greene, Steven Pinker, Martin Rees, and others are indispensable not only for their information; they are also great successes in the bookstore. Their subjects deal with the main controversies of the Western world in recent decades: abortion and euthanasia, demographic policy, the widening gap between rich and poor countries, pacifism, migration, racism and xenophobia, the causes of the ecological crisis, and the implications of technology—all of which lead to calls for an ethics of responsibility and for social control of science policy.
The worldwide phenomenon of the third culture is not only the irruption of natural scientists into the postmodern intellectual scene, but a movement toward a global intellectual vision driven by the intensive use of images and hypermedia in human communication, which has allowed the scientific knowledge of the second half of the 20th century to permeate all of society, putting information to work in confronting the great universal challenges of the 21st century.
Moroccan authorities believe the village of Tetuan has sent as many as 30 suicide bombers to Iraq. Scott Atran, senior fellow at City University of New York's Center on Terrorism, briefed the National Security Council on the issue in March.
random chaos from hindsight
...Yet it is Taleb's assault on traditional historiography that is most relevant here. Since Thucydides, it is true, historians have encouraged us to explain low-probability calamities (like wars) after the fact. Such storytelling helps us to make sense of a random disaster. It also enables us to apportion blame. Generations of historians have toiled in this way to explain the origins of such great calamities as, say, World War I, constructing elegant narrative chains of causes and effects, heaping opprobrium on this or that statesman.
There is something deeply suspect about this procedure, however. It results in what Taleb calls the "retrospective distortion." These causal chains were quite invisible to contemporaries, to whom the outbreak of war came as a bolt from the blue. The point is that there were umpteen Balkan crises before 1914 that didn't lead to Armageddon. Like Cho, the Sarajevo assassin Gavrilo Princip was a black swan — only vastly bigger. ...
brutal truth is out at last
...So, Professor Zimbardo stopped the experiment because he risked losing the woman he loved. He calls Dr Maslach a hero for challenging the wisdom that the experiment was a justifiable study of human nature. And it has led him, he tells the Edge website (www.edge.org), to consider the flip side of evil: the psychology of heroism. ...
How Safe Is the Race to Send Tourists Into Space?
But how safe is the space tourism business? The Wall Street Journal Online invited Patricia Smith, who heads the Federal Aviation Administration office responsible for overseeing the nascent industry, to discuss the topic with space entrepreneur Peter Diamandis, a co-founder of Space Adventures and chairman of the X PRIZE Foundation, which awarded a $10 million prize to Burt Rutan's SpaceShipOne in 2004. Their conversation, carried out via email, is below. ...
Larry Brilliant, M.D., M.P.H.
...What are we to do? We need to reduce population growth through education, choices and widely available contraception. We must invent better desalinization methods and prepare for a new Green Revolution with seeds that will thrive in a brackish world. The attack on climate change must dramatically increase financial flows to clean energy--particularly in the developing world, most vulnerable to the impacts of climate change. Carbon trading systems can spur this clean energy transition, as well as restore forest and agricultural lands. And we must develop an early-disease-detection-and-response system, a sort of AWACS for epidemics. That requires networked technology plus a human network of cooperative efforts by scientists, governments, nonprofit groups, businesses and ordinary people. ...
...Openness and trust are at the core of Wikia Search. But how do you maintain a friendly system and still keep out troublemakers? Pretty much the same way Wikipedia is policed. Curse words, blanking pages, false inserts and other forms of delinquency are pretty easy to deal with. Admins, the thousand or so cops on the site, can block difficult users, delete entries, take away editing privileges from certain users, revert (or restore) entries. The core community of contributors is vigilant, too, and fixes improprieties quickly--often within seconds. That's because this community rallies around a couple of core concepts: neutrality and quality. The system isn't perfect, of course, as anyone who has followed various Wikipedia news headlines knows. There are errors, and sometimes quite embarrassing ones. But the system has a way of ferreting out errors and correcting them. ...
...The next Web--the Worldbeam, we call it--will resemble today's Web imploded or, if you prefer, turned inside out. It will be a single global "information beam." Every Web page ever posted is in this beam. Whenever someone updates a page or designs a new one, it is added to the end. The Worldbeam is a stream of many separate documents--or a beam with many documents dissolved in it, held in suspension. Both metaphors are useful. ...
Strange looks and funny lines from the past week
...N.N.T., who lives in New York and has taught at the University of Massachusetts at Amherst, previously traded derivatives on Wall Street. The academics who drive him to tears are the ones who have explained—or misexplained—his old profession. They think that markets are from Mediocristan when in fact they inhabit Extremistan.
Say what? Mediocristan is the terrain of the ordinary, the part of the world that conforms to the bell curve. It answers to statistics and knowable probabilities. Height resides in Mediocristan. You may find one 7-footer on your block, almost certainly not two. Experience (and biology) enable us to frame the odds. Weight is also from Mediocristan. Pick any 1,000 people and their average weight will be close to that of the general population (even if you include the world’s fattest person). Personal wealth, however, is from Extremistan. For instance, the average wealth of 1,000 people will be very different if one of those people is Bill Gates.
This distinction is potent. In Extremistan, past events are a faulty guide to projecting the future. Gates may be the world’s richest person, but it isn’t unthinkable that someday, someone (at Google, perhaps?) will be twice as rich. Wars also reside in Extremistan. Prior to World War II, the planet had never experienced a conflict as terrible. Then we did. Suppose you frequent a pond. Day after day you see swans—always white. Naturally (but incorrectly) you presume that all swans are white. World War II was a black swan—horrific and unpredictable.
John Horgan and George Johnson discuss Virginia Tech and gun control: John is horrified by Mickey's idiocy; Are we natural-born killers?; A history of violence (theories); The nature and nurture of evil; A primitive Amazonian tribe asks: Does Noam Chomsky know what he's talking about?; A spiritually uplifting conclusion. (1:03:35)
Jimmy Wales helped create Wikipedia, the interactive online encyclopedia founded in 2001. Users write and edit Wikipedia entries themselves; the site also has a dedicated corps of editors. There are often "edit wars" over entries — some, including the one headlined "2006 Lebanon War," have been edited and then re-edited thousands of times — and Wikipedia's accuracy has been questioned by some professors and colleges, who forbid students to cite it as a source. But Wikipedia, with versions in 250 languages, is one of the top 10 sites on the Internet.
A slump in Chinese stocks on Feb. 27 triggered the worst week for U.S. equities in more than four years and the biggest one-day jump in volatility ever -- the financial equivalent of a butterfly's flapping wing in New Delhi causing a hurricane in North Carolina.
In "The Black Swan: The Impact of the Highly Improbable,'' Nassim Nicholas Taleb argues that we are dangerously blind to the possibility of unlikely events, and reluctant to accept their unpredictability when they do occur. It is a seductive thesis.
In an original EDGE essay, Wikipedia co-founder Larry Sanger claims that the Web's ability to aggregate public opinion and knowledge into some form of "collective intelligence" is leading to a new politics of knowledge. According to Sanger, the power to establish what "we all know" is shifting out of the hands of a small elite group and becoming more of a conversation open to anyone with a Net connection. However, Sanger is also the founder of Citizendium, a competitor to Wikipedia that, according to its Web site, "aims to improve on [the Wikipedia] model by adding 'gentle expert oversight' and requiring contributors to use their real names." In this essay, titled "Who Says We Know: On The New Politics Of Knowledge," Sanger argues that a lack of "expert" oversight leads to unreliable information, something he sees as a major flaw in knowledge egalitarianism. I'm sure this essay will spark as much fiery debate as the previous essay in this EDGE series, Jaron Lanier's "Digital Maoism." ...
In short, the killings at Virginia Tech happen at a moment when we are renegotiating what you might call the Morality Line, the spot where background forces stop and individual choice — and individual responsibility — begins. The killings happen at a moment when the people who explain behavior by talking about biology, chemistry and social science are assertive and on the march, while the people who explain behavior by talking about individual character are confused and losing ground. And it’s true. We’re never going back. We’re not going to put our knowledge of brain chemistry or evolutionary psychology back in the bottle. It would be madness to think Cho Seung-Hui could have been saved from his demons with better sermons.
But it should be possible to acknowledge the scientists’ insights without allowing them to become monopolists. It should be possible to reconstruct some self-confident explanation for what happened at Virginia Tech that puts individual choice and moral responsibility closer to the center.
After all, according to research by David Buss, 91 percent of men and 84 percent of women have had a vivid homicidal fantasy. But they didn’t act upon it. They don’t turn other people into objects for their own fulfillment. There still seem to be such things as selves, which are capable of making decisions and controlling destiny. It’s just that these selves can’t be seen on a brain-mapping diagram, and we no longer have any agreement about what they are.