WHO SAYS WE KNOW: ON THE NEW POLITICS OF KNOWLEDGE
I
There are a lot of things that "everybody knows." Everybody knows that Everest is the tallest mountain on Earth, that 2+2=4, that most people have two eyes—and a lot of other things. If I were to go on, it would get tedious very fast, because, after all, these are things that everybody knows.
But there are also a lot of other things that "everybody knows," except that not everybody agrees that everybody knows them. For example, everybody knows not only that there has been significant global warming recently, but also that human beings caused this by burning fossil fuels. We know that evolution is as solidly proven as most of the rest of science, and that intelligent design isn't science at all; that Iraq no longer had any weapons of mass destruction (having already destroyed them); and that the U.S. government had nothing to do with the destruction of the World Trade Center. Except that, for each of these things "we all know," significant minorities insist that they're false.
Those dissenters, however, don't matter much when it comes to most journalism, reference, and education. Society forges ahead, reporting and teaching these things usually without mentioning the dissenters, or mentioning them only in a disparaging light. As a result, certain claims that some of us don't accept end up being background knowledge, as I'll call it. If you question such background knowledge, or even express some doubt about it, you'll look stupid, crazy, or immoral. Maybe all three.
To be able to determine society's background knowledge—to establish what "we all know"—is an awesome sort of power. This power can shape legislative agendas, steer the passions of crowds, educate whole generations, direct reading habits, and tar as radical or nutty whole groups of people who otherwise might seem perfectly normal. Exactly how this power is wielded and who wields it constitutes what we might call "the politics of knowledge." The politics of knowledge has changed tremendously over the years. In the Middle Ages, we were told what we knew by the Church; after the printing press and the Reformation, by state censors and the licensers of publishers; with the rise of liberalism in the 19th and 20th centuries, by publishers themselves, and later by broadcast media—in any case, by a small, elite group of professionals.
But we are now confronting a new politics of knowledge, with the rise of the Internet and particularly of the collaborative Web—the Blogosphere, Wikipedia, Digg, YouTube, and in short every website and type of aggregation that invites all comers to offer their knowledge and their opinions, and to rate content, products, places, and people. It is particularly the aggregation of public opinion that instituted this new politics of knowledge. In the 90s, lots of people posted essays on their personal home pages, put up fan websites, and otherwise "broadcast themselves." But what might have been merely vain and silly a decade ago is now, thanks to aggregation of various sorts, a contribution to an online mass movement. The collected content and ratings resulting from our individual efforts give us a sort of collective authority that we did not have ten years ago.
So today, if you want to find out what "everybody knows," you aren't limited to looking at what The New York Times and Encyclopedia Britannica are taking for granted. You can turn to online sources that reflect a far broader spectrum of opinion than that of the aforementioned "small, elite group of professionals." Professionals are no longer needed for the bare purpose of the mass distribution of information and the shaping of opinion. The hegemony of the professional in determining our background knowledge is disappearing—a profound truth that not everyone has fully absorbed.
The votaries of Web 2.0, and especially the devout defenders of Wikipedia, know this truth very well indeed. In their view, Wikipedia represents the democratization of knowledge itself, on a global scale, something possible for the first time in human history. Wikipedia allows everyone equal authority in stating what is known about any given topic. Their new politics of knowledge is deeply, passionately egalitarian.
Today's Establishment is nervous about Web 2.0 and Establishment-bashers love it, and for the same reason: its egalitarianism about knowledge means that, with the chorus (or cacophony) of voices out there, there is so much dissent, about everything, that there is a lot less of what "we all know." Insofar as the unity of our culture depends on a large body of background knowledge, handing a megaphone to everyone has the effect of fracturing our culture.
I, at least, think it is wonderful that the power to declare what we all know is no longer exclusively in the hands of a professional elite. A giant, open, global conversation has just begun—one that will live on for the rest of human history—and its potential for good is tremendous. Perhaps our culture is fracturing, but we may choose to interpret that as the sign of a healthy liberal society, precisely because knowledge egalitarianism gives a voice to those minorities who think that what "we all know" is actually false. And—as one of the fathers of modern liberalism, John Stuart Mill, argued—an unfettered, vigorous exchange of opinion ought to improve our grasp of the truth.
This makes a nice story; but it's not the whole story.
As it turns out, our many Web 2.0 revolutionaries have been so thoroughly seized with the successes of strong collaboration that they are resistant to recognizing some hard truths. As wonderful as it might be that the hegemony of professionals over knowledge is lessening, there is a downside: our grasp of and respect for reliable information suffers. With the rejection of professionalism has come a widespread rejection of expertise—of the proper role in society of people who make it their life's work to know stuff. This, I maintain, is not a positive development; but it is also not a necessary one. We can imagine a Web 2.0 with experts. We can imagine an Internet that is still egalitarian, but which is more open and welcoming to specialists. The new politics of knowledge that I advocate would place experts at the head of the table, but—unlike the old order—would give the general public a place at the table as well.
II
We want our encyclopedias to be as reliable as possible. There's a good reason for this. Ideally, we'd like to be able to read an encyclopedia, believe what it says, and arrive at knowledge, not error. Now, according to one leading account of knowledge called "reliabilism," associated with philosophers like Alvin Goldman and Marshall Swain, knowledge is true belief that has been arrived at by a "reliable process" (say, getting a good look at something in good light) or through a "reliable indicator of truth" (say, proper use of a calculator).
Reliability is a comparative quality; something doesn't have to be perfectly reliable in order to be reliable. So, to say that an encyclopedia is reliable is to say that it contains an unusually high proportion of truth versus error, compared to various other publications. But it can still contain some error, and perhaps a high enough proportion of error that—as many have said recently—you should never use just one reference work if you want to be sure of something. Perhaps, if one could know that an encyclopedia were perfectly reliable, one could get knowledge just by reading, understanding, and believing it. What a wonderful world that would be. But I doubt both that there is a way of knowing that about an encyclopedia and that humanity will ever be blessed with such a reference work. Call such a thing a perfect encyclopedia. Well, there is no such thing as a perfect encyclopedia, and if there were, we'd never know if we were holding one.
Why not? Well, when we say that encyclopedias should state the truth, do we mean the truth itself, or what the best-informed people take to be the truth—or perhaps even what the general public takes to be the truth? I'd like to say "the truth itself," but we can't simply point to the truth in the way we can point to the North Star. Some philosophers, called pragmatists, have said there's no such thing as "the truth itself," and that we should just consider the truth to be whatever the experts opine in "the ideal limit of inquiry" (in the phrase of C. S. Peirce). While I am not a pragmatist in this philosophical sense, I do think that it is misleading to say simply that encyclopedias aim at the truth. We can't just leave it at that. Unfortunately, statements do not wear little labels reading "True!" and "False!" We need a criterion of encyclopedic truth—a method whereby we can determine whether a statement in an encyclopedia is true.
Let's suppose our criterion of encyclopedic truth is encoded in how encyclopedists decide whether to publish a statement. The method no doubt used by Encyclopedia Britannica and many other reference works goes something like this. If an expert article-writer states that p is true, and the editors find p plausible, and p gets past the fact-checkers (who consult other experts and expert-compiled resources), then p is true, at least as far as this encyclopedia is concerned.
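To make that criterion concrete, here is a minimal sketch in Python; it is my own illustration, not a description of any encyclopedia's actual workflow, and the function names and three-stage structure are assumptions introduced purely for the example.

```python
# A toy model of the editorial criterion sketched above: a statement p counts as
# "encyclopedically true" if an expert asserts it, the editors find it plausible,
# and it survives fact-checking against expert-compiled resources.
# All three predicates are hypothetical stand-ins, not any real editorial API.

from typing import Callable

def encyclopedically_true(
    p: str,
    expert_asserts: Callable[[str], bool],
    editors_find_plausible: Callable[[str], bool],
    passes_fact_check: Callable[[str], bool],
) -> bool:
    """Return True if p clears every stage of the (fallible) editorial process."""
    return expert_asserts(p) and editors_find_plausible(p) and passes_fact_check(p)

# Example: each stage is only a fallible proxy for truth, which is the point below.
claim = "The Nile is the longest river in the world."
print(encyclopedically_true(
    claim,
    expert_asserts=lambda s: True,          # an expert contributor wrote it
    editors_find_plausible=lambda s: True,  # the editors saw nothing odd
    passes_fact_check=lambda s: True,       # it matched the reference shelf
))  # -> True, "at least as far as this encyclopedia is concerned"
```

Note that the function returns a verdict about publication, not about reality; that gap is exactly the problem taken up next.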
The problem is that this is a highly fallible process. Sometimes, we discover that p is false. Sometimes, it's false because somebody made a typo or misinterpreted expert opinion; but sometimes it's false because, though faithful to expert opinion, expert opinion itself turned out to be false. Even if there were a beautifully reliable method of capturing expert opinion, that wouldn't be an infallible criterion of encyclopedic truth, because expert opinion is frequently wrong. Unfortunately, as a society, we usually can't do any better: if we learn that expert opinion is wrong, the corrected view becomes the new expert opinion. Besides, experts disagree about a lot of things. It is presumptuous, and a great disservice to readers, for editors to choose one expert to believe over another.
So we shouldn't say that encyclopedias aim to capture either the truth itself or any perfectly reliable indicator of truth. That is too much to hope for from an encyclopedia. Instead, consider: what do we most want, as responsible, independent-minded researchers, out of an encyclopedia? Primarily, I think most of us want mainstream expert opinion stated clearly and accurately; but we don't want to ignore minority and popular views, either, precisely because we know that experts are sometimes wrong, even systematically wrong. We want well-agreed facts to be stated as such, but beyond that, we want to be able to consider the whole dialectical enchilada, so that we can make up our minds for ourselves.
Notice that the word "expert" is used in various ways. For instance, journalists, interviewers, and conference organizers—people trying to gather an audience, in other words—use "expert" to mean "a person we can pass off as someone who can speak with some authority on a subject." Also, we say the "local expert" on a subject is the person who knows most, among those in a group, about the subject. Neither of these is the particularly interesting sense of "expert."
We also speak of experts in the credentials sense, that is, any person who meets a (vague) standard of credentials, or evidence of having studied (or practiced) some matter, to whatever extent is thought needed for expertise—for example, as defined by professional organizations. And finally, surely we also speak of experts in a more objective sense: someone who really does have expert knowledge of a subject, whatever that amounts to. On my view, objective expertise amounts to something like this: if we rely on the expert's opinion in matters of their expertise, that really does increase the probability we have the truth.
The hope is that expertise in the credentials sense is a good but imperfect sign of expertise in the objective sense. Personally, I am not so cynical as to deny this. So, I believe that if someone meets a certain standard of credentials about some topic, then that person is probably more reliable on that topic than someone picked at random. Bear in mind, however, that "credentials" should be construed very broadly, and can mean much more than simply degrees and certifications.
Encyclopedias should represent expert opinion first and foremost, but also minority and popular views. Here, surely we are stuck with the credentials sense of "expert opinion." Just as statements do not bear labels announcing their truth values, people do not bear labels announcing their objective expertise. When decision makers have to decide whether a person really is, objectively, an expert, they have to use evidence that they can agree upon. But any such evidence can count as a "credential" in a broad sense. No doubt some wholly uncredentialed people have expertise in the objective sense—some autodidacts must fit the bill. Moreover, it's surely possible for other people to come to recognize such hidden expertise. But when groups must make decisions about who is an expert, they must have evidence; if evidence of expertise, or credentials, is lacking, the decision makers cannot be expected to acquaint themselves deeply with each person individually. And what if someone who is unquestionably an expert does interview, and declare to be an expert, a wholly uncredentialed autodidact? Then that opinion is the autodidact's first credential.
Even given this goal, why not simply grant the authority to articulate what we know to experts, as Britannica does? Can't experts do a good job of representing mainstream and minority expert views as well as popular views? Or, on the other hand, why not give this authority to the general public, as Wikipedia does? Can't the general public in time get expert opinion right?
First, why open up encyclopedia projects to the general public? While the whole body of people called "experts" (in any very restrictive sense) is probably capable of writing about and representing the interests and views of the larger public, the trouble is that its members won't actually want to do so, or won't have the time to do so, in as much detail as the public itself is capable of. It is difficult and tedious enough for experts to cover their own areas. While there are people with expertise about popular culture—from celebrity journalists, to video game designers, to master craft workers—there are far more people who can do a good job summarizing information about "popular" topics than there are experts about them. Similarly, there are usually a number of experts about theories that are far out of the mainstream—one thinks of people who have expert knowledge of astrology, or some kinds of alternative medicine—but again, the quantity of non-expert people able to write reasonably well about such theories is much greater.
I'll have no truck with the view that simply because something is out of the mainstream—unscientific, irrational, speculative, or politically incorrect—it therefore does not belong in an encyclopedia. Non-mainstream views need a full airing in an encyclopedia, despite the fact that "the best expert opinion" often holds them in contempt, if for no other reason than to give us better grounds on which to reject them. Moreover, as we are responsible for our own beliefs, and as the freedom to believe as we wish is essential to our dignity as human beings, encyclopedias do not have any business making decisions for us that we, who wish to remain as intellectually free as possible, would prefer to make ourselves.
There is another reason to engage the public: due to its sheer size, the public can contribute enormous breadth and extra eyeballs for all sorts of the more usually "expert" topics, too. The general public may add a far greater assortment of topics and perspectives than one would get if one assigned only experts to write about only their areas of expertise. Moreover, the sheer quantity of eyeballs gazing at obvious mistakes means that such mistakes will be fixed more quickly and reliably than if one engages only experts working only on their areas of expertise. Finally, and perhaps most importantly, including the general public in an encyclopedia project, and treating all subjects at once, will tend to reduce the insularity common to many specialized fields: the result is that the encyclopedia's readers will be less subject to dogmatic presentations of wrongheaded intellectual fads.
Therefore, the assistance of the general public is needed in encyclopedia projects. Now let's turn to the other group: why are experts needed? Or perhaps a better question is: why is it important to ensure that experts are involved?
Experts, or specialists, possess unusual amounts of knowledge about particular topics. Because of their knowledge, they can often sum up what is known on a topic much more efficiently than a non-specialist can. Also, they often know things that virtually no non-specialist knows; and, due to their personal connections and their knowledge of the literature, they often can lay their hands on resources that extend their knowledge even further.
Another thing that experts can do, that few non-experts can, is write about their specializations in a style that is credible and professional-sounding. Frequently, students and dabblers possess an adequate understanding of a topic, but they are wholly incapable of saying much about it without betraying the limits of their expertise, in one way or another—even if they are superb writers and even if what they say is correct, strictly speaking. This is a common problem on Wikipedia. Furthermore, while a great many specialists are terrible writers, some of the very best writers on any given topic are specialists about that topic. Many experts take great pride in their ability to write about their own fields for non-experts.
Finally, experts are—albeit fallibly—the best-suited to articulate what expert opinion is. It is for the most part experts who create the resources that fact-checkers use to check facts. This makes their direct input in an encyclopedia invaluable.
For these reasons, I believe experts should share the main responsibility of articulating what "we know" in encyclopedia projects; but they should share this responsibility with the general public. Involving both groups in a content production system has the best chance of faithfully representing the full spectrum of expression. To exclude the public is to put readers at the mercy of wrongheaded intellectual fads; and to exclude experts, or to fail to give them a special role in an encyclopedia project, is to risk getting expert opinion wrong.
III
The most massive encyclopedia in history—well, the most massive thing often called an encyclopedia—is Wikipedia. But Wikipedia has no special role for experts in its content production system. So, can it be relied upon to get mainstream expert opinion right?
Wikipedia's defenders are capable of arguing at great length that expert involvement is not necessary. They are entirely committed to what I call dabblerism, by which I mean the view that no one should have any special role or authority in a content creation system simply on account of their expertise. I apologize for the neologism, but there is no word meaning precisely this view. I did not want to use "amateurism," since that word is opposed to "professionalism," and the view I want to discuss attacks not the privileges of professionals, per se, but those of experts. The issue here is not whether people should make money from their work, but whether their special knowledge should give them some special authority. To the latter, dabblerism says no.
Wikipedia's defenders have a great many arguments for dabblerism: non-experts can create great things; the "wisdom of crowds" makes deference to experts unnecessary; studies appear to confirm this in the case of Wikipedia; there is no prima facie reason to give experts any special role; it is only fair to judge people by what they do, and not by their credentials; and making a role for experts will actually ruin the collaborative process.
Not one of these arguments is any good.
First, it is absolutely true that dabbleristic (if you will), expert-spurning content creation systems can create amazing things. That's what Web 2.0 is all about. While many might sneer at these productions generally, Web 2.0 has created some quite useful and interesting websites. Wikipedia and YouTube aren't popular for nothing, and for many people they are endlessly fascinating.
This does not go the slightest way toward showing, however, that expert guidance is unneeded, or that it would not be a positive addition to content creation systems, and particularly to encyclopedia projects. Many people have looked at Wikipedia's articles and concluded that they sure could use work by experts and real editors. It's one thing to say that Wikipedia is amazing and useful; it is quite another to say that we couldn't do better by adding a role for experts.
At this point, my opponent might pull out a very interesting and popular book called The Wisdom of Crowds by James Surowiecki, and say that it shows that Wikipedia has no need of expert reviewers. Surowiecki explains some fascinating phenomena, but nowhere does he say that Wikipedia doesn't need experts. And no surprise: by Surowiecki's own criteria, there's no reason to think that Wikipedia displays "the wisdom of crowds." Let me explain.
In the introduction of the book, Surowiecki describes an agricultural fair in England in 1906, at which all manner of people competed to guess the weight of an ox. There were many non-experts in the crowd, so the average of the guesses should have been ridiculously off; but in fact, while the ox actually weighed in at 1,198 pounds, the average of the guesses was 1,197 pounds. This, Surowiecki says, illustrates a widely recurring phenomenon, in which ordinary folks in great numbers acting independently can display behavior that, in aggregate, is more "wise," or accurate, than the greatest expert among them.
Of course, Surowiecki is no fool. His claim isn't that whatever data "crowds" produce are reliable, regardless of circumstances. Among other things, each member of a "crowd" needs to make decisions independently of the others. But this is precisely how Wikipedia doesn't work. As he writes:
Diversity and independence are important because the best collective decisions are the product of disagreement and contest, not consensus or compromise. An intelligent group, especially when confronted with cognition problems, does not ask its members to modify their positions in order to let the group reach a decision everyone can be happy with.
But that's exactly what happens on wikis, and on Wikipedia. To be able to work together at all, consensus and compromise are the name of the game. As a result, the Wikipedian "crowd" can often agree upon some pretty ridiculous claims, which are very far from both expert opinion and from anything like an "average" of public opinion on a subject. I don't mean to say that the Wikipedia process is not robust and does not produce a lot of correct answers. It is and it does. But the process does not closely resemble the "wise crowd" phenomena that Surowiecki is explaining.
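To make the contrast concrete, here is a small simulation (my own sketch, not anything Surowiecki offers) showing why independence matters: independent, noisy guesses at the ox's weight average out close to the truth, while guesses pulled toward a consensus anchored on one confident early voice do not. All of the numbers except the ox's weight are invented for illustration.

```python
import random

random.seed(0)
TRUE_WEIGHT = 1198   # the ox's actual weight, from Surowiecki's anecdote
N_GUESSERS = 800     # hypothetical crowd size, chosen only for illustration

# Independent crowd: each guess is the truth plus a large individual error.
independent = [TRUE_WEIGHT + random.gauss(0, 200) for _ in range(N_GUESSERS)]

# Consensus-driven "crowd": everyone compromises toward an early, confident,
# wrong anchor, a crude stand-in for wiki-style agreement around a vocal editor.
anchor = 950
consensus = [0.8 * anchor + 0.2 * (TRUE_WEIGHT + random.gauss(0, 200))
             for _ in range(N_GUESSERS)]

mean = lambda xs: sum(xs) / len(xs)
print(f"true weight:         {TRUE_WEIGHT}")
print(f"independent average: {mean(independent):.0f}")  # lands within a few pounds of 1,198
print(f"consensus average:   {mean(consensus):.0f}")    # dragged far toward the anchor
```

The point is only that averaging cancels errors when they are independent, which is exactly the condition Surowiecki names and exactly the condition that consensus-and-compromise editing violates.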
Besides, the standard examples demonstrating the strength of group guessing—say, that a classroom's average guess of the number of jelly beans in a jar is better than all individual guesses, or that experts cannot outperform financial markets—do not lend the slightest bit of support to the notion that experts and editors are not needed for publishing or content creation. There are objective facts about the number of jelly beans, or about market prices, that experts can be right or wrong about. But what facts are Wikipedians attempting to describe? Objective facts that you can point to like a stock price in a newspaper? Only rarely. The facts they want to amass are facts contained in the books and articles that, it so happens, they are so keen on citing. Who writes those books and articles? Experts, mostly. To say that expert guidance is not really needed in encyclopedia construction is like saying the opinion of the person who counted out the jelly beans before putting them in the jar is not really useful.
It's easy to be impressed with the apparent quality of Wikipedia articles. One must admit that some of the articles look very impressive, replete with multiple sections, surprising length, pictures, tables, a dry, authoritative-sounding style, and so forth. These are all good things (except for the style). But these same impressive-looking articles are all too frequently full of errors or half-truths, and—just as bad—poor writing and incoherent organization. (Jaron Lanier was eloquent on the latter points in his interesting Edge essay, "Digital Maoism.") In short, Wikipedia's dabblerism, unsurprisingly, often leads to amateurish results.
Some might point to Nature's December 2005 investigative report—often billed as a scientific study, though it was not peer-reviewed—that purported to show, of a set of 42 articles, that whereas the Britannica articles averaged around three errors or omissions, Wikipedia averaged around four. Wikipedia did remarkably well. But the article proved very little, as Britannica staff pointed out a few months later. There were many problems: the tiny sample size, the poor way the comparison articles were chosen and constructed, and the failure to quantify the degree of errors or the quality of writing. But the most significant problem, as I see it, was that the comparison articles were all chosen from scientific topics. Wikipedia can be expected to excel in scientific and technical topics, simply because there is relatively little disagreement about the facts in these disciplines. (Also because contributors to wikis tend to be technically-minded, but this probably matters less than that it's hard to get scientific facts wrong when you're simply copying them out of a book.) Other studies have appeared, but they provide nothing remotely resembling statistical confirmation that Wikipedia has anything like Britannica-levels of quality. One has to wonder what the results would have been if Nature had chosen 1,000 Britannica articles randomly, and then matched Wikipedia articles up with those.
Let's set aside the question whether Wikipedia's quality does, at present, rival Britannica's. One might argue that, even if it doesn't, there is still no prima facie reason to give experts any special role in the project. To give authority to people simply on the basis of their expertise is—as Wikipedians often say—simply "credentialism," and no more rational than rejecting an application from a stellar programmer simply because he lacks a B.S. in Computer Science. People should be judged based on their demonstrated abilities, not degrees.
But I can agree with that. There is no reason whatsoever to insist on any simpleminded approach to identifying experts. Some of the finest programmers in the world lack any computer science degrees, and it would be silly to fail to recognize that fact. But there is no reason why a content creation system could not recognize as a "credential," or as proof of expertise, all manner of evidence, not just degrees.
Similarly, Wikipedians have a sort of moral argument for their dabblerism: they say, sometimes, that it is only fair to judge people based on what they do, not who they are. Meritocracy is the only fair way to justify differing levels of editorial authority in open projects; and a genuine meritocracy would assign authority not based on "credentials," but only based on what people have demonstrated they can do for the project. It is wrong and unfair to hand out authority based on credentials.
But, interestingly, Wikipedians can't help themselves to this argument. If they are fully committed to dabblerism, then they cannot justify different levels of editorial authority on any grounds. Dabblerism, as I said, is the view that no one should have any special role or authority in a content creation system simply on account of their expertise. But we can easily identify, as a kind of expertise, the proven ability to do excellent work. So dabblerism, as I defined it, is incompatible with meritocracy itself. There's another way to state this line of thought. Define "credential" as "evidence of expertise." If we reject the use of credentials, we reject all evidence of expertise; ergo, lacking any means of establishing who is an expert, we reject expertise itself. Meritocrats are necessarily expert-lovers.
I find the moral argument annoying for another reason, however. It implies that degrees, certificates, licenses, association memberships, papers, books, presentations, awards, and all other possible evidence of expertise—the whole gamut of "credentials"—just don't matter. They don't constitute good evidence of anything. But if they don't count as good evidence of expertise, why should the ability to do something on behalf of a mere Internet project count as good evidence? There is a bizarre reversal in the insular world of Wikipedia: mere quantity of work is a credential there, but not for academic tenure and advancement committees; meanwhile, degrees and peer-reviewed papers are credentials for tenure and advancement committees, but not for Wikipedia and its ilk. (Wikipedians will protest that quantity of work doesn't really matter. But, of course, it very much does.)
The last hope for rescuing dabblerism might come in the form of an argument that the use of experts will render the project less collaborative; it will "kill the goose that lays the golden eggs." Wiki-style collaboration requires that there be no differences in authority. According to this argument, we are committed to dabblerism if we want to enjoy the fruits of bottom-up collaboration.
But this is little better than an untested prejudice. The notion that experts cannot play a gentle guiding role in a genuinely bottom-up collaborative project seems to be plain old bigotry. No doubt this prejudice stems from a fear that experts will twist what should be an efficient process into the sort of slow, top-down, bureaucratic drudgery that they are used to. But this needn't be the case. Surely it isn't impossible for professors to exit the cathedral—to borrow Eric Raymond's metaphor in his essay "The Cathedral and the Bazaar"—and wander the bazaar, offering guidance and highlighting what is excellent. Will that necessarily make the bazaar less of a bazaar?
None of these arguments for dismissing special roles for experts in encyclopedia projects is any good. The support for dabblerism—as I've defined the term—would appear irrational. Is it really?
IV
Here's a little dilemma. Wikipedia pooh-poohs the need for expert guidance; but how, then, does it propose to establish its own reliability? It can do so either by reference to something external to itself or else something internal, such as a poll of its own contributors. If it chooses something external to itself—such as the oft-cited Nature report—then it is conceding the authority of experts. In that case, who is it who says "we know"? Experts, at least partially: their view is still treated as the touchstone of Wikipedia's reliability. And if it concedes the authority of experts that far, why not bring those experts on board in an official capacity, and do a better job?
If, on the other hand, Wikipedia proposes to establish its own reliability "internally," for example through polls of its contributors, or through sheer quantity of edits, then its position is ridiculously untenable. The position entails that the word of an enormous, undifferentiated, and largely anonymous crowd of people is to be trusted, or held reliable, for no other reason than that it is such a crowd. It is one thing to argue for "the wisdom of crowds" by reference to an objective benchmark. It is quite another thing to maintain that crowds are wise simply because they are crowds. That is a philosophical view, a variety of relativism, according to which the only truth there is, the only facts there are, are literally "socially constructed" by crowds like the contributors to Wikipedia.
It's this view that Stephen Colbert was able to mock so effectively and hilariously as "wikiality": reality is what the wiki says it is. Colbert has in effect added to what "we all know." By brilliantly skewering the notion that facts are whatever Wikipedians want them to be, Colbert has added to our culture's modest stock of background knowledge—about philosophy. Thanks to Colbert, we all know now that reality isn't created by a wiki. That's no mean feat for a humorist.
But nobody really believes that reality is constructed by Wikipedia. Instead, Wikipedians attempt to take my dilemma by the horns, supporting the credibility of Wikipedia's content through a combination of both external and internal means. They insist that footnotes suffice to support an article. If a fact has been supported by a footnote, then, apparently, it is credible. This, we might say, is an external means of fact-checking; but it is up to rank-and-file Wikipedians, not any fancy experts, to add and edit the footnotes, and so it's also an internal means of fact-checking. So, where's the dilemma?
The dilemma is easy to apply here, too. If Wikipedians actually believe that the credibility of articles is improved by citing things written by experts, will it not improve them even more if people like the experts cited are given a modest role in the project? And, on the other hand, if (somehow) it is not the fact that the cited references were created by experts that makes them worth citing, one has to wonder what the references are for. They have a mysterious, talismanic value, apparently. It seems that we all know that footnotes make articles much more credible—but why? Whatever the reason, Wikipedians wouldn't want to say that it's because the people cited are credible authorities on their subjects.
The dilemma Wikipedia finds itself in, then, is that if it wants to establish its credibility by reference to expert opinion, then it has no reason not to invite experts to join in some advisory capacity. But this is completely intolerable for Wikipedians. Now, why is that?
Wikipedia is deeply egalitarian. One of its guiding principles is epistemic (knowledge) egalitarianism. According to epistemic egalitarianism, we are all fundamentally equal in our authority or rights to articulate what should pass for knowledge; the only grounds on which a claim can compete against other claims are to be found in the content of the claim itself, never in who makes it.
Notice that (on my account) this is a doctrine about rights or authority, not about ability; it would be simply absurd to say that we are equal in ability to declare what should pass for knowledge. Someone who has never had a course in physics is unlikely to be equal to a Nobel laureate in physics in his ability to declare what is known about physics. But epistemic egalitarianism would hold them equal in rights—for example, in the right to change a wiki page about a topic in physics—nonetheless.
Note also that epistemic egalitarianism doesn't declare we have the right to say what really is known—that too would be absurd—but only what passes for knowledge, or what is presented as known, for example through Wikipedia's mechanisms, or through a Blogosphere that operates much like a democratic popularity contest. In fact, Wikipedia is the perfect vehicle for epistemic egalitarianism, since it allows virtually everyone to edit virtually any page. Granted, Wikipedia's "administrators" have rights that others do not have; but it is perhaps as egalitarian as it's possible for a project of its scale to be.
It is precisely the fact that it speaks about our rights to declare what passes for knowledge that makes epistemic egalitarianism a doctrine about the politics of knowledge. So, who says "we know"? We all do.
Put that way, perhaps the appeal of the doctrine should be plain. I began this essay by saying that the power to declare society's background knowledge is awesome, and that many consequential decisions, including political decisions, are deeply influenced by that background knowledge. If the Internet now makes it possible for society's background knowledge to be shaped by a far broader, more open and inclusive group of people, that would seem to be a good thing. Indeed, perhaps it is only an accident of history, not any good reason, that placed the epistemic leadership of society almost exclusively in the hands of a fairly small class of professionals. But now, through another accident of history—the rise of the Internet—the general public may partake in the conversations that determine what "everybody knows." I think this is mostly a positive development.
No doubt the main philosophical reason for epistemic egalitarianism is, like the reason for egalitarianism generally, the now-common and overarching desire for fairness. The desire for fairness creates hostility toward any authority—and not just when authority uses its power to gain an unfair advantage, but toward authority as such. That is, the most radical egalitarians advocate that our situations be made as equal as possible, including in terms of authority. But, in our specialist-friendly modern society, expertise can confer much authority not available to non-experts. Perhaps the most important and fundamental authority experts have is the authority to declare what is known. This authority, then, should be placed in the hands of everyone equally, according to a thoroughgoing egalitarianism.
I support meritocracy: I think experts deserve a prominent voice in declaring what is known, because knowledge is their life. As fallible as they are, experts, as society has traditionally identified them, are more likely to be correct than non-experts, particularly when a large majority of independent experts about an issue are in broad agreement about it. In saying this, I am merely giving voice to an assumption that underlies many of our institutions and practices. Experts know particular topics particularly well. By paying closer attention to experts, we improve our chances of getting the truth; by ignoring them, we throw our chances to the wind. Thus, if we reduce experts to the level of the rest of us, even when they speak about their areas of knowledge, we reduce society's collective grasp of the truth.
It is no exaggeration to say that epistemic egalitarianism, as illustrated especially by Wikipedia, places Truth in the service of Equality. Ultimately, at the bottom of the debate, the deep modern commitment to specialization is in an epic struggle with an equally deep modern commitment to egalitarianism. It's Truth versus Equality, and as much as I love Equality, if it comes down to choosing, I'm on the side of Truth.