Proposal for funding from the AI Ethics Challenge

I submitted the following project proposal for the Artificial Intelligence Ethics Challenge:

Unpacking Wikipedia’s Lessons for Journalism

In its 18 years, Wikipedia has established itself as a titan of the Internet, built on a model utterly different from that of other top websites. This project will help journalists understand that model, empower them to access Wikipedia’s data more effectively, and equip them to draw sharper contrasts with other major platforms.

Major websites like Google, Facebook, and Amazon are well covered by major media; Wikipedia is often covered inaccurately or incompletely. A key ingredient in overcoming this is a better understanding among journalists of how Wikipedia operates and what unique benefits it provides.

This project will have three outputs:

  1. An app facilitating analysis of the growth and refinement of Wikipedia content, including addition of citations, amount of discussion, addition & removal of quality control banners, etc. (a minimal data-access sketch follows this list);
  2. Detailed articulation of a theory of what Wikipedia can teach us about the necessary ingredients for collaboration, which can inform software design for collaborative projects, submitted for publication to media outlets (see here); and
  3. A panel discussion at a major journalism conference, in which journalists and Wikipedia experts will evaluate the role of Wikipedia, including both human and automated edits, in shaping and summarizing knowledge and news.
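
An app like the one described in item 1 would build on data Wikipedia already exposes through the public MediaWiki API. The snippet below is not the proposed app, just a minimal Python sketch of a starting point; the example article title and the crude “citations mentioned in edit summaries” heuristic are illustrative placeholders.

    # Minimal sketch: pull revision metadata for one article from the public
    # MediaWiki API and summarize activity over time. The title and the
    # edit-summary heuristic are placeholders, not project decisions.
    import requests
    from collections import Counter

    API = "https://en.wikipedia.org/w/api.php"

    def revision_metadata(title, limit=500):
        """Fetch metadata for up to `limit` of the article's earliest revisions."""
        params = {
            "action": "query",
            "prop": "revisions",
            "titles": title,
            "rvprop": "timestamp|user|comment|size",
            "rvlimit": limit,
            "rvdir": "newer",  # oldest first
            "format": "json",
        }
        data = requests.get(API, params=params).json()
        page = next(iter(data["query"]["pages"].values()))
        return page.get("revisions", [])

    revs = revision_metadata("Citizen journalism")
    edits_per_year = Counter(rev["timestamp"][:4] for rev in revs)
    cite_mentions = sum(1 for rev in revs if "cite" in rev.get("comment", "").lower())

    print("Edits per year:", dict(sorted(edits_per_year.items())))
    print("Edit summaries mentioning citations:", cite_mentions)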

One major reason we are holding this open call for ideas is to support voices which are frequently left out of the design, development and deployment of artificial intelligence systems. How does your project help forward this goal?

Wikipedia struggles with the demographics of its content and of its editor base. By taking an intentional approach to cultivating its network of volunteer software developers, it has improved the culture of its events and raised awareness of dynamics that tend to exclude those who are frequently underrepresented. In recent years, Wikipedia has emerged as a forum for strong advocacy and discussion of these issues. One dynamic that contributes to “clubbiness” is a lack of understanding of rules and cultural dynamics. By helping journalists, and thereby their readers, learn about these important areas, this project will address a condition that underlies imbalance in Wikipedia’s software development and editorial communities.

Posted in Uncategorized | Leave a comment

A proven innovation could benefit Facebook’s users—and its shareholders, too.

Concern about social media and the quality of news is running high, with many commentators focusing on bias and factual accuracy (often summarized as “fake news”). If efforts to regulate sites like Facebook are successful, they could affect the bottom line; so it would behoove Facebook to regulate itself, if possible, in any way that might stave off external action.

Facebook has tried many things, but it has ignored something obvious: an approach that peer-reviewed studies have identified as promising since at least 2004…the same year Facebook was founded.

Instead of making itself the sole moderator of problematic posts and content, Facebook should offer its billions of users a role in content moderation. This could substantially reduce the load on Facebook staff, and could allow its community to take care of itself more effectively, improving the user experience with far less need for editorial oversight. Slashdot, once a massively popular site, proved prior to Facebook’s launch that distributing comment moderation among the site’s users could be an effective strategy, with substantial benefits to both end users and site operators. Facebook would do well to allocate a tiny fraction of its fortune to designing a distributed comment moderation system of its own.

Distributed moderation in earlier days

“Nerds” in the late 1990s or early 2000s—when most of the Internet was still a one-way flow of information for most of its users—had a web site that didn’t merely keep them informed, but let them talk through the ideas, questions, observations, or jokes that the (usually abbreviated and linked) news items would prompt. Slashdot, “the first social news site that gained widespread attention,” presented itself as “News for Nerds. Stuff that Matters.” It’s still around, but in those early days, it was a behemoth. Overwhelming a web site with a popular link became known as “slashdotting.” There was a time when more than 5% of all traffic to sites like CNET, Wired, and Gizmodo originated from Slashdot posts.

Slashdot featured epic comment threads. It was easy to comment, and its readers were Internet savvy almost by definition. Slashdot posts would have hundreds, even thousands, of comments. According to the site’s Hall of Fame, there were at least 10 stories with more than 3,200 comments.

But amazingly—by today’s diminished standards, at least—a reader could get a feel for a thread of thousands of messages in just a few minutes of skimming. Don’t believe me? Try this thread about what kept people from ditching Windows in 2002. (The Slashdot community was famously disposed toward free and open source software, like GNU/Linux.) The full thread had 3,212 messages; but the link will show you only the 24 most highly-rated responses, and abbreviated versions of another 35. The rest are not censored; if you want to see them, they’re easy to access through the various “…hidden comments” links.

As a reader, your time was valued; a rough cut of the 59 “best” answers out of 3,212 is a huge time-saver, and makes it practical to get a feel for what others are saying about the story. You could adjust the filters to your liking, to see more or fewer comments by default. As the subject of a story, it was even better; supposing some nutcase seized on an unimportant detail, and spun up a bunch of inaccurate paranoia around it, there was a reasonable chance their commentary would be de-emphasized by moderators who could see through the fear, uncertainty, and doubt.
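
The mechanics behind that reader experience are simple to sketch: each comment carries an aggregate score, and the reader’s chosen thresholds decide whether it appears in full, abbreviated, or collapsed. Here is a conceptual Python sketch; the scores and thresholds are illustrative, not Slashdot’s actual values or code.

    # Conceptual sketch of reader-side threshold filtering: nothing is deleted;
    # the reader's thresholds only decide how much of each comment is shown.
    def display_mode(score, full_at=4, abbreviated_at=1):
        """Decide how a comment is rendered for a reader's chosen thresholds."""
        if score >= full_at:
            return "full"
        if score >= abbreviated_at:
            return "abbreviated"   # one-line teaser with a click-through
        return "collapsed"         # still reachable via a "hidden comments" link

    comments = [
        ("Insightful take on hardware driver support", 5),
        ("Funny but off-topic aside", 2),
        ("Me too!", 0),
    ]
    for text, score in comments:
        print(f"[{display_mode(score):>11}] ({score:+d}) {text}")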

At first blush, you might think “oh, I see; Facebook should moderate comments.” But they’re already doing that. In the Slashdot model, the site’s staff did not do the bulk of the moderating; the task was primarily handled by the site’s more active participants. To replicate Slashdot’s brand of success, Facebook would need to substantially modify the way their site handles posts and comments.

Going meta

Distributed moderation, of course, can invite all sorts of weird biases into the mix. To fend off the chaos and “counter unfair moderation,” Slashdot implemented what’s known as “metamoderation.” The software gave moderators the ability to assess one another’s moderation decisions. Moderators’ decisions needed to withstand the scrutiny of their peers. I’ll skip the details here, because the proof is in the pudding; browsing some of the archived threads should be enough to demonstrate that the highly-rated comments are vastly more useful than the average comment.
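
For the curious, though, the gist fits in a short sketch: sampled moderation decisions are rated fair or unfair by peer reviewers, and moderators whose decisions are mostly judged unfair lose their moderating privileges. The following Python is a conceptual illustration with made-up thresholds, not Slashdot’s implementation.

    # Conceptual sketch of meta-moderation: peers rate sampled moderation
    # decisions as fair or unfair; moderators judged mostly unfair lose
    # privileges. The thresholds are made up for illustration.
    from collections import defaultdict

    def metamoderate(reviews, min_fair_ratio=0.7, min_reviews=5):
        """reviews: iterable of (moderator, was_fair) peer judgments.
        Returns the set of moderators who keep their privileges."""
        tally = defaultdict(lambda: [0, 0])      # moderator -> [fair, total]
        for moderator, was_fair in reviews:
            tally[moderator][1] += 1
            if was_fair:
                tally[moderator][0] += 1
        keep = set()
        for moderator, (fair, total) in tally.items():
            if total < min_reviews or fair / total >= min_fair_ratio:
                keep.add(moderator)              # too few samples, or mostly fair
        return keep

    sample = [("alice", True), ("alice", True), ("alice", True), ("alice", False),
              ("bob", False), ("bob", False), ("bob", False), ("bob", True)]
    print(metamoderate(sample, min_reviews=3))   # {'alice'}; bob falls below the bar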

Some Internet projects did study Slashdot-style moderation

For some reason, it seems that none of the major Internet platforms of 2018—Facebook, Twitter, YouTube, etc.—have ever experimented with meta-moderation.

From my own experience, I can affirm that some projects intending to support useful online discussion did, in fact, consider meta-moderation. In its early stages, the question-and-answer web site quora.com took a look at it; so did a project of the Sloan Foundation in the early days of the commentary tool hypothes.is.

If Facebook ever did consider a distributed moderation system, it’s not readily apparent. Antonio García Martínez, a former Facebook product manager, recently tweeted that he hadn’t thought about it at length, and expressed initial skepticism that it could work.

There are a few reasons why Facebook might be initially reluctant to explore distributed moderation:

  • Empowering people outside the company is always unsettling, especially when there’s a potential to impact the brand’s reputation;
  • Like all big tech companies, Facebook tends to prefer employing technical, rather than social, interventions;
  • Distributed moderation would require Facebook to put data to use on behalf of its users, and Facebook generally seeks to tightly control how its data is exposed;
  • Slashdot’s approach would require substantial modification to fit Facebook’s huge variety of venues for discussion.

Those are all reasonable considerations. But with an increasing threat of external regulation, Facebook should consider anything that could mitigate the problems its critics identify.

Subject of academic study

If you’ve used a site with distributed moderation, and a meta-moderation layer to keep the mods accountable, you probably have an intuitive sense of how well it can work. But in case you haven’t, research studies going back to 2004 have underscored its benefits.

According to researchers Cliff Lampe and Paul Resnick, Slashdot demonstrated that a distributed moderation system could help to “quickly and consistently separate high and low quality comments in an online conversation.” They also found that “final scores for [Slashdot] comments [were] reasonably dispersed and the community generally [agreed] that moderations [were] fair.” (2004)

Lampe and Resnick did acknowledge shortcomings in the meta-moderation system implemented by Slashdot, and stated that “important challenges remain for designers of such systems.” (2004) Software design is what Facebook does; it’s not hard to imagine that the Internet giant, with annual revenue in excess of $40 billion, could find ways to address design issues.

The appearance of distributed moderation…but no substance

In the same year that Lampe and Resnick published “Slash(dot) and burn” (2004), Facebook launched. So even in Facebook’s earliest days, the benefits of distributed moderation with meta-moderation had already been established.

Facebook, in the form it’s evolved into, shares some of the superficial traits of Slashdot’s meta-moderation system. Where Slashdot offered moderators options like “insightful,” “funny,” and “redundant,” Facebook offers options like “like,” “love,” “funny,” and “angry.” The user clicking one of those options might feel as though they are playing the role of moderator; but beneath the surface, in Facebook’s case, there is no substance. At least, nothing to benefit the site’s users; the data generated is, of course, heavily used by Facebook to determine what ads are shown to whom.

In recent years, Facebook has offered a now-familiar bar of “emoticons,” permitting its users to express how a given post or comment makes them feel. Clicking the button puts data into the system; but it’s only Facebook, and its approved data consumers, who get anything significant back out.

When Slashdot asked moderators whether a comment was insightful, funny, or off-topic, that information was immediately put to work to benefit the site’s users. By default, readers would see only the highest-rated comments in full, would see a single “abbreviated” line for those with medium ratings, and would have to click through to see everything else. Those settings were easy to change, for users preferring more or less in the default view, or within a particular post. Take a look at the controls available on any Slashdot post to see for yourself.

Where Facebook’s approach falls short

Facebook’s approach to evaluating and monitoring comments falls short in several ways:

  1. It’s all-or-nothing. With Slashdot, if a post was deemed “off topic” by several moderators, it would get a low ranking, but it wouldn’t disappear altogether. A discerning reader, highly interested in the topic at hand and anything even remotely related, might actually want to see that comment; and with enough persistence, they would find it. But Facebook’s moderation—whether by Facebook staff or the owner of a page—permits only a “one size fits all” choice: to delete or not to delete.
  2. Facebook staff must drink from the firehose. When the users have no ability to moderate content themselves, the only “appeal” is to the page owner or to Facebook staff. Cases that might be easily resolved by de-emphasizing an annoying post either don’t get dealt with, or they get reported. Staff moderators have to process all the reports; but if users could handle the more straightforward cases, the load on Facebook staff would be reduced, permitting them to put their attention on the cases that really need it.
  3. Too much involvement could subject Facebook to tough regulation as a media company. There is spirited debate over whether companies like Facebook should be regarded as media companies or technology platforms. This is no mere word game; media companies are inherently subject to more invasive regulation. Every time Facebook staff face a tricky moderation decision, that decision could be deemed an “editorial” decision, moving the needle toward the dreaded “media company” designation.

Facebook must learn from the past

Facebook is facing substantial challenges. In the United States, Congress took another round of testimony last week from tech executives, and is evaluating regulatory options. Tim Wu, known for coining the term “net neutrality,” recently argued in favor of competitors to Facebook, perhaps sponsored by the Wikimedia Foundation; he now says the time has come for Facebook to be broken up by the government. In the same article, antitrust expert Hal Singer paints a stark picture of Facebook’s massive influence over innovative competitors: “Facebook sits down with someone and says, ‘We could steal the functionality and bring it into the mothership, or you could sell to us at this distressed price.’” Singer’s prescription involves changing Facebook’s structure, interface, network management, and dispute adjudication process. Meanwhile in Europe, the current push for a new Copyright Directive would alter the conditions in which Facebook operates.

None of these initiatives would be comfortable for Facebook. The company has recently undertaken a project to rank the trustworthiness of its users; but its criteria for making such complex evaluations are not shared publicly. Maybe this will help them in the short run, but in a sense they’re kicking the can down the road; this is yet another algorithm outside the realm of public scrutiny and informed trust.

If Facebook has an option that could reduce the concerns driving the talk of regulation, it should embrace it. According to Lampe and Resnick, “the judgments of other people … are often the best indicator of which messages are worth attending to.” Facebook should explore an option that lets them tap an underutilized resource: the human judgment in its massive network. The specific implementation I suggest was proven by Slashdot; the principle of empowering end users also drove Wikipedia’s success.

Allowing users to play a role in moderating content would help Facebook combat the spread of “fake news” on its site, and simultaneously demonstrate good faith by dedicating part of its substantial trove of data to the benefit of its users. As Cliff Lampe, the researcher quoted above, recently tweeted: “I’ve been amazed, watching social media these past 20 years, that lessons from Slashdot moderation were not more widely reviewed and adopted. Many social sites stole their feed, I wish more had stolen meta-moderation.”

All platforms that feature broad discussion stand to benefit from the lessons of Slashdot’s distributed moderation system. To implement such a system will be challenging and uncomfortable; but big tech companies engage with challenging software design questions routinely, and are surely up to the task. If Facebook and the other big social media companies don’t try distributed moderation, a new project just might; and if a new company finds a way to serve its users better, Facebook could become the next Friendster.


This article was also published on LinkedIn and Medium.

Posted in core, governance, history, journalism, leadership, User experience, wiki, Wikimedia Foundation, Wikipedia | Leave a comment

“Open” everything, and minimal financial needs: Wikipedia’s strengths

What insulates Wikipedia from the criticisms other massive platforms endure? We explored some answers—core values, lack of personalization algorithms, and lack of data collection—in last week’s “How Wikipedia Dodged Public Outcry Plaguing Social Media Platforms.”

But wait, there’s more:

Wikipedia moderation is conducted in the open.

“The biggest platforms use automated technology to block or remove huge quantities of material and employ thousands of human moderators.” So says Mark Bunting in his July 2018 report, “Keeping Consumers Safe Online: Legislating for platform accountability for online content.” Bunting makes an excellent point, but he might have added a caveat: “The biggest platforms, like Facebook, Twitter, and YouTube, but not Wikipedia.”

Wikipedia, one of the top web sites worldwide for about a decade, works on a different model. The volunteers writing Wikipedia’s content police themselves, and do so pretty effectively. Administrators and other functionaries are elected, and the basic structure of Wikipedia’s software helps hold them accountable: actions are logged, and are generally visible to anybody who cares to review them. Automated technology is used, but its code and its actions are transparent and subject to extensive community review. In extreme cases, Wikimedia Foundation staff must be called in, and certain cases (involving extreme harassment, outing, self-harm, etc.) require discretion. But the paid moderators certainly don’t number in the thousands; the foundation employs only a few hundred staff overall.
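
That openness is easy to verify: the same public logs that volunteers review are exposed through Wikipedia’s API, so anyone can inspect recent moderation actions for themselves. A minimal Python example follows; the log type and limit are arbitrary choices.

    # Fetch a few recent entries from Wikipedia's public moderation logs.
    # "delete" is just one example; "block", "protect", etc. work the same way.
    import requests

    API = "https://en.wikipedia.org/w/api.php"
    params = {
        "action": "query",
        "list": "logevents",
        "letype": "delete",
        "lelimit": 10,
        "format": "json",
    }
    for event in requests.get(API, params=params).json()["query"]["logevents"]:
        print(event["timestamp"],
              event.get("user", "(hidden)"),
              event.get("action", ""),
              event.get("title", "(hidden)"))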

More recently, a Motherboard article explored Facebook’s approach in greater depth: “The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People.” It’s a long, in-depth article, well worth the read.

One point in that article initially stood out to me: presently, “Facebook is still making tens of thousands of moderation errors per day, based on its own targets.” That’s a whole lot of wrong decisions, on potentially significant disputes! But if we look at that number and think, “that number’s too high,” we’re already limiting our analysis to the way Facebook has presented the problem. Tech companies thrive on challenges that can be easily measured; it’s probably a safe bet that Facebook will achieve something it can call success…that is, a result that serves the bottom line. Once Facebook has “solved” that problem, bringing the errors down to, say, the hundreds, Facebook execs will pat themselves on the back and move on to the next task.

The individuals whose lives are harmed by the remaining mistakes will be a rounding error, of little concern to the behemoth’s leadership team. On a deeper level, the “filter bubble” problem will remain; Facebook’s user base will be that much more insulated from information we don’t want to see. Our ability to perceive any kind of objective global reality—much less to act on it—will be further eroded.

As artificial intelligence researcher Jeremy Lieberman recently tweeted, we should be wary of a future in which “…news becomes nearly irrelevant for most of us” and “our own private lives, those of our friends; our custom timelines become the only thing that really matters.” In that world, how do we plan effectively for the future? When we respond to natural disasters, will only those with sufficiently high Facebook friend counts get rescued? Is that the future we want?

It’s not just moderation—almost all of Wikipedia is open.

“If you create technology that changes the world, the world is going to want to govern [and] regulate you. You have to come to terms with that.” —Brad Smith, Microsoft, May 2018. As quoted in Bunting (2018).

From the start, Wikipedia’s creators identified their stakeholders, literally, as “every single human being.” This stands in stark contrast to companies that primarily aim to thrive in business. Wikipedia, on the whole, is run by a set of processes that is open to review and open to values-based influence.

This point might elicit irate howls of frustration from those whose ideas or efforts have been met with a less-than-respectful response. Catch me on another day, and the loudest howls might be my own. But let’s look at the big picture, and compare Wikipedia to massive, corporate-controlled platforms like YouTube, Facebook, or Google.

  • Wikipedia’s editorial decisions are made through open deliberation by volunteers, and are not subject to staff oversight.
  • Most actions leading up to decisions, as well as decisive actions themselves, are logged and available to public review and comment.
  • It’s not just the content and moderation: the free software that runs Wikipedia, and the policies that guide behavior on the site, have been built through broad, open collaboration as well.
  • The Wikimedia Foundation has twice run extensive efforts to engage volunteers in strategic planning, and in many instances has effectively involved volunteers in more granular decision-making as well.

There is room for improvement in all these areas, and in some cases improvement is needed very badly. But inviting everyone to fix the problems is part of what makes Wikipedia thrive. Treating openness as a core value invites criticism and good faith participation, and establishes a basic framework for accountability.

“While principles and rules will help in an open platform, it is values that [operators of platforms] should really be talking about.” — Kara Swisher in the New York Times, August 2018.

Wikipedia lacks relentless public relations & financial shareholders.

There’s another frequently overlooked aspect of Wikipedia: financially speaking, the site is an ant among elephants.

The annual budget of the Wikimedia Foundation, which operates Wikipedia, is about $120 million. That may sound like a lot, but consider this: Just the marketing budget of Alphabet (Google’s parent company) is more than $13 billion.

In terms of the value Wikipedia offers its users, and the respect it shows for their rights, Wikipedia arguably outstrips its neighbors among the world’s top web sites. But it does so on a minuscule budget.

Wikipedia doesn’t have armies of public relations professionals or lobbyists making its case. So part of the reason you don’t hear more about Wikipedia’s strategy and philosophy is that there are fewer professionals pushing that conversation forward. The site just does its thing, and its “thing” is really complex. Because it works fairly well, journalists and policymakers have little incentive to delve into the details themselves.

Wikipedia also doesn’t have armies of stockholders exerting pressure, forcing the kind of tension between profit and ethics that often drives public debate.

Wikipedia is driven by philosophical principles that most would agree with; so the issues that arise are in the realm of implementation. There is little pressure to compromise on basic principles. Tensions between competing values, like business interests vs. ethical behavior, drive the debate over corporate-controlled platforms; but those tensions basically don’t exist for Wikipedia.

In 1973, video artist and provocateur Richard Serra produced the short film “Television Delivers People.” It suggested that those consuming “free” television were not the customers, but the product…being sold to advertisers. In the Internet era, the notion has been frequently applied to media companies. Reasonable people might debate how well this line of thinking applies to various media and social media companies. But with Wikipedia, unique among major Internet platforms, this particular criticism clearly does not apply.

Concluding thoughts

The reasons you don’t hear much about Wikipedia’s governance model are that it is rooted in clearly articulated principles, works fairly well, is reasonably open to benevolent influence, and lacks a public relations campaign.

Those are all good things—good for Wikipedia and its readers. But what about the rest of the Internet? The rest of the media world, the rest of society? If the notion of objective truth is important to you, and if you’re concerned about our access to basic facts and dispassionate analysis in today’s rapidly shifting media landscape, you might want to challenge yourself to learn a bit more about how Wikipedia has been made and how it governs itself…even if you have to dig around a bit to do so.


This article was also published on LinkedIn and Medium.

Posted in core, governance, history, journalism, leadership, Statements of Ethics, wiki, Wikimedia Foundation, Wikipedia | Leave a comment

How Wikipedia dodged public outcry plaguing social media platforms

Everybody has an opinion about how to govern social media platforms. It’s mostly because they’ve shown they’re not too good at governing themselves. We see headlines about which famous trolls are banned from what sites. Tech company executives are getting called before Congress, and the topic of how to regulate social media is getting play all over the news.

Wikipedia has problematic users and its share of controversies, but as web platforms have taken center stage in recent months, Wikipedia hasn’t been drawn into the fray. Why aren’t we hearing more about the site’s governance model, or its approach to harassment and bullying? Why isn’t there a clamor for Wikipedia to ease up on data collection? At the core, Wikipedia’s design and governance are rooted in carefully articulated values and policies, which underlie all decisions. Two specific aspects of Wikipedia inoculate it against some of the sharpest critiques endured by other platforms.

Wikipedia exists to battle fake news. That’s the whole point.

Wikipedia’s fundamental purpose is to present facts, verified by respected sources. That’s different from social media platforms, which have a more complex project…they need to maximize engagement, and get people to give up personal information and spend money with advertisers. Wikipedia’s core purpose involves battling things like propaganda and “fake news.” Other platforms are finding they need to retrofit their products to address misinformation; but battling fake news has been a central principle of Wikipedia since the early days.

Posted in core, governance, journalism, Statements of Ethics, Uncategorized, wiki, Wikipedia | Leave a comment

Wikipedia’s ban of Daily Mail exposes news publisher flaws

Who doesn’t love a good media feud?

As reported by the Guardian on February 8, the English-language Wikipedia has (mostly) banned the Daily Mail as an acceptable source for citation, after declaring it “unreliable”. The report touched a nerve; the Mail swiftly issued a shrill and rambling retort, and the story was quickly picked up in more than a dozen other publications.

But this feud is more than a mere “pass the popcorn” moment. It’s also a learning opportunity, highlighting important dynamics in how media outlets function. The widespread interest in how Wikipedia evaluates its sources is welcome and overdue. In this post, I’ll consider:

  1. What’s the context of Wikipedia’s decision? What exactly was the decision, how was it made, and how binding is it?
  2. Was it the right decision?
  3. What insights does the media response to Wikipedia’s decision offer?

1. What Wikipedia decided, and how

The decision about the Daily Mail may be the first such decision to be widely reported, but Wikipedia editors routinely make decisions about the suitability of sources. We have to! Hundreds of thousands of volunteer editors write and maintain Wikipedia. Evaluating the relative merits of competing sources has been a central approach to resolving the inevitable disagreements that arise, throughout the site’s history. In fact, Wikipedia has a highly active discussion board devoted to the topic. A 2008 discussion about Huffington Post (with cogent arguments for both inclusion and exclusion, though it did not result in a blanket decision one way or another) is just one of hundreds of discussions where Wikipedia editors weigh sources against the site’s criteria.

Journalist Noam Cohen, Wikimedia director Katherine Maher, and journalism professor Tim Wu discussed Wikipedia’s reliability in January 2017. Photo by King of Hearts, licensed CC BY-SA.

With topics like “fake news” and “alternative facts” dominating recent headlines, much has been said about Wikipedia’s diligence in evaluating sources. The central role of human evaluation sets Wikipedia apart from other top web sites like Facebook and Twitter; and the relative transparency of Wikipedia’s process sets it apart from other top publishers. These distinctions have been highlighted in many venues, by prominent Wikipedians including former Wikimedia Foundation (WMF) trustee James Heilman, WMF director Katherine Maher, and Wikipedia cofounder Jimmy Wales.

The discussion, and the formal finding at its conclusion, are available for public review; click above for the details.

In the context of Wikipedia’s usual process, the Daily Mail decision was pretty unremarkable. A few dozen Wikipedians deliberated, and then five site administrators made the decision based on their understanding of the discussion, and its ties to Wikipedia policy and precedent.

Consensus has determined that the Daily Mail (including its online version, dailymail.co.uk) is generally unreliable, and its use as a reference is to be generally prohibited, especially when other more reliable sources exist. …

One angle seems under-reported: the decision was not based on a mere “majority rule” vote of partisan Wikipedians; it was (as always) an effort to determine the best path forward in light of what is publicly known.

Numerous independent evaluations of the Daily Mail’s diligence were considered by Wikipedia editors. That’s at the heart of how we work, in this and similar cases; we consider what reputable publications have had to say.

2. Wikipedia’s decision

Wikipedia editors made their ruling. Public domain image from Wikimedia Commons

Criticism of the Wikipedia decision boils down to two things:

  • The legitimacy of the process: Were there enough Wikipedia editors involved in the decision? Was the discussion open for long enough before a decision was made?
  • The reasoning of the decision: Was the deliberation thorough and sound? Was it infected with partisan bias, or did it ignore important facts?

To properly address the process questions would require a detailed breakdown of how Wikipedia decisions are made. I’ve taken on such questions elsewhere. Without getting into the details, consider: would you want an analysis of a U.S. Senate decision from somebody who knows little of the Senate’s parliamentary rules, or of the U.S. court system from somebody who’s never read a law journal? Views on Wikipedia’s process from arbitrary media commentators should be viewed with a skeptical eye.

As a longtime Wikipedia administrator, I’ll say this: The number of people involved and the length of time were entirely sufficient to satisfy Wikipedia’s requirements. Like nearly every Wikipedia decision, this one can be overturned in the future, if there’s reason to do so. The questions about Wikipedia’s process are without merit.

So, how diligent were those making the decision? Well, I’m not necessarily better qualified to answer that than any other Wikipedian, so I’m not here to give an overall endorsement or rebuttal; instead, I’d mainly encourage you to read the discussion and decision, and decide for yourself. But, here are a few observations:

It seems the Mail had been widely criticized long before Wikipedia took up the question.

I expect you’ll agree that those in the discussion considered a wide variety of evidence — much of it from independent media commentators. Perhaps, if you’re a careful media observer, you know of something they missed. Wikipedians tend to be receptive to new information; so the best thing to do, if you do have further information, is to present it for consideration. You could, for instance, start a new discussion. But before you do so, consider carefully: is your evidence truly likely to sway the decision? You’ll be asking a lot of volunteers for their attention; please exercise that option with appropriate caution.

You might also ask yourself whether there is evidence of partisan bias in what you read. I didn’t see any, but perhaps you disagree. Again, if it’s there, it’s worth pointing out. As a rule, Wikipedians don’t like the idea that politics might influence the site’s content any better than you do. If such a bias can be demonstrated (which requires a lot more than a mere accusation), perhaps something can be done about it.

One point, raised in several venues since the decision, does stand out. To quote the Guardian’s Charles Arthur (archive link):

There’s … a distinction to be made between the [Mail’s] website, which churns stories at a colossal rate and doesn’t always check facts first, and the newspaper, which (in my experience, going up against its writers) does. The problem is that you can’t see which derives from which when all you do is go for the online one.

Andrew Orlowski of the Register noted the brand confusion among the Mail’s various properties, as well. A 2012 New Yorker profile delves deeper, offering useful background on the various brands within the Daily Mail brand.

3. The Guardian’s solid analysis, rooted in non sequitur

The Daily Mail didn’t like the statement. Public domain image from Wikimedia Commons

The Daily Mail issued a rather amusing statement on the matter; there’s no substance to it, but curious readers may enjoy my line-by-line rebuttal.

But some of the coverage the episode sparked contained genuine insights.

The Guardian offered a solid followup piece, delving into many of the issues involved. But while the reporting was accurate and helpful, the Guardian inadvertently further illustrated the great gulf between traditional media and Wikipedia. Without explanation, the reporter declined to build the story around an interview with one of the decision-makers involved in the case.

Five people ultimately made Wikipedia’s decision about the Daily Mail; they would have made worthy sources for the Guardian story. If they weren’t available, there are more than 1,200 of us with the authority to make such a decision, who can therefore provide expert commentary on the matter. But instead, the Guardian centered its story on Wikimedia executive director Katherine Maher. Maher is of course aware of the various issues, and represented Wikipedia admirably. The choice to feature her in an interview, though, was roughly equivalent to seeking out the U.N. secretary general for comment on a domestic decision of the U.S. Supreme Court.

The Guardian story acknowledged the point in its second paragraph, but inexplicably chose to focus on Maher’s views anyway. What’s going on here?

To earn a comment from the top executive at a ~$100 million organization, it helps to have status, and it helps to have connections. The Guardian, of course, has both. It’s one of the world’s top news outlets, and Wikipedia co-founder Jimmy Wales (one of Maher’s bosses on the organization’s board of trustees) sits on the Guardian’s board.

To get a comment from one of 1,200+ volunteer administrators of the world’s most widely read source of original comment, though, takes good old fashioned legwork. You send a message, you pick up the phone, and if you don’t get a decent response, you go on to the next person. It’s not sexy, and it’s not always fun, but if you stick to it, sooner or later you have a real basis for solid reporting.

Wikipedia, of course, is famous for its hordes of detail-oriented volunteers. If the media needed a clear demonstration that Wikipedia might just be better equipped for certain tasks than traditional publications like the Daily Mail or the Guardian, it need look no further.

Posted in governance, journalism, wiki, Wikipedia | 3 Comments

Wikipedia, controversy, and an acclaimed documentary

The Hunting Ground, a 2015 documentary about sexual assault on college campuses, exposed conflicts of interest, malfeasance, and cover-ups.

Drawing by Nicholas Boudreau, licensed CC BY 4.0.

To learn about a complex topic—especially if powerful institutions have a major stake in it—we rely on experts. People who devote substantial effort toward understanding all facets of a topic can offer the public a great deal of value. We routinely refer to their perspectives and analysis when forming opinions on important social and political issues.

But of course, our reliance on experts makes their interests and motivations highly significant. To what extent do an expert’s motivations inappropriately drive their opinions and judgments? Do those opinions and judgments color how they present the facts? As critical readers, we should always pay attention to conflicts of interest (COIs). And if they’re insufficiently disclosed, we’re at a significant disadvantage. If you learn that a product review you relied on was secretly written by the company that made it, you might feel some indignation—and rightly so.

Publishers that care about accurate information face the same issue, but have a greater degree of responsibility; and if a publisher inadvertently passes biased information on to its readers, its reputation may suffer. So publishers establish standards and processes to eliminate COIs, or—since it’s often impossible to gather information that is 100% free of COI—to manage them responsibly. Wikipedia is no exception.

But as a publication that invites participation from any anonymous person in the world, Wikipedia has unique challenges around COI. Despite Wikipedia’s efforts to require disclosure, COIs often go undeclared and unnoticed, which leaves everybody (understandably) a little skittish about the whole topic. Blogging pioneer Dave Winer’s words in 2005 illustrate this point: “Every fact in [Wikipedia] must be considered partisan, written by someone with a conflict of interest,” he said. But significantly, every change to the site is rigorously preserved and open to public review. Wikipedia editors routinely investigate and deliberate additions and edits to the site. Every change can be reverted. Every user can be chastised or blocked for bad behavior. The process can be messy, but since 2005, researchers have repeatedly found that Wikipedia’s process generates good content. A properly disclosed and diligently managed COI on Wikipedia is rarely a big deal; it’s part of what makes Wikipedia work. Disclosure is a key component that supports informed deliberation. Disclosing a COI doesn’t give a Wikipedia user carte blanche to do as they see fit; but it does express respect for Wikipedia’s values and for fellow editors, and it gives Wikipedia editors more information to use in resolving disagreements.

One of the principal methods Wikipedia employs to minimize the impact of COI is an insistence on high quality sourcing. But on occasion, Wikipedia editors are overly swayed by sources that match up poorly against the site’s standards.

See our previous blog post, Conflict of interest and expertise, for a deeper look at the subject.

A Wikipedia case study

The Hunting Ground  (2015), a documentary film which investigated the issue of sexual assault on U.S. college campuses, received widespread acclaim, but it also ignited controversy. The production company, Chain Camera Pictures, retained Wiki Strategies beginning early that year to assist with developing and improving the Wikipedia articles related to the film’s focus, as well as the article about the film itself. (See “Disclaimer” below.)

Conflict of interest is a central focus of The Hunting Ground. Universities are required to investigate any report of a sexual assault involving their students; but they also have a strong financial and reputational interest in avoiding scandal. By vigorously investigating sexual assault cases, universities might associate their campuses with violent crime, which could impact recruitment and alumni donations.

In one of the incidents explored in The Hunting Ground, Florida State University (FSU) football star Jameis Winston was accused of rape. A state attorney, when announcing months later that he had insufficient evidence to prosecute, noted substantial problems in the initial rape investigation carried out by both FSU officials and Tallahassee police. Independent investigative pieces from Fox Sports and the New York Times both suggested that COI might have been a factor.

While the influence of a COI in any specific case is difficult to prove, it’s clear that the financial interests of entities like FSU―whose athletics programs bring in more than $100 million a year―sharply conflict with the interests of the women portrayed in The Hunting Ground. FSU is one of the many institutions that had reason to feel threatened by the film, alongside numerous universities, law enforcement agencies, and athletic programs.

The Hunting Ground earned substantial accolades and validation. It received two Emmy nominations, including Exceptional Merit in Documentary Filmmaking, and was one of 15 documentaries shortlisted for the Academy Award for Best Documentary Feature. CNN vetted and broadcast the film, along with a series of panel discussions, putting its own journalism reputation on the line. And student groups, faculty, and university administrators screened the film on hundreds of  college campuses.

But unsurprisingly, given the threat it posed to powerful institutions, the film drew pushback as well as praise.

Among the film’s more persistent critics has been the Washington Examiner’s Ashe Schow, who has written more than 20 columns about or mentioning the film since March 2015. In November 2015, during the runup to the Academy Awards, Schow announced Chain Camera’s Wikipedia efforts, under a headline proclaiming that they had been “caught” editing Wikipedia. But of course, you can’t be “caught” doing something you were open about from the start; and Chain Camera had been diligent about disclosure. Wikipedia editors working on the various articles had known of the efforts of Chain Camera’s employee Edward Alva for many months. As Chain Camera stated in their rebuttal, Schow’s charges were inaccurate and ill-informed.

The complaints about Alva’s editing, made initially by Schow and amplified by Wales, were considered in detail. The graphic highlights the formal decision by administrator Drmies. Click the image to see the full discussion.

Wikipedia co-founder Jimmy Wales took Schow’s words at face value, praising her piece for “embarrassing” Alva. As Wikipedia editors took up the issue in a public discussion, several included Wales’ statement as part of the evidence of Alva’s wrongdoing. Much of the early discussion was characterized by a lack of diligence in considering Alva’s efforts. One comment stands out: “I don’t have enough time to do a thorough investigation,” said a Wikipedia editor. “But as I now see it, this situation could be dealt with very quickly and justly with a permanent ban of [Alva].” A lack of thorough information was apparently not enough to stop this editor from recommending strong sanctions.

But as many Wikipedians recognize, diligent investigation is important. In the following week, several Wikipedians did indeed take a close look at the edit history. They ultimately rejected the accusations leveled by Schow and Wales. Drmies, the administrator who made the formal determination to close the discussion, stated that “the [Chain Camera] editor declared their COI early enough,” and that “the editor’s defenders present very strong evidence that [Wikipedia’s] system worked.”

Drmies’ closing statement carries weight in the Wikipedia world, and it is archived publicly. But from an outside perspective, it might as well be invisible. The media world is used to covering traditional decision-making processes, like court decisions and the acts of public officials, but it’s rare that a media outlet will understand Wikipedia well enough to track a contentious discussion effectively. Schow is ahead of the curve: she knows enough about Wikipedia to find some tantalizing tidbits, to generate copy, to generate clicks, and to influence those readers who lack deep familiarity with Wikipedia.

Specific problems with Schow’s account

But this story, despite its ramifications for an Academy Award shortlisted film and the National Football League’s #1 draft pick, was never picked up in any depth by a journalist who understands Wikipedia’s inner workings. Commentator Mary Wald did briefly note that Alva had observed Wikipedia standards, in a Huffington Post piece that highlighted the strength and stature of the interests taken on by the film. But this brief mention in a single story did not turn the tide. Anyone who follows the media coverage would likely be left with the incorrect impression that Chain Camera had done something wrong—to this day.

If a journalist had covered the story in depth, paying close attention to Wikipedia’s policies, norms, and best practices, they would have noted several flaws in Schow’s analysis. For instance:

  1. Schow began with a common—and erroneous—premise: she assumed Wikipedia’s COI guideline, which recommends against editing an article while in a COI, is a policy. Wikipedia makes a distinction between the two, and explicitly notes that guidelines “are best treated with common sense, and occasional exceptions may apply.” Guidelines are not to be treated as rigid requirements. The COI guideline, in particular, has been scrutinized and deliberated extensively over the last decade. Wikipedia’s need for experts and its philosophical commitment to open editing have both prevented it from ever adopting a formal policy prohibiting editing while under a COI. Wikipedia’s relevant policy does not prohibit someone like Alva from making edits, but it does require disclosure in one of three places. Alva made that disclosure from the start, and in fact exceeded the policy’s requirement by disclosing in multiple places.
  2. In her second column on the topic, Schow attaches significance to Jimmy Wales’ important-sounding words about changing Wikipedia policy in light of Schow’s report. This, again, is an understandable mistake; with most organizations, it’s safe to assume that a founder and board member’s ambitions have a close connection with reality. But with this particular board member and this particular issue, that assumption couldn’t be much further from the truth. Jimmy Wales has a long history of strongly advocating the “bright line rule,” which—had Wales’ efforts to have it codified as policy not been rejected—would have forbidden certain COI edits. Wales has even unequivocally stated that it doesn’t matter if the public thinks it’s policy; in his view, such details are unimportant. To put it simply, Wales is an entirely unreliable source on the topic of conflict of interest on Wikipedia. And despite Wales’ “renewed interest,” as Schow called it, his commentary on the topic ended as soon as it became clear the facts did not support his initial reaction to Schow’s column.
  3. Schow doubled down on some of her strongest words about Alva’s approach, in her third column (November 30): she claimed that Alva had failed to sufficiently disclose his editing of topics related to The Hunting Ground until September 2015. But he had in fact exceeded Wikipedia’s disclosure requirements, as mentioned above. As she did acknowledge, Alva disclosed his connection to The Hunting Ground as early as March 2015, prior to any edits to related Wikipedia articles. He made further, more specific disclosures on April 23, July 27, August 10, and again on August 10, all before the September edit noted by Schow. A columnist, of course, might not be expected to fully grasp the intricacies of Wikipedia editing; but to vet such strong opinions before doubling down, she might have interviewed an uninvolved Wikipedia editor or two.

Schow’s errors may well have resulted from a good faith effort; but that doesn’t make them any less important. Her influence on the public perception of the connection between Chain Camera and Wikipedia has been substantial (see coverage at the Independent Journal Review and the Hill). So it’s significant that she got major parts of the story wrong.

Let the Wikipedia process work – don’t try to shut it down

In covering any story that challenges powerful institutions, Wikipedia editors have to sort through strong messages from various parties. Ultimately, Wikipedia relies on the sources it cites as references. High-quality source materials, not the interests or organizational affiliations of Wikipedia editors, should be the main factor in crafting its content. Wikipedians should not ignore those affiliations, and should always be mindful of the COI of various parties―not only of the editors, but of the people and institutions who generate and influence the stories they cite.

Any COI can be either disclosed or obscured, and even a fully disclosed COI can be managed well or poorly. Of course, it’s impossible to know whether other, anonymous editors have undisclosed COIs; but it would be foolish to conclude with any certainty that those who disclose are the only Wikipedians with a COI, when more than a decade of experience tells us that secretive paid editing – despite being a policy violation – is commonplace. Wikipedians should applaud Alva and Chain Camera Pictures for disclosing from the start. Even if they disagree with his specific suggestions or edits, those contributions result from a good faith effort to improve the encyclopedia. When Wikipedians disagree with a good faith editor, they should talk it through—not discuss whether to block them from editing.

Wikipedia needs more, not fewer, expert contributors

When experts engage openly with Wikipedia, seeking to improve the encyclopedia, we should celebrate and support that effort. Chain Camera Pictures brought something to the table that few Wiki Strategies clients do: they sought to improve Wikipedia’s coverage of a broad topic they knew well through their work. Does this mean that they alone should determine the content of relevant Wikipedia articles? Of course not—Wikipedia’s model demands that any Wikipedian present convincing arguments, with reference to independent reliable sources. That is exactly what Alva did.

The approach Alva took, overall, is the right one. It should be readily apparent to any Wikipedian who looks at the edit history that Alva’s overall intent was to be transparent about his affiliation. Alva made several disclosures, and engaged other Wikipedians in discussion on points of contention multiple times. He added independent, reliable sources, sorted out disambiguation pages, reverted vandalism, expanded content, and removed poorly sourced, inaccurate information.

For a topic as important as sexual assault allegations on university campuses, Wikipedia benefits when experts engage with its content. Every day, non-expert writers do their best to place snippets of information into a narrative, to build Wikipedia articles; but it often takes some expertise to evaluate and refine that narrative. Wikipedians recognize this need; there is even a banner placed on articles deemed to lack an expert’s perspective.

This banner is placed on a Wikipedia article when somebody thinks an expert opinion could help.

The creators of The Hunting Ground are not, of course, the only experts on this topic. Investigative reporters like Walt Bogdanich of the New York Times and Kevin Vaughan of Fox Sports reported extensively on the subject. They reviewed thousands of pages of documents and interviewed a wide range of parties. In so doing, they surely developed significant expertise. If Wikipedia seeks to excel at summarizing all human knowledge, it should engage people like investigative filmmakers and journalists, who often have the strongest understanding of a given topic. As readers and as Wikipedia editors, we rely on these people to report on difficult stories that institutions often try to keep secret. We should applaud and welcome experts of all stripes when they bring their skills and knowledge to Wikipedia, as long as they are upfront about relevant affiliations. If it keeps the focus on including experts and sorting through disagreements, Wikipedia will be a more robust and comprehensive platform. Its editors, and more importantly its readers, will benefit.

Disclaimer

Chain Camera Pictures is a Wiki Strategies client. Our statement of ethics addresses cases like this; specifically, see item #4 under “Broad commitments & principles,” and the second paragraph of “Article composition and publishing.” It is unusual for us to blog about a client’s project, as we do here; in this case, the client made the decision (in consultation with us) to disclose our work together. In this blog post, we focus on the process Chain Camera Pictures followed in editing Wikipedia; in light of the issues addressed in our statement of ethics, we do not comment on the specific content of the Wikipedia articles in question.

Posted in conflict of interest, governance, journalism, leadership, paid editing, Terms of Use, Uncategorized, wiki, Wikipedia | 5 Comments

No, Congresswoman: WikiLeaks has nothing to do with Wikipedia

Rep. Sheila Jackson Lee of Houston, Texas. Photo public domain, courtesy of U.S. Congress.

Congresswoman Sheila Jackson Lee of Houston is the latest prominent figure to confuse Wikipedia with WikiLeaks. This confusion goes back many years; it often flares up when WikiLeaks releases capture the public’s attention. In 2010, for instance, when WikiLeaks released a string of controversial documents and video, journalists including Charlie Rose turned to Wikipedia cofounder Jimmy Wales for commentary — only to learn, sometimes on live camera, that Wales and Wikimedia have nothing to do with WikiLeaks.

But the mistake is an important and troubling one, especially when made by a public official. Wikipedia and WikiLeaks do not merely lack institutional ties; they also reflect profoundly divergent philosophies about the public’s role in information stewardship.

Wikipedia invites everybody in the world to participate in nearly every decision.

WikiLeaks, while it might solicit key information from anybody who has it, is completely opaque and centrally driven in its decisions; founder Julian Assange may be the sole decision-maker (or perhaps there is a small inner circle he consults).

Any member of Congress should care about the public’s role in information management and dissemination. And anybody who cares about that topic should know, as a basic point of literacy in 2016, that Wikipedia and WikiLeaks are at opposite ends of the spectrum.

Read more on the differences here: WikiLeaks is not part of Wikipedia

Posted in governance, government, history, journalism, wiki, Wikimedia Foundation, Wikipedia | Leave a comment

Cupcake with feeling: Getting quotes and taking names

KATU news team interviews Angelica of Fat Cupcake. Photo CC BY, Pete Forsyth.

The news ain’t what it used to be…a recent story that ran in multiple Portland, Oregon news outlets took a single, anonymous Yelp comment as evidence of a “controversy.” Fortunately, I got to be at the bakery the story covered when a TV news team came in to…actually interview sources.

Angelica, owner of Fat Cupcake, poses with me and her “controversial” cupcake. Photo by KATU news anchor, at Pete’s request, CC BY.

This story reminds me of how Wikipedia is often covered: two random people disagreeing over something in a Wikipedia article might be presented as an “edit war” or a “something-gate.” As the way we communicate evolves, it’s taking traditional media a while to catch up.

See my full writeup on Medium.

Posted in events, journalism, Oregon | Leave a comment

ACTRIAL: Wikipedia volunteer leadership should be recognized

The Wikimedia Foundation (WMF) is inviting commentary on how to recognize and encourage informal leadership in the volunteer community. (The consultation runs from September 20 through October 16, 2016.) This is a welcome initiative; the Wikimedia movement has not done well, over the years, at capturing stories of volunteers who successfully focus attention on important areas, and who do good work building consensus and forging and executing plans.

There are exceptions. The announcement linked above hails achievements by Liam Wyatt and Vassia Atanassova; and the WMF has consistently highlighted successful volunteer-driven projects on its blog and elsewhere. Still, many who have taken on big challenges and risks to advance our shared values and vision go unheralded.

Below, I consider an important 2011 initiative and conflict, known informally as ACTRIAL (short for “Article Creation Trial”). I learned of it many months after it took place, and had to wade through jargon-filled discussions in multiple venues before I began to understand what had happened. It’s an important story, though. In recent years, WMF staff have often asserted that Wikipedia’s volunteers are “change averse,” and incapable of generating or agreeing on new ideas. But that characterization is neither fair nor accurate, and is typically asserted out of mere political convenience.

ACTRIAL, however, provides a clear and valuable counterexample, in which Wikipedians self-organized to advocate for change, and WMF staff blocked the effort. This post highlights an important piece of Wikimedia history, and offers a little recognition to unsung heroes.


The Blade of the Northern Lights, a Wikipedia volunteer, wanted to address a persistent problem that irks many regular Wikipedians: brand new Wikipedia volunteers who write articles that are far from meeting Wikipedia’s content standards.

The Blade proposed that the creation of new articles be restricted to users with a bit of experience (“autoconfirmed” accounts: at least four days old, with at least 10 edits), and then guided more than 500 English Wikipedia volunteers in considering and ultimately approving the proposal.
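For readers who want to poke at that threshold themselves, here is a minimal sketch, not part of the ACTRIAL proposal itself, that uses the public MediaWiki API to estimate whether an account meets the standard English Wikipedia autoconfirmed threshold. The real flag is assigned by the MediaWiki software, and individual wikis can configure different thresholds, so treat this as an approximation; the username at the bottom is a placeholder.

```python
# Sketch: estimate whether an account meets English Wikipedia's standard
# "autoconfirmed" threshold (account at least 4 days old, with 10+ edits).
# Approximation only: the real flag is assigned by MediaWiki itself, and
# some wikis configure different thresholds.
from datetime import datetime, timezone

import requests

API = "https://en.wikipedia.org/w/api.php"


def meets_autoconfirmed_threshold(username, min_days=4, min_edits=10):
    params = {
        "action": "query",
        "list": "users",
        "ususers": username,
        "usprop": "editcount|registration",
        "format": "json",
    }
    user = requests.get(API, params=params, timeout=30).json()["query"]["users"][0]
    if "missing" in user or "invalid" in user:
        return False
    registration = user.get("registration")
    if registration is None:
        # Very old accounts predate registration logging; treat them as old enough.
        age_days = min_days
    else:
        registered = datetime.strptime(registration, "%Y-%m-%dT%H:%M:%SZ")
        registered = registered.replace(tzinfo=timezone.utc)
        age_days = (datetime.now(timezone.utc) - registered).days
    return age_days >= min_days and user.get("editcount", 0) >= min_edits


if __name__ == "__main__":
    # "ExampleUser" is a hypothetical account name used only for illustration.
    print(meets_autoconfirmed_threshold("ExampleUser"))
```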

ACTRIAL was designed as a six-month experiment rather than a definitive policy change. That point is an important one; the WMF often encourages volunteers and affiliate organizations to test hypotheses before making long-term commitments. Volunteer Rich Farmbrough, in spite of his skepticism about the proposed change, praised ACTRIAL’s design as an experiment in 2014, and explained the significance:

I am against preventing article creation by IPs let alone non-autoconfirmed users. But this trial might well have provided compelling evidence one way or the other.

Once the English Wikipedia community had agreed to move forward with ACTRIAL, Scottywong, another Wikipedia volunteer, formally requested a necessary technical change. In a haphazard discussion driven by WMF staffers, the request was denied. Apparently ignoring the extensive deliberation that had involved hundreds of volunteers, one WMF employee stated: “this entire idea doesn’t appear to have been thought through.” Several seemed to agree that the proposal was at odds with the strategic goal of improving editor retention, though no clear argument supporting that position was advanced.

To date, the WMF has not explained this extraordinary rejection of a good-faith, volunteer-driven initiative. The closest approximation to an explanation was a mailing list discussion in 2014. In that discussion, then-WMF staffer Philippe Beaudette asserted that the WMF had ultimately solved the underlying problem in another way:

What I remember was that a pretty good number (~500) of [English Wikipedia] community members came together and agreed on a problem, and one plan for how to fix it and asked the WMF to implement it. The WMF evaluated it, and saw a threat to a basic project value. WMF then asked “what’s the problem you’re actually trying to solve?”, and proposed and built a set of tools to directly address that problem without compromising the core value of openness. And it seems to have worked out pretty well because I haven’t heard a ton of complaints about that problem since.

However, Beaudette’s statement had several problems (edited: see note below):

  • If there was indeed an evaluation, it was never made public.
  • While several individuals argued that a “basic project value” was at risk, no decisive case was made, and no formal conclusion was presented. Others disagreed, and the matter was never resolved.
  • If the WMF had asked “what’s the problem you’re actually trying to solve?”, the question was not (as far as I can tell) posed in a public venue.
  • It’s unclear what “set of tools” was developed; but whatever the reference, any claim that it “worked out pretty well” should have been evaluated by a more robust process than listening for complaints. As volunteer Todd Allen said: “You haven’t heard more complaints, because the complaint was pointless the first time and took a massive effort to produce.”

When I requested clarification, Beaudette did not respond; but James Alexander, another WMF staff member, did step in. Alexander speculated that the software inspired by ACTRIAL was the Page Curation tool. He may have been correct, but no other staff member confirmed it in that email thread; and neither the page on Page Curation nor its parent page on the Article Creation Workflow makes any mention of the discussion among the 500+ volunteers that may or may not have inspired them.

The sequence of events around ACTRIAL has not been publicly documented by the Foundation – and accordingly, five years later, the leadership shown by the volunteers who guided the discussion remains unrecognized. The initial reactions of WMF staff, therefore, loom large in volunteers’ memory. As Rich Farmbrough opined: “The dismissal as a ‘we know better’ was a bad thing.”

Now that the Foundation seeks to explore stories of leadership in the Wikimedia movement, it would do well to look into the story of The Blade of the Northern Lights and Scottywong. These two played important roles in guiding the English Wikipedia community to define a problem, and to map out a viable (if unimplemented) way to explore a solution. Moreover, if any Wikimedia Foundation staff worked with the volunteer community to make something worthwhile out of those deliberations, their leadership merits recognition as well.


Note: This blog post does not cover the role of Erik Moeller, then the Deputy Director of the WMF. He made a couple of substantive comments in the discussion. Some of them speak to Beaudette’s points, and the extent to which the WMF engaged with the problem surfaced in the volunteer community. I will update this post or follow it up with more detail when I have time. -Pete

Posted in governance, history, leadership, Uncategorized, wiki, Wikimedia Foundation, Wikipedia | 9 Comments

What will be our Taj Mahal of text?

Script from the Koran adorns much of the Taj Mahal. Photo by Jean-Pierre Dalbéra, licensed CC BY 2.0

A slide flashed on the screen: the Taj Mahal. The audience was initially taken by its physical beauty. But upon closer inspection, we were told, one would find much of the text of the Koran chiseled into this architectural wonder.

What a brilliant way to preserve text against the ravages of time. Carve the most important words into the stone of a marvelous structure.

Text—its physical structure, its preservation, its manipulation, interpretation and cultural transformation—was the topic of the sixth Future of Text Symposium on the Google campus in late August.

Frode Hegland introduces the Symposium. Event photos by Dan Cook, licensed CC BY-SA 4.0.

For the second year, Wiki Strategies founder Pete Forsyth was among the speakers, who each had 10 minutes to describe a particular personal text passion, followed by five minutes for questions and discussion. Wiki Strategies co-sponsored the symposium.

The symposium exists because Frode Hegland cares deeply about text and has long sought a vehicle to bring together kindred spirits to probe text’s evolution. As he has said in explaining the need for such a gathering, “The written word is a fundamental unit of knowledge and as such is of universal importance.”

Hegland, a teacher, lecturer, software developer and author, hosted the first Future of Text symposium in London six years ago. He intentionally keeps the crowd small, intimate and engaged; expanding to a three-day conference in a hotel ballroom is not his idea of thought leadership.

Eileen Clegg presenting.

His co-organizer is Houria Iderkou—without whom, he says, the symposium would not be possible. Houria is an e-commerce entrepreneur who flawlessly manages the many details of hosting a symposium.

This year they again brought together leading thinkers in the various disciplines that are united by text. Among the two dozen presenters: Google hosts and innovators Vint Cerf and Peter Norvig; Ted Nelson (via Skype); Robert Scoble; Jane Yellowlees Douglas; Livia Polanyi; and Adam Hyde. The full roster can be more thoroughly appreciated here.

The passions unleashed and the intellectual exchanges that occurred that day in Mountain View paid tribute to text’s contributions to human culture. For one day, the written word was celebrated as the miraculous gift to mankind that it truly is. Frode admitted to being a bit discouraged at day’s end – not by the discussions that took place, but by the weight of responsibility mankind has to hand text along from generation to generation, and to use it to its full potential to support the human endeavor.

Jim Strahorn, Pete Forsyth, and Bonnie DeVarco dig into the technicalities of text.

It is, after all, the accounting and preservation of the human experience, as well as an essential ingredient of that experience. When one considers how much has been irrevocably lost of the experiences of people who had no written language to pass their tales on to those who came after them, Frode’s anxieties come sharply into focus. The responsibility is on us to preserve in text what we have learned, what we have seen and heard and touched and felt and smelled.

So perhaps we should ask: What will be our Taj Mahal?

Posted in events, governance, journalism, wiki, Wikipedia | Leave a comment