The controversy around Twitter’s latest public relations crisis hangs on a single, perhaps unanswerable question: what does it mean to be verified? The little blue badge originated in response to worries that celebrities on the platform were being impersonated. In time, though, it came to mean more. So when Twitter granted the badge this week to a white supremacist, it was little wonder that outrage followed.
The story of the Twitter verification controversy is the story of many Twitter controversies before it. A series of deferred decisions all but guaranteed the issue would eventually blow up in the company’s face, and when that moment finally arrived, Twitter could do little but apologize and promise to work on a solution.
The current drama began with the verification of Jason Kessler, a white supremacist who organized the Unite the Right rally in Charlottesville this August. Heather Heyer, a counter-protester, died during the rally, after which Kessler called her “a fat, disgusting communist,” and said her death was “payback time.” Twitter verified his account on Wednesday.
High-profile users denounced the move. Actor Michael Ian Black, a prolific Twitter user, threatened to quit the service, drawing more than 11,000 retweets.
Hey @jack: very active user, 2.1M followers here: this is disgusting. Verifying white supremacists reinforces the increasing belief that your site is a platform for hate speech. I don’t want to give up Twitter, but I may have to. Who do you value more, users like me or him? https://t.co/5ymcNfFvH0
— Michael Ian Black (@michaelianblack) November 9, 2017
On Thursday morning, Twitter responded. For now, the company said, it would no longer verify any accounts.
Verification was meant to authenticate identity & voice but it is interpreted as an endorsement or an indicator of importance. We recognize that we have created this confusion and need to resolve it. We have paused all general verifications while we work and will report back soon
— Twitter Support (@TwitterSupport) November 9, 2017
To understand how we got here, it’s helpful to remember why Twitter began to verify accounts in the first place. Co-founder Biz Stone announced the feature in a blog post in June 2009. At the time, Twitter faced a lawsuit from Tony La Russa, the then-manager of the St. Louis Cardinals. Someone was impersonating La Russa on Twitter, and he was incensed.
La Russa dropped the suit a month later, but it was enough to get Twitter’s attention. Going forward, Stone wrote, the company would verify the identities of high-profile users and add a badge to their account. “The experiment will begin with public officials, public agencies, famous artists, athletes, and other well known individuals at risk of impersonation,” Stone wrote. “We hope to verify more accounts in the future but due to the resources required, verification will begin only with a small set.”
Over time, though, Twitter began granting special privileges to verified users. They got analytics, which were otherwise available only to advertisers, showing them how their tweets performed. They got a tab showing only their interactions with other verified users — a ham-fisted way of dealing with the abuse that celebrities received from regular accounts. When Twitter introduced new keyword filters designed to combat abuse, verified users got them first.
Along the way, Twitter said very little about the criteria for verification. For years, there was no obvious way to apply. Either Twitter reached out to you, or you got to know someone at the company. And so the verification badge came to carry a sheen of authority: this person, the badge suggested, is a known quantity. This is an account that Twitter trusts.
Then, in January, Twitter removed the verification badge for the noxious provocateur Milo Yiannopoulos. At the time, Twitter told Yiannopoulos that he had violated Twitter’s rules in unspecified ways. But however offensive, Yiannopoulos’ account still belonged to the real-life Yiannopoulos — further suggesting that verification went beyond mere identification to mean something more.
So it’s understandable why the verification of Kessler — who had returned to Twitter after deleting his account in the wake of Charlottesville — caused so much offense. Both men had posted outrageous material on Twitter; the company banned Yiannopoulos permanently a week after unverifying him, after he inspired a campaign of abuse against actress Leslie Jones. So why would it verify Kessler after unverifying Yiannopoulos?
Twitter executives apologized for today’s move (making full use of the 280 characters that had only recently become available to them).
We should have stopped the current process at the beginning of the year. We knew it was busted as people confuse ID verification with endorsement. Have to fix the system, pausing until we do. https://t.co/HSLbJOG2AN
— Ed Ho (@mrdonut) November 9, 2017
People “confuse” verification with endorsement, of course, because Twitter had encouraged them to. By deferring the decision about what verification really meant, Twitter ensured the issue would eventually explode.
Twitter has been here before. It faced a similarly difficult problem in deciding what constituted abuse — only to see, as with verification, that users were defining it for the company, and fleeing the platform as a result. By the time former CEO Dick Costolo admitted, in an internal memo, that “we suck at dealing with abuse,” millions of users had quit.
In each case, decisive action was called for — and arrived years too late. Recently, at the encouragement of CEO Jack Dorsey, Twitter has strived to be more open about its inner workings — disclosing, for example, that the person who suspended President Donald Trump’s Twitter account last week was an employee on their last day.
And yet that openness, while admirable, has only revealed Twitter’s internal decision-making processes to be lax and inconsistent. For now, Kessler’s Twitter account remains verified. What that means, exactly, is still anyone’s guess. As usual, Twitter users will just have to wait.