Working to fix the YouTube hate speech problem

Last week The Times in London published a story concerning hate speech videos and the advertising surrounding them. The story by investigations editor Alexi Mostrous began:

Google is to be summoned before the government to explain why taxpayers are unwittingly funding extremists through advertising, The Times can reveal.

The Cabinet Office joined some of the world’s largest brands last night in pulling millions of pounds in marketing from YouTube after an investigation showed that rape apologists, anti-Semites and banned hate preachers were receiving payouts from publicly subsidised adverts on the internet company’s video platform.

David Duke, the American white nationalist, Michael Savage, a homophobic “shock-jock”, and Steven Anderson, a pastor who praised the killing of 49 people in a gay nightclub, all have videos variously carrying advertising from the Home Office, the Royal Navy, the Royal Air Force, Transport For London and the BBC.

Mr Anderson, who was banned from entering Britain last year after repeatedly calling homosexuals “sodomites, queers and faggots”, has YouTube videos with adverts for Channel 4, Visit Scotland, the Financial Conduct Authority (FCA), Argos, Honda, Sandals, The Guardian and Sainsbury’s.

At the end of the piece was Google’s response:

A Google spokeswoman said that the company had “strict guidelines” relating to advert placement and that in the vast majority of cases its policies “work as intended”. The company “doesn’t always get it right and sometimes ads appear where they should not,” she said, adding that it would make changes to policies and brand controls.

Since the publication of the story, many brands and advertisers have pulled their ad campaigns pending clarification from Google.

Mostrous tweeted an image of The Times editorial that went with the story.

This tweet led to an interesting conversation between Alexi, Benedict Evans of Andreessen Horowitz and, later, Rob Kniaz of Hoxton Ventures. The TL;DR of this discussion is as follows:

  1. Alexi: Google must remove hate speech from YouTube and [to quote from the editorial] there are no technical barriers to doing so.
  2. Benedict: ‘There are no technical barriers’ is gibberish & manually verifying billions of hours of content per day is impossible.
  3. Alexi: Google should be more pro-active and less reactive. It tends to react to flagged content rather than rooting out extreme content itself.
  4. Benedict: [it] would need speech recognition on every video. And scanning all videos for text. That’s not easy at all.
  5. Alexi: Why not start with 200 people and pro-actively examine content?
  6. Benedict: Your basis for claiming that a technical solution is easy is untrue.
  7. Rob: As an ex-Googler I can confirm it’s not easy.
  8. Benedict: There is a problem, Google should do more, but claiming it’s easy is wrong. You can’t use people to ‘edit billions of hours of video’ either.

There are two issues at play in the original story: the first is that such extreme videos are on YouTube at all; the second is that advertisements from premium brands are appearing adjacent to this type of content, allowing publishers of such content to make money from public and private sources of ads, often without the knowledge of the brands themselves.

I’m inclined to say that Alexi and Benedict are each both right and wrong, but for different reasons.

I spent many years at Storyful (which was acquired by The Times’ owner, News Corp) working with breaking news content on YouTube. My colleagues and I hunted for original, often graphic, content, first on YouTube’s website and then via its API, sorting what we found into real/original footage, re-uploads and copies, copyrighted content, or what is now often referred to as ‘fake’ content. This would often involve millions of API calls to find and verify the content we needed.

Of course, YouTube does already employ and contract people to deal with content: the YouTube Policy team. A quick look at LinkedIn suggests approximately 400 people working on this problem at YouTube, though Google generally does not share the actual number of staff on this team.

I’m going to start with this problem as it was articulated by Benedict: that it is technically extremely difficult, or impossible, to vet billions of hours of video per day.

This is undoubtedly true, but I think it’s also a bit of a straw man argument.

The first question is: do billions of hours of video need to be vetted, algorithmically or manually, to help solve this problem? I’d say no.

YouTube is built on two things: content and the accounts that upload that content. If you want to build a system to vet hate speech, for example, you start with the accounts that create the content, not the content itself. From an algorithmic standpoint this is the lower-hanging fruit. And if you want to start with even lower-hanging fruit, you start with the known creators of extremist content.
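
To make that concrete, here is a toy sketch of what ‘start with the accounts’ could look like: a review queue ordered by an account-level risk score, so uploads from the riskiest accounts are looked at first instead of every upload being treated equally. This is entirely my own illustration, not anything YouTube is known to run.

```python
import heapq
from dataclasses import dataclass, field
from typing import Optional

@dataclass(order=True)
class PendingUpload:
    # heapq pops the smallest item, so we store the negated risk score:
    # the upload from the riskiest account comes out first.
    sort_key: float
    video_id: str = field(compare=False)
    channel_id: str = field(compare=False)

review_queue: list = []

def enqueue_upload(video_id: str, channel_id: str, account_risk: float) -> None:
    """Queue a new upload for vetting; account_risk runs from 0.0 (trusted) to 1.0 (known bad actor)."""
    heapq.heappush(review_queue, PendingUpload(-account_risk, video_id, channel_id))

def next_for_review() -> Optional[PendingUpload]:
    """Human or automated reviewers pull the highest-risk upload next."""
    return heapq.heappop(review_queue) if review_queue else None
```

The risk score itself could be built from exactly the kind of account-level signals discussed below.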

In order to create a YouTube account, you need to create a Google account. This usually involves giving a real name/username and a real phone number to confirm it (though this is not obligatory). This is the starting point, and here are some questions to ask:

  1. What accounts are uploading content that is being repeatedly flagged as hateful or in breach of YouTube policy?
  2. Before even getting into the possible whack-a-mole problem of sock puppet accounts, who are the repeat offenders and what content are they uploading? Are there other websites/social links to those same users?
  3. What data does YouTube collect at the point of account creation? Is the barrier too high or too low for account creation?
  4. When IP addresses are collected during the account creation process, what happens with them? Yes, some users will use VPNs and the like, but there are several steps between user x creating an account, uploading a video, and that video being removed. One could imagine a lot being done at each of those steps.
  5. If a video is flagged and removed and then re-uploaded, is it caught automatically and flagged using the YouTube CMS? (YouTube’s backend systems already detect duplicate content using a combination of audio and video matching; a toy sketch of this kind of matching follows this list.)
  6. What other data does Google have outside of YouTube (given that they operate as separate commercial entities)? If a hate-speech website that has already been flagged as associated with questionable videos is using Google Analytics, for example, are those signals recognised? Is there a flag to say: if website x embeds any video, it’s an automatic flag on YouTube’s CMS as likely to need further vetting?
  7. Has the YouTube Policy team expanded in line, on any basis, with the explosion in uploaded content, now measured in billions of hours of video per day? I would guess not. If not, how can it be expected to perform the same function as, say, five years ago?
  8. If technical solutions are being employed to support the policy team, as I expect they are, are they enough? Recent evidence suggests no.
  9. Clearly spam accounts are an enormous issue at YouTube, as they also create a server cost for a) hosting the videos and b) playing them. Understanding the difference between spam and non-spam accounts is enormously difficult. But that doesn’t mean that you can’t create simple filters to start you on the road to vetting what is likely to be extreme content.
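
On point 5 above: YouTube’s real audio-and-video matching is proprietary and far more robust than anything shown here, but the fingerprint-and-compare idea can be illustrated with a perceptual hash over a few sampled frames. The library choice, thresholds and the assumption that frames are sampled at matching offsets are all mine.

```python
# Toy re-upload detection: hash a handful of frames sampled from the new video
# and compare them against fingerprints of videos that were already removed.
# Assumes frames have been extracted beforehand (e.g. one every 10 seconds)
# and sampled at the same offsets for both videos.
from PIL import Image
import imagehash

def fingerprint(frame_paths: list) -> list:
    """Perceptual hashes for a list of frame image files."""
    return [imagehash.phash(Image.open(path)) for path in frame_paths]

def is_probable_reupload(candidate, removed_fingerprints, max_distance: int = 10) -> bool:
    """True if most of the candidate's frames closely match any removed video."""
    for removed in removed_fingerprints:
        close = sum(1 for a, b in zip(candidate, removed) if a - b <= max_distance)
        if close >= 0.8 * min(len(candidate), len(removed)):
            return True
    return False
```

A flagged-and-removed video only needs to be fingerprinted once; every later upload from a new or suspect account can then be checked against that store cheaply, long before anyone has to watch it.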

Therefore, here is a possible checklist upon upload of a new piece of content by either a) a brand new account or b) an account with which there were issues before, in whatever order is most logical (a rough code sketch of this checklist follows the list):

  1. Is the account from a brand new user who has never posted to YouTube before, or is it a new account created by someone who already has other accounts or has been banned? (Plus some magic sauce around the browser/OS/IP address etc. used to create the account.)
  2. Has the account been around for a while? Has the account uploaded flagged content before?
  3. If the account has been around for a while, have algorithms been used to mine it: a) for every comment ever posted beneath its videos that contains hate-speech keywords or keyphrases? b) Has NLP been run across all comments to gauge the video content? c) Have algorithms been employed to score accounts based on this easy-to-obtain text content? d) Has social network analysis (SNA) been used to graph the relationships of the commenters who surround extremist content, and are they uploaders themselves?
  4. Does a new video posted by a freshly made account contain flagged words or phrases, not in the video itself but in the video title, the video description or the earliest comments associated with the video? Are there links from the account or the video to sites that are flagged? Was the HTML of the website in question mined for keywords too (using Google’s crawlers, for example)?
  5. If the video carries a bullshit title and a nonsense description but the video itself is questionable, how do you detect it? Are comments on or off? Where was the video embedded, if anywhere? Is it a known website? If it wasn’t embedded, what can be learned from the video via other signals (before getting into audio analysis)?
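
Here is the rough code sketch of that checklist. Every field, term list and threshold is hypothetical; the point is that all of these checks run on cheap metadata (account history, title, description, early comments and links), not on billions of hours of video.

```python
from dataclasses import dataclass, field

# Placeholder lists; in reality these would be maintained, multilingual and large.
FLAGGED_TERMS = {"example_slur", "example_hate_phrase"}
KNOWN_FLAGGED_SITES = {"example-flagged-site.com"}

@dataclass
class AccountSignals:               # all fields are hypothetical illustrations
    age_days: int
    linked_to_banned_account: bool  # the 'magic sauce' on browser/OS/IP signals
    prior_flagged_uploads: int
    comment_toxicity: float         # e.g. 0-1 output of an NLP model over past comments

@dataclass
class UploadSignals:
    title: str
    description: str
    early_comments: list = field(default_factory=list)
    outbound_links: list = field(default_factory=list)

def triage_score(account: AccountSignals, video: UploadSignals) -> float:
    """Return a 0-1 score; above some threshold the upload is held for review."""
    score = 0.0
    if account.age_days < 7:                      # brand new account
        score += 0.2
    if account.linked_to_banned_account:          # likely sock puppet of a banned user
        score += 0.4
    score += min(account.prior_flagged_uploads, 5) * 0.1
    score += 0.2 * account.comment_toxicity
    # Cheap text checks on the new upload's metadata, not the video itself.
    text = " ".join([video.title, video.description, *video.early_comments]).lower()
    if any(term in text for term in FLAGGED_TERMS):
        score += 0.3
    if any(link in KNOWN_FLAGGED_SITES for link in video.outbound_links):
        score += 0.3
    return min(score, 1.0)
```

A score like this is exactly the kind of account/upload risk that the review queue sketched earlier would consume.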

I’m sure the smart people at YouTube have thought of all of these things. However, one of the perennial issues affecting YouTube is its relationship with Google: they are not the same thing, and it can be hard to get the two on the same page despite their being part of the same company.

It is also clear that the problem is not necessarily that every video uploaded by every person has to be checked, as Benedict seems to argue. What can happen at a technical level is outlined above, and more besides.

At Storyful we had built enough intelligence on top of YouTube to know which known accounts were likely to upload content of a real-world event before they even did so. We’d also know whether that content was likely to be graphic in nature before watching it. And we’d also have some idea of the reliability of the account.

And if the account was new to us, we’d have a fair idea whether it was a sockpuppet account, a legitimate account, or a re-uploader, using signals available through YouTube’s own API (e.g. account creation date, related accounts, number of videos already posted).
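
For a sense of how little is needed to get started, here is a minimal sketch of pulling basic account-level signals from today’s YouTube Data API v3 using google-api-python-client. Storyful’s tooling was richer than this, and the API key, function name and returned fields chosen here are my own placeholders.

```python
from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"                      # placeholder
youtube = build("youtube", "v3", developerKey=API_KEY)

def channel_signals(channel_id: str) -> dict:
    """Creation date and basic statistics for one channel."""
    resp = youtube.channels().list(part="snippet,statistics", id=channel_id).execute()
    item = resp["items"][0]
    stats = item["statistics"]
    return {
        "created_at": item["snippet"]["publishedAt"],
        "video_count": int(stats.get("videoCount", 0)),
        "view_count": int(stats.get("viewCount", 0)),
        # subscriberCount can be hidden by the channel owner
        "subscriber_count": int(stats.get("subscriberCount", 0)),
    }
```

A very young account with few videos, whose new upload also trips the kind of text checks sketched above, is precisely the combination of signals worth routing to a human.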

And that was five years ago.

Alexi and Benedict are both right that YouTube could be doing more. Alexi is right that they could be doing a helluva lot more. Benedict is right that it’s not technically easy to mine billions of hours of video in real time, but that’s not necessarily the problem either.

The problem is this: YouTube has a policy on what videos can and cannot go on its platform. It has likely erred on the side of letting more content through than it should. It should reconsider.

And as for the other problem, of ads being displayed next to extremist content: brands want to be assured that their ads are not associated with hate speech. By working to solve the problem above, YouTube also benefits, because it can assure brands to a greater degree than before that their ads are not showing next to such content (a YouTube CMS equivalent for where ads show).

