YouTube Says Computers Are Catching Problem Videos

Figuring out how to remove unwanted videos — and balancing that with free speech — is a major challenge for the future of YouTube, said Eileen Donahoe, executive director at Stanford University’s Global Digital Policy Incubator.

“It’s basically free expression on one side and the quality of discourse that’s beneficial to society on the other side,” Ms. Donahoe said. “It’s a hard problem to solve.”

YouTube declined to disclose whether the number of videos it had removed had increased from the previous quarter or what percentage of its total uploads those 8.28 million videos represented. But the company said the takedowns represented “a fraction of a percent” of YouTube’s total views during the quarter.

Photo: Google said last year it would hire 10,000 people to address policy violations across its platforms. YouTube said on Monday that it had filled a majority of the jobs allotted to it. (Credit: Roger Kisby for The New York Times)

Betting on improvements in artificial intelligence is a common Silicon Valley approach to dealing with problematic content; Facebook has also said it is counting on A.I. tools to detect fake accounts and fake news on its platform. But critics have warned against depending too heavily on computers to replace human judgment.

It is not easy for a machine to tell the difference between, for example, a video of a real shooting and a scene from a movie. And some videos slip through the cracks, with embarrassing results. Last year, parents complained that violent or provocative videos were finding their way to YouTube Kids, an app that is supposed to contain only child-friendly content that has automatically been filtered from the main YouTube site.

YouTube has contended that the volume of videos uploaded to the site is too large a challenge to rely on human monitors alone.

Still, in December, Google said it was hiring 10,000 people in 2018 to address policy violations across its platforms. In a blog post on Monday, YouTube said it had filled the majority of the jobs allotted to it, hiring specialists with expertise in violent extremism, counterterrorism and human rights, and expanding its regional teams. It was not clear what YouTube’s final share of the total would be.

Even with those hires, YouTube said, three-quarters of all videos flagged by computers had been removed before anyone had a chance to watch them.

The company’s machines can detect when a person tries to upload a video that has already been taken down and will prevent that video from reappearing on the site. And for some videos, such as those containing nudity or misleading content, YouTube said its computer systems were reliable enough to delete the video without a human reviewing the decision.
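The re-upload blocking described above is typically built on content fingerprinting: a fingerprint of each removed video is stored, and new uploads are checked against that index before they are published. The sketch below, in Python, is a simplified illustration of that idea rather than YouTube’s actual system; the function names are hypothetical, and the use of an exact SHA-256 digest is an assumption (a production system would use perceptual fingerprints that survive re-encoding and trimming).

import hashlib

# Fingerprints of videos that have already been taken down.
# In practice this would be a perceptual-fingerprint index, not exact hashes.
removed_fingerprints = set()

def fingerprint(video_bytes):
    """Return a content fingerprint (an exact SHA-256 digest, for illustration)."""
    return hashlib.sha256(video_bytes).hexdigest()

def record_takedown(video_bytes):
    """Remember a removed video so identical re-uploads can be blocked."""
    removed_fingerprints.add(fingerprint(video_bytes))

def allow_upload(video_bytes):
    """Reject an upload whose fingerprint matches a previously removed video."""
    return fingerprint(video_bytes) not in removed_fingerprints

# Example: once a clip is taken down, the same file cannot reappear.
clip = b"...raw video bytes..."
record_takedown(clip)
assert not allow_upload(clip)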

The company said its machines are also getting better at spotting violent extremist videos, which tend to be harder to identify and have fairly small audiences.

At the start of 2017, before YouTube introduced so-called machine-learning technology to help computers identify videos associated with violent extremists, 8 percent of videos flagged and removed for that kind of content had fewer than 10 views. In the first quarter of 2018, the company said, more than half of the videos flagged and removed for violent extremism had fewer than 10 views.

Even so, users still play a meaningful role in identifying problematic content. The top three reasons users flagged videos during the quarter involved content they considered sexual, misleading or spam, and hateful or abusive.

YouTube said users had raised 30 million flags on roughly 9.3 million videos during the quarter. In total, 1.5 million videos were removed after first being flagged by users.
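Those figures work out to an average of roughly three flags per flagged video, and to about one in six user-flagged videos ultimately being taken down.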

By DAISUKE WAKABAYASHI

https://www.nytimes.com/2018/04/23/technology/youtube-video-removal.html
