YouTube hosts a plethora of videos appealing to a wide range of audiences, but sometimes videos can be a little too accessible. Adult animations like “Happy Tree Friends” or “Don’t Hug Me I’m Scared” can appeal to kids with their bright colors or Muppet-like puppets, even though they aren’t appropriate for children. But it seems the video-sharing platform could be addressing complaints with an updated version of the YouTube Kids app that would be monitored by actual humans, according to a report from BuzzFeed News. Romper's request for comment from YouTube regarding reports of the update was not immediately returned.
The idea is that this new app would co-exist alongside YouTube Kids, the company’s original attempt at using an algorithm to filter content accessible to children, according to BuzzFeed News. The news outlet reported that it will give parents the option to turn off algorithmically suggested videos. Instead, kids wouldn't have access to any video that hasn’t been screened by a flesh-and-blood human. It makes sense; algorithms can be great, but a computer program can only be coded to read and flag so much.
The YouTube Kids app blocks basic searches for unsuitable videos; searching for any explicit content won’t turn up any results. In addition, parents have the option of removing the search button entirely, leaving only pre-selected videos available. But concerns about videos YouTube Kids suggested to children have been around for some time, as Tech Radar reported. Even though YouTube Kids is a decent attempt at filtering out harmful material, there have been cases of the app allegedly suggesting conspiracy videos to kids, as reported by Business Insider.
Other videos included claims that the world is flat, that the moon landing was faked, and that the planet is ruled by reptile-human hybrids, according to Business Insider. But YouTube Kids has been experiencing worse problems than conspiracy theories since its inception in 2015. Less than two months after the app’s release, in May 2015, the Campaign for a Commercial-Free Childhood, a coalition of children’s and consumer advocacy groups, complained to the Federal Trade Commission (FTC). Their concerns included content the group called “not only...disturbing for young children to view, but potentially harmful.”
By using popular cartoon characters, such as Spider-Man and Elsa, YouTubers were able to lure children into offensive videos. According to Polygon, the videos began normally, but soon showed princesses, superheroes, and other favorites participating in lewd or violent acts. Because an algorithm-based system doesn’t scan the entire video, YouTubers were able to skirt the system by using video names and thumbnails that the algorithm read as safe for kids.
Other issues the report lists, as reported by Polygon, include: explicit sexual language presented amid cartoon animation; a profanity-laced parody of the film Casino featuring Bert and Ernie from Sesame Street; graphic adult discussions about family violence, pornography, and child suicide; jokes about pedophilia and drug use; and modeling of unsafe behaviors.
This is where the reported app would step in. Instead of relying on easily fooled algorithms, all content would have to be filtered through real humans. A source reportedly told BuzzFeed News that the whitelisted version of the app could be released within the coming weeks. According to BuzzFeed News, YouTube didn’t deny plans for the app, but allegedly stated, “We are always working to update and improve YouTube kids, however we don’t comment on rumor or speculation.”
But YouTube CEO Susan Wojcicki has said the company is expanding its moderation corps to more than 10,000 contractors in 2018, according to Polygon, and they would focus on content that might violate YouTube’s policies. “Human reviewers remain essential to both removing content and training machine learning systems because human judgement is critical to making contextualized decisions on content,” she wrote in a December 2017 blog post.
The following statement was released by YouTube at the time to Polygon:
Content that misleads or endangers children is unacceptable to us. We have clear policies against these videos and we enforce them aggressively. We use a combination of machine learning, algorithms and community flagging to determine content in the YouTube Kids app. The YouTube team is made up of parents who care deeply about this, and are committed to making the app better every day.
Although YouTube has not yet confirmed the development of this new app, it seems that it would align with the company’s policies and standards. Hopefully, YouTube will continue responding to the problems with YouTube Kids, so parents can trust that their children won’t be introduced to troubling content on an app marketed toward kids.