What is YouTube Kids?
Users must be at least 13 years old (or the applicable age in their country) to use YouTube independently. Every month we terminate thousands of accounts of people under 13 when we become aware of them. In 2015, we created YouTube Kids as a way for kids to explore their interests and curiosity while providing parents more tools to control and customize the experience for their families. We recommend parents use YouTube Kids if they plan to allow kids under 13 to watch independently.
What child safety measures exist on the main YouTube app?
We heavily invest in the technology and teams that help provide kids and families with the best protection possible on the main YouTube app as well as within YouTube Kids. Across both platforms, we have clear Child Safety policies to ensure that harmful content that endangers the emotional and physical well-being of minors does not appear on YouTube. We continue to evolve our platform to ensure we’re creating an appropriate environment for family content on YouTube, and make improvements to our product and policies that reflect the input of outside experts and internal specialists where relevant.
Is YouTube collecting children’s data to serve them ads?
We treat data from anyone watching content identified as 'made for kids' on YouTube as coming from a child, regardless of the age of the user. This means that on videos 'made for kids', we limit data collection and use, and as a result, we need to restrict or disable some product features. For example, we do not serve personalized ads on content 'made for kids', and some features are not available on these videos, like comments and notifications. All Creators are required to indicate whether or not their content is 'made for kids'.
Ads on YouTube Kids are also contextual, not personalized, which means the ad is matched to the video based on the video's content, not on the specific user watching it. YouTube Kids only contains ads that are approved as family-friendly, and all ads undergo a rigorous review process for compliance with YouTube Kids Advertising Policies.
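The distinction above, matching an ad to the video rather than to the viewer, can be sketched in a toy example. Everything below (the function name, the topic tags, the `family_friendly` flag) is hypothetical and purely illustrative; it is not YouTube's actual system or API.

```python
# Toy illustration of contextual (not personalized) ad selection:
# the candidate ad is chosen from the video's own topic tags alone.
# Note there is no user profile anywhere in the inputs.

def pick_contextual_ad(video_topics, ad_inventory):
    """Return the approved ad whose topics best overlap the video's topics."""
    best_ad, best_overlap = None, 0
    for ad in ad_inventory:
        if not ad["family_friendly"]:
            continue  # only ads approved as family-friendly are eligible
        overlap = len(set(ad["topics"]) & set(video_topics))
        if overlap > best_overlap:
            best_ad, best_overlap = ad, overlap
    return best_ad

ads = [
    {"name": "toy-blocks", "topics": ["toys", "crafts"], "family_friendly": True},
    {"name": "sports-drink", "topics": ["sports"], "family_friendly": False},
]
print(pick_contextual_ad(["crafts", "drawing"], ads)["name"])  # toy-blocks
```

The key design point is in the signature: a personalized system would also take a viewer profile as input, while a contextual one deliberately cannot, because it never receives any user data to match against.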
How is YouTube protecting kids who post content as a Creator or feature in videos by other Creators?
As noted in our Terms of Service, you must be at least 13 years old (or the applicable age in your country) to use YouTube, including posting content as a Creator.
We also provide best practices for child safety and prominent prompts for Creators who feature kids in their videos to help them understand their legal obligations. In addition to securing consent, it is their responsibility to comply with all the laws, rules, and regulations applicable to kids' appearance in their content, including required permits, wages and revenue sharing, school and education, and working environment and working hours.
How does YouTube protect children who appear in risky situations in videos?
We have always had clear policies against videos, playlists, thumbnails, and comments on YouTube that sexualize or exploit children. We use machine learning systems to proactively detect violations of these policies, and human reviewers around the world quickly remove violations detected by our systems or flagged by users. We immediately terminate the accounts of those who seek to sexualize or exploit minors, and we report illegal activities to the National Center for Missing and Exploited Children (NCMEC), which liaises with global law enforcement agencies.
While some content featuring minors may not violate our policies, we recognize that these minors could still be at risk of online or offline exploitation. This is why we take an extra-cautious approach to enforcement. Our machine learning systems help proactively identify videos that may put minors at risk and apply our protections at scale, such as restricting live features, disabling comments, and limiting recommendations.
We also work with the industry, including technology companies and NGOs, by offering our industry-leading machine learning technology for combating CSAI (Child Sexual Abuse Imagery) content online. This technology allows us to identify known CSAI content in a sea of innocent content. When a match of CSAI content is found, it is flagged to partners, who can then responsibly report it in accordance with local laws and regulations.