Blog
Jul 24, 2024

How To Properly Label AI-Generated Content?

It feels as if artificial intelligence has been with us forever. In reality, though, AI only became a genuinely practical tool a couple of years ago, and it is still not entirely clear where this technology will lead: will it help us, or is it time to go looking for John Connor? Okay, okay, let's not go there. What matters here is this: YouTube has introduced strict rules for labeling content created with artificial intelligence. So if you use AI tools to create images and videos, it is worth learning how to label such materials properly so the platform does not take down your videos. Let's go!

What content should be labeled?

YouTube is actually fairly permissive about generated content, as long as it does not mislead the viewer. And as you well know, the platform does not like bad boys and girls who break the rules over and over again. It explicitly warns that if a creator repeatedly neglects AI disclosure labels, their videos may be deleted or the channel may be removed from the YouTube Partner Program entirely. And we don't want that under any circumstances.

So creators are required to disclose content that:

  • Shows a real person allegedly saying or doing something that they did not actually say or do.
  • Contains modified images of a real place or event.
  • Realistically depicts an event that did not actually happen.

This may include content that has been modified or generated in whole or in part using audio, video, or image creation or editing tools.

How to label a video?

For now, this can only be done when uploading videos from a computer, but YouTube promises that this feature will soon be available on other devices too. So, if your video contains generated content that needs to be reported:

  • Go to YouTube Studio.
  • Start uploading the video.
  • In the “Details” section, find the “Altered Content” option, check the “Yes” box and answer the questions provided.
  • Then, simply provide the remaining details about the video.

Well, that's it: no complications and no future problems with the platform.
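If you upload videos programmatically rather than through YouTube Studio, the YouTube Data API exposes the same disclosure as a boolean: at the time of writing, the `videos.insert` request body accepts `status.containsSyntheticMedia`. Below is a minimal sketch in Python of building such a request body; the helper name `build_video_resource` is ours for illustration, and you should verify the field name against the current API reference before relying on it.

```python
def build_video_resource(title, description, category_id="22",
                         contains_synthetic_media=True):
    """Build a request body for youtube.videos().insert().

    Setting status.containsSyntheticMedia to True is the API
    counterpart of checking "Yes" under "Altered content" in Studio.
    """
    return {
        "snippet": {
            "title": title,
            "description": description,
            "categoryId": category_id,
        },
        "status": {
            "privacyStatus": "private",
            # Discloses realistic altered or generated content to viewers.
            "containsSyntheticMedia": contains_synthetic_media,
        },
    }


body = build_video_resource(
    "AI-narrated travel vlog",
    "Voice-over generated with an AI voice clone.",
)
print(body["status"]["containsSyntheticMedia"])  # True
```

With google-api-python-client, this dictionary would then be passed as `body=` to `youtube.videos().insert(part="snippet,status", ...)` along with the media upload itself.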

If you create Shorts videos using YouTube’s AI tools, such as Dream Track and Dream Screen, then you don’t need to report altered content; a notification about the use of AI will be added automatically.

In addition, the platform itself will check the “Yes” checkbox in the “Altered Content” section if the title of your video indicates that it uses generated content.

Why is it important to notify about such content in the first place?

  1. Ethics and transparency. Let’s be honest: it’s not good to deceive viewers. Users have the right to know that the content they are watching is computer-generated. This will definitely help maintain trust between you and the audience.
  2. Preventing misinformation. As you probably understand, this can have serious consequences, including manipulation of public opinion and undermining trust in real events.
  3. Reputation protection. You know how easy it is to ruin someone’s reputation these days: start a rumor, upload a fake video, and it instantly spreads all over the Internet. People only begin checking whether it was generated later, when the situation may already be out of control. We strongly condemn such practices.

Examples of content that doesn’t need to be reported

Let’s go over the basics one more time so you understand how everything works. If you still have questions, you can always clarify certain points in Google Help.

Here are some examples of content, alterations, and video creation and editing tools that do not need to be reported:

Unrealistic content

  • Donald Trump fines Justin Timberlake for drunk driving while Deadpool rides on Wolverine across the rainbow, throwing Skittles (looks like a ready script for the new movie).
  • A person is depicted in outer space using a green screen.

Minor alterations

  • Using color correction or filters.
  • Using filters with special effects such as background blur or vintage effects.
  • Using assistive tools, such as generative AI tools, to create or enhance a story, script, icon, video title, or infographic.
  • Adding subtitles.
  • Increasing video sharpness or resolution, and restoring video, audio, or voice recordings.

Keep in mind that this is not a complete list, just basic examples.

Examples of content that needs to be reported

  • Artificially generated music (including the use of Creator Music).
  • Cloning a person’s voice for use in voice-over.
  • Artificially generated footage that complements footage of a real-life location, such as images of a surfer in Maui for a travel promotional video.
  • An artificially generated realistic video of a match between two real professional tennis players.
  • Content that makes it seem like a person is giving advice that they have not actually given.
  • An altered audio recording in which a popular singer allegedly misses the notes during a concert.
  • A realistic depiction of a hurricane or other natural disaster supposedly approaching a real-life city.
  • Content that makes it appear as if a real person has been arrested or sent to prison.

Conclusion

Well, as you can see, labeling AI-generated content on YouTube is quick and easy, and it will help you avoid both problems with the platform and misunderstandings with your audience. Good luck making new videos!

By Andrew Masenzov
