How Sora will filter unsafe content

sadiksojib35
Posts: 419
Joined: Thu Jan 02, 2025 6:47 am


Post by sadiksojib35 »

ChatGPT filters unwanted content in several stages. At the first level, every prompt is automatically checked for prohibited keywords and topics. If a request violates the usage policy, the response is a "Content violation" message, meaning the request will not be processed.

At the second level, the generated image or video itself is analyzed, and the same will apply to Sora. The neural network's output passes through a separate module that examines each frame, and then the content filter kicks in. If an image falls into the 18+ category, it is blocked, even if the original request was harmless.
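For illustration only, here is a minimal Python sketch of that two-stage pipeline. The keyword list, the classify_rating() stub, and the message strings are my own assumptions, not OpenAI's actual implementation or API.

Code:

# Hypothetical sketch of the two-stage filtering described above.
BANNED_KEYWORDS = {"violence", "gore", "explicit"}  # placeholder list, not OpenAI's


def check_prompt(prompt: str) -> bool:
    """Stage 1: reject prompts that contain prohibited keywords or topics."""
    lowered = prompt.lower()
    return not any(word in lowered for word in BANNED_KEYWORDS)


def classify_rating(frame) -> str:
    """Stage 2 stand-in: an image classifier that returns an age rating.
    Stubbed out here; a real system would run a vision model on the frame."""
    return "all-ages"


def moderate(prompt: str, generated_frames) -> str:
    if not check_prompt(prompt):
        return "Content violation"  # the request is never processed
    for frame in generated_frames:
        if classify_rating(frame) == "18+":
            # the output filter can fire even when the prompt was harmless
            return "Blocked after generation"
    return "Approved"


print(moderate("a cat playing piano", generated_frames=[object()]))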


Why Hands, Text, and Long Videos Are a Challenge for Sora
The longer the clip, the higher the chance of artifacts and the longer the generation time. Technically, clips could be of any length, but Sora's one-minute limit makes sense: there is little point in generating a ten-minute clip in which inconsistencies between individual scenes and visible editing seams appear every minute.

Every new scene and every new frame is another chance for artifacts to appear. A more promising approach is to generate short clips first and then edit them together yourself.

Beyond the best cherry-picked frames, generated video will struggle with rendering and animating hands. Neural networks do not understand what human fingers look like, how they move, or how they grasp objects; to the model they are just a set of pixels. Depicting a friendly handshake or hands clasped together will be difficult for a neural network.

This is how the neural network now generates hands

Text inside images is another weak point of neural networks, regardless of the language. AI models perceive text as a picture made of lines and strokes that carry no meaning.

Judging by the presentation video of frames already generated by the neural network, its overall capabilities are impressive. Compared to other existing generative models, this is the best quality achieved so far. At the same time, the shortcomings typical of AI-generated images remain unresolved: a well-rendered foreground combined with noticeable artifacts in the background, which AI models usually pay less attention to.