
Facebook's parent company Meta announced substantial changes to its rules on digitally created and altered media on Friday, ahead of US elections that will test the firm's ability to police deceptive content produced by rapidly advancing artificial intelligence technologies.

In a blog post, Vice President of Content Policy Monika Bickert said that starting in May, the social media giant will apply "Made with AI" labels to videos, images, and audio posted on its platforms, expanding an earlier policy that addressed only a narrow slice of doctored content.

Bickert said Meta will also apply separate, more prominent labels to digitally altered media that poses a "particularly high risk of materially deceiving the public on a matter of importance," regardless of whether the content was created using artificial intelligence (AI) or other techniques.


The new approach will change how the company handles manipulated content, shifting from one focused on removing a limited set of posts to one that keeps the material up while informing viewers about how it was made.

Meta previously announced a plan to detect images created with other companies' generative AI tools by using invisible markers embedded in the files, although it did not give a start date at the time.

A company spokesperson told Reuters that the new labelling approach would apply to content posted on Meta's Facebook, Instagram, and Threads services. Different rules apply to its other services, such as WhatsApp and Quest virtual reality headsets.


Meta will begin applying the more prominent "high-risk" labels immediately, the spokesperson said. The changes come months ahead of the November presidential election in the United States, which industry experts fear could be influenced by emerging generative AI technologies. Political campaigns have already begun deploying AI tools in countries such as Indonesia, pushing beyond guidelines issued by providers like Meta and generative AI market leader OpenAI.

Meta's oversight board called the company's existing rules on manipulated media "incoherent" in February, after reviewing a Facebook video of U.S. President Joe Biden that used doctored footage to falsely suggest he had acted inappropriately.

The video was permitted to stay up because Meta's existing "manipulated media" policy bars misleadingly altered videos only if they were produced by artificial intelligence or if they make people appear to say words they never actually said.

The board said the policy should also apply to non-AI content, which is "not necessarily any less misleading" than AI-generated content, as well as to audio-only content and to videos depicting people doing things they never actually did.

(With Agency Inputs)
