- By Ashish Singh
- Wed, 07 Feb 2024 01:55 PM (IST)
- Source: JND
OpenAI, the maker of ChatGPT, has started rolling out its latest feature to make AI-created images more transparent and credible. The company has introduced new watermarks for images created by DALL-E 3 amid rising concerns over deepfakes, which can be seen as a significant step towards user safety and trustworthy information.
The company hopes the watermarks will make it easier for consumers to determine which images are authentic and credible. The approach is backed by the Coalition for Content Provenance and Authenticity (C2PA), and viewers should find it easier to tell whether an image was created with artificial intelligence. There are two types of watermarks: an invisible metadata component and a visible CR symbol in the left corner of the image.
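For readers who want to inspect an image themselves, the Content Authenticity Initiative publishes an open-source command-line utility, c2patool, that can display any Content Credentials embedded in a file. The snippet below is a minimal sketch, assuming c2patool is installed and on the PATH and behaves as documented (printing the embedded manifest as JSON and exiting with an error when no credentials are found); the file name sunlit_garden.png is only an illustrative example.

```python
import json
import subprocess
import sys


def check_content_credentials(image_path: str) -> None:
    """Run c2patool on an image and report whether C2PA metadata is present."""
    # Assumption: c2patool prints the embedded manifest as JSON on success
    # and returns a non-zero exit code when no C2PA data is found.
    result = subprocess.run(
        ["c2patool", image_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(f"No Content Credentials found in {image_path}")
        return

    manifest = json.loads(result.stdout)
    print(f"Content Credentials found in {image_path}:")
    print(json.dumps(manifest, indent=2))


if __name__ == "__main__":
    # Example file name; replace with the image you want to inspect.
    check_content_credentials(sys.argv[1] if len(sys.argv) > 1 else "sunlit_garden.png")
```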
The watermarks are rolling out on the ChatGPT website and through the DALL-E 3 API, and will also reach mobile users. The company says the change will not affect latency or the quality of generated images, and that any increase in image file size will be small.
The C2PA, an alliance of industry heavyweights such as Adobe and Microsoft, champions digital content authenticity through Content Credentials watermarks and supports efforts like this one. The initiative is about more than transparency; its aim is to shape a web where human-made content and AI-generated material are clearly distinguishable, making it easier to verify genuine online information.
Identifying AI-generated photos and videos remains difficult, however, because social media platforms or other users can easily strip the metadata. It will be interesting to see how AI platforms prevent the spread of false information as AI-generated digital content continues to grow.