Source URL: https://slashdot.org/story/25/01/08/2252248/microsoft-rolls-back-its-bing-image-creator-model-after-users-complain-of-degraded-quality?utm_source=rss1.0mainlinkanon&utm_medium=feed
Source: Slashdot
Title: Microsoft Rolls Back Its Bing Image Creator Model After Users Complain of Degraded Quality
Feedly Summary:
AI Summary and Description: Yes
Summary: Microsoft has reverted its Bing Image Creator to an earlier version of OpenAI’s DALL-E 3 after users complained of significantly degraded image quality. The upgrade to the new model (PR16) was intended to improve performance but instead drew widespread user dissatisfaction, prompting a temporary rollback to the last stable version (PR13).
Detailed Description: Microsoft’s handling of the Bing Image Creator rollback illustrates how operational efficiency and user satisfaction interact in AI deployments, a relevant consideration for professionals in AI, cloud, and infrastructure security.
– Microsoft upgraded Bing Image Creator to a newer AI model powered by OpenAI’s DALL-E 3, expected to improve both the speed and the quality of generated images.
– The new model (codenamed PR16) promised faster image generation and superior quality, but it failed to meet user expectations.
– Users reported issues such as cartoonish and “lifeless” images, leading to widespread dissatisfaction expressed on social media platforms like X (formerly Twitter) and Reddit.
– Feedback reflected a perceived decline in service quality, with some users abandoning Bing in favor of alternatives like ChatGPT.
– In response to the backlash, Microsoft acknowledged the problems and announced a rollback to the previous version (PR13).
– The fix will take time: the deployment process is noted to be slow, with the rollback expected to take 2-3 weeks to fully reach users.
This incident underscores the importance of user feedback in AI deployments and the need for quality assurance when integrating new models into production services. For security and compliance professionals, it is a reminder that operational failures in AI tools can erode user trust and undermine confidence in AI development practices.