OpenAI says it ignored the concerns of its expert testers when it rolled out an update to its flagship ChatGPT artificial intelligence model that made it excessively agreeable.

The company released an update to its GPT‑4o model on April 25 that made it “noticeably more sycophantic,” which it then rolled back three days later over safety concerns, OpenAI said in a May 2 postmortem blog post.

The ChatGPT maker said its new models undergo safety and behavior checks, and that its “internal experts spend significant time interacting with each new model before launch,” a process meant to catch issues missed by other tests.

During the latest model’s pre-launch review, OpenAI said that “some expert testers had indicated that the model’s behavior ‘felt’ slightly off” but that it decided to launch anyway “due to the positive signals from the users who tried out the model.”
source: https://cointelegraph.com/news/openai-ignored-experts-overly-agreeable-chatgpt-model-release?utm_source=rss_feed&utm_medium=rss&utm_campaign=rss_partner_inbound