OpenAI sets up new safety body in wake of staff departures
With the committee, OpenAI signals that it recognizes the continued concerns the industry and the general public have about AI, and is taking steps internally to monitor itself even as it aims to stay ahead of competitors.
“While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” it said in the blog post.
Pressure mounts on OpenAI
OpenAI’s unveiling of progress on its next version of GPT is a natural progression for the company as it aims to protect its market lead even as competition heats up. xAI, the company founded by Tesla CEO Elon Musk, recently announced a $6 billion fundraising effort at a $24 billion valuation as Musk aims to challenge the startup he once championed on AI and AGI. Meanwhile, Musk and OpenAI remain embroiled in a heated legal dispute.
OpenAI also faced controversy recently when it released a virtual assistant with a voice that some said sounded eerily similar to that of Hollywood actress Scarlett Johansson, even though she had declined the company’s repeated requests to use her voice. Johansson famously voiced an AI system with whom a character played by Joaquin Phoenix falls in love in the 2013 film “Her.”
“As the usage of generative AI increases, associated risks and security concerns are emerging,” observed Pareekh Jain, CEO of EIIRTrend & Pareekh Consulting. “The Scarlett Johansson incident has heightened OpenAI’s awareness of these risks.”
Securing AI can bolster its adoption
AI security also remains a priority for AI stakeholders at large, with initiatives forming at both the government and corporate levels to set guidelines for the technology’s future development before it evolves beyond human control.