OpenAI Launches Collective Alignment Team to Crowdsource Governance for AI Models
In a move towards transparency and inclusivity, OpenAI has unveiled its Collective Alignment team, dedicated to integrating public input into the development of its AI models. The company aims to align its future AI models with human values by actively involving the public in shaping the behavior of its products and services.
The newly formed Collective Alignment team, composed of researchers and engineers, will focus on building a system for collecting public suggestions on model behavior and encoding them into OpenAI’s products and services. OpenAI plans to work with external advisors and grant teams, and to run pilots that incorporate prototypes for steering the behavior of its models.
This initiative extends OpenAI’s public grant program, launched in May, which funded experiments exploring a “democratic process” for determining the rules AI systems should follow. The program supported individuals, teams, and organizations in developing proofs of concept addressing questions about AI governance and guardrails.
In a blog post, OpenAI highlighted the diverse range of projects funded through the grant program, covering topics such as video chat interfaces, crowdsourced audits of AI models, and methods to map beliefs for fine-tuning model behavior. All the code used by grant recipients has been made public, along with concise summaries and key takeaways from each proposal.
While OpenAI emphasizes that the program is separate from its commercial interests, some skepticism remains, particularly in light of CEO Sam Altman’s criticism of AI regulation in the EU. Altman, along with OpenAI’s president and chief scientist, has argued that the pace of AI innovation outstrips the capacity of existing regulatory authorities, making a crowdsourced approach necessary.
Despite facing scrutiny from regulators, including a U.K. probe into its relationship with Microsoft, OpenAI continues to assert its commitment to openness. The startup recently announced efforts to collaborate with organizations to mitigate potential misuse of its technology in influencing elections. Initiatives include enhancing transparency in AI-generated images and developing methods to identify manipulated content even after modifications.
OpenAI’s measures, including the Collective Alignment team, signal its intent to involve the public in shaping the ethical framework for AI development, with the aim of addressing concerns and fostering trust in the evolving field of artificial intelligence.