OpenAI has introduced a safety initiative, the Preparedness Framework, that gives its board the power to override the CEO's decisions if it judges the risks of AI development to be too high. The framework outlines processes for assessing and mitigating catastrophic risks associated with powerful AI models. OpenAI has established Safety Systems, Superalignment, and Preparedness teams to monitor and evaluate different categories of AI risk, aiming to systematize safety protocols and close gaps in the study of frontier AI risks. The move is intended to ensure responsible AI development and to flag risks before they materialize. (Source: Cryptopolitan)