The European Union has taken a significant step in regulating artificial intelligence (AI) by approving the final text of the EU's AI Act. All 27 member states endorsed the political agreement reached in December 2023, according to Commissioner for Internal Market Thierry Breton. The AI Act is a risk-based regulatory framework covering various aspects of AI applications, including governmental use of AI in biometric surveillance and transparency requirements before market entry. The agreement, which addresses concerns about deepfakes and other AI risks, is set to proceed toward legislation with a vote by an EU lawmaker committee on February 13, followed by a European Parliament vote in March or April. By emphasizing the liability of developers of high-risk AI applications, the EU aims to establish a responsible and accountable environment for AI innovation: riskier AI applications carry greater liabilities, which aligns with the goal of fostering trust in AI technologies while mitigating potential harms. The establishment of an AI Office to monitor compliance, together with support measures for local AI developers, reflects a holistic approach to overseeing the responsible development and use of AI.
- SFC issues guidance for tokenisation of SFC-authorised investment products and intermediaries engaging in Hong Kong tokenised securities-related activities
- US SEC Brings Action Against Kraken for Securities Law Breaches
- Rumoured That the US DOJ May Fine Binance US$4 Billion in Settlement of Actions
- Panel discussion on Regulatory Clarity in Asia’s Crypto Hubs: Hong Kong and Singapore
- Panel discussion on Regulatory Landscape for Web3 & Digital Assets – a Global & GCC Perspective