AI Perspective June 2024
#News Center ·2024-06-24 06:44:03
1. EU AI Act approved by EU Council
2. Seoul AI Summit Outcomes
3. UK Automated Vehicles Act becomes law
4. Council of Europe adopts first international treaty on AI
5. California Senate passes California AI Transparency Act and Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
6. Japan's ruling party releases second AI white paper
EU AI Act approved by EU Council
On May 21, the Council of the EU approved the flagship EU AI Act.
Once signed by the Presidents of the European Parliament and of the Council, the EU AI Act will be published in the Official Journal of the EU in the coming days and will enter into force twenty days after publication.
Note that some provisions of the EU AI Act apply earlier than others. Six months after entry into force, the prohibitions on certain AI practices take effect; from the same date, providers and deployers of AI systems must ensure that their staff and other persons involved in the operation and use of AI systems have sufficient AI literacy.
Contact us if you would like to discuss the Act; we are actively involved in consultations on it.
Outcomes of the Seoul AI Summit on May 21-22, 2024
On May 21-22, South Korea and the United Kingdom co-hosted the Seoul AI Summit as a follow-up to the first AI Safety Summit, held at Bletchley Park in November 2023. At the summit, leading countries in the field of AI discussed a range of topics, including regulation and innovation in "frontier" AI: high-performance general-purpose AI models or systems that can perform a wide variety of tasks and whose capabilities match or exceed those of state-of-the-art models. Such systems are considered to offer both great opportunities and systemic risks.
Outcomes of the summit include:
Seoul AI Declaration: Promote the safe development of AI to address global challenges and protect human rights, and countries commit to sharing AI safety information.
Seoul Statement of Intent: World leaders commit to advance AI safety science through international cooperation.
Seoul Ministerial Statement: Participating countries commit to promote the safety, innovation and inclusiveness of AI, and agree on common risk thresholds and international cooperation in AI safety science.
Frontier AI Safety Commitments: Signed by 16 AI technology companies, these commit signatories to responsible AI development, including safety frameworks, red-team testing against threats, cybersecurity investment, and public reporting of model capabilities. Signatories will not deploy models whose risks cannot be adequately mitigated.
UK Automated Vehicles Act becomes law
On May 20, the UK Automated Vehicles Act (AV Act) became law, paving the way for self-driving vehicles on UK roads by 2026. The Act provides a clear legal framework for drivers and developers, removing a key barrier to the rollout of automated vehicles in the UK. By 2035, the Act is expected to attract further investment, grow the automated vehicle industry, and create 38,000 jobs.
Notably, the Act establishes a new authorization and licensing regime for automated vehicles. A vehicle can be authorized only if it passes the "self-driving test"; for example, vehicles that merely provide "hands off, eyes on" driver assistance will not be authorized. Authorized manufacturers must maintain continuous compliance and report major updates or modifications to the relevant authorities.
Liability for accidents involving self-driving vehicles has been a focus of public attention. The Act clarifies that while a vehicle is driving itself, the person in the driving seat is not responsible for the way it drives. As first established in the Automated and Electric Vehicles Act 2018, that responsibility falls to insurers, software developers and vehicle manufacturers.
Council of Europe adopts first international treaty on artificial intelligence
On May 17, the Council of Europe adopted the first international treaty on artificial intelligence, the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (the "Convention"). The Convention establishes a legal framework covering the entire life cycle of artificial intelligence systems and aims to ensure that human rights, the rule of law and democracy are respected in the use of such systems.
The Convention:
Adopts a risk-based approach to the life cycle management of artificial intelligence systems and applies to both the public and private sectors, focusing on responsible innovation and addressing potential negative impacts;
Develops context-specific transparency and oversight requirements, including measures to identify AI-generated content and ensure accountability for adverse impacts;
Requires parties to take measures to ensure that democratic institutions and processes are not undermined by the use of artificial intelligence systems, including the principle of separation of powers, respect for judicial independence and access to justice.
The Convention covers the use of artificial intelligence systems in both the public and private sectors. Parties to the convention may choose to be directly bound by the relevant convention provisions or take other measures to comply with the treaty provisions, while fully respecting their international obligations on human rights, democracy and the rule of law.
The Convention will be opened for signature in Vilnius, Lithuania, on September 5, 2024.
California Senate passes California AI Transparency Act and Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
On May 21, the California Senate passed two important pieces of AI-related legislation:
The California AI Transparency Act (SB 942) aims to protect consumers by enabling them to determine whether content was generated by AI. The bill requires providers of large generative AI systems to label AI-generated content, to embed imperceptible (but machine-detectable) disclosures in that content, and to offer AI detection tools that let users query whether content was created by a generative AI system.
The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) aims to regulate the development and use of advanced AI models. The bill requires developers to make certain safety determinations before training covered models, comply with various safety requirements, and report AI safety incidents. It also establishes a Frontier Model Division within the California Department of Technology to oversee these models.
Both bills are now before the California State Assembly. If approved, they will be sent to the Governor for signature.
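The Transparency Act's pairing of an "imperceptible (but machine-detectable) disclosure" with a matching detection tool can be illustrated with a deliberately simplified sketch: a zero-width Unicode marker that renders invisibly but that a program can find. Everything here is hypothetical (the function names, the marker choice); real provenance schemes, such as cryptographic watermarks or C2PA metadata, are far more robust than this toy.

```python
# Toy sketch of an "imperceptible but machine-detectable" disclosure.
# Hypothetical illustration only, not any provider's actual scheme.

MARKER = "\u200b\u200c\u200d"  # zero-width characters: invisible when rendered


def embed_disclosure(text: str) -> str:
    """Append the invisible AI-generation marker to the text."""
    return text + MARKER


def is_ai_labeled(text: str) -> bool:
    """Detection tool: report whether the invisible marker is present."""
    return MARKER in text


original = "A summary produced by a generative AI system."
labeled = embed_disclosure(original)
print(len(labeled) - len(original))  # 3: extra characters exist but display as nothing
print(is_ai_labeled(labeled))        # True
print(is_ai_labeled("An ordinary human-written note."))  # False
```

A real scheme would also need to survive copy-paste, reformatting, and deliberate stripping, which is why the bill leaves the technical mechanism to providers rather than prescribing one.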
Japan's ruling party releases "AI White Paper" to "become the most AI-friendly country in the world"
On May 21, Japan's Liberal Democratic Party released its second "AI White Paper", proposing Japan's strategy to "become the most AI-friendly country in the world".
The document was produced by the Liberal Democratic Party's Artificial Intelligence Project Team, which was established in January 2023 to study Japan's AI strategy and provide policy recommendations.
The recommendations made in the paper aim to enhance Japan's competitiveness through the use of AI while ensuring the safe application of these technologies.
The main recommendations outlined in the document include:
Public-private partnership: Encourage cooperation in collecting, maintaining and updating data, and in developing and using AI in areas of Japanese strength such as automobiles, robotics and materials development. This should also extend to key areas such as medicine, finance and agriculture.
Infrastructure construction: The government should provide financial and policy support to ensure the construction of data centers and other infrastructure required to support AI technology. The report emphasizes the need to develop computing infrastructure for processing and storing data domestically to ensure secure management of critical data and reduce processing time. The government should also prioritize energy-efficient infrastructure and consider future energy needs.
International coordination on AI safety: Establish a government-backed AI Safety Institute (AISI) in Japan as part of the international network of AISIs. The AISI should work on safety assessments and standards and produce educational materials on the appropriate use of AI. Japan should also coordinate with other countries to ensure AI safety, paying due attention to aligning international standards for AI technology audits and third-party certification.