Amnesty International head says AI innovation vs. regulation is ‘false dichotomy’
The secretary-general of Amnesty International, Agnès Callamard, released a statement on Nov. 27 in response to three European Union member states pushing back on regulating artificial intelligence (AI) models.
France, Germany and Italy reached an agreement that included not adopting stringent regulations for AI foundation models, which are a core component of the EU's forthcoming AI Act.
The agreement came after the EU received multiple petitions from tech industry players asking regulators not to over-regulate the nascent industry.
However, Callamard said the region has an opportunity to show “international leadership” with robust regulation of AI, and member states “must not undermine the AI Act by bowing to the tech industry’s claims that adoption of the AI Act will lead to heavy-handed regulation that would curb innovation.”
“Let us not forget that ‘innovation versus regulation’ is a false dichotomy that has for years been peddled by tech companies to evade meaningful accountability and binding regulation.”
She said this rhetoric from the tech industry highlights the "concentration of power" in a small group of tech companies that want to be in charge of the "AI rulebook."
Related: US surveillance and facial recognition firm Clearview AI wins GDPR appeal in UK court
Amnesty International has been a member of a coalition of civil society organizations led by the European Digital Rights Network advocating for EU AI laws with human rights protections at the forefront.
Callamard said human rights abuse by AI is “well documented” and “states are using unregulated AI systems to assess welfare claims, monitor public spaces, or determine someone’s likelihood of committing a crime.”
“It is imperative that France, Germany and Italy stop delaying the negotiations process and that EU lawmakers focus on making sure crucial human rights protections are coded in law before the end of the current EU mandate in 2024.”
Recently, France, Germany and Italy were also among 15 countries that, together with major tech companies including OpenAI and Anthropic, developed a new set of guidelines suggesting cybersecurity practices for AI developers when designing, developing, launching and monitoring AI models.
Magazine: AI Eye: Get better results being nice to ChatGPT, AI fake child porn debate, Amazon’s AI reviews