Australia is stepping up to the challenge of regulating Artificial Intelligence (AI). The country announced plans to introduce targeted AI rules focused on human intervention and transparency. This comes amidst a global rise in AI adoption and growing concerns about its potential pitfalls.
“Australians want stronger protections on AI, we’ve heard that, we’ve listened. Australians know AI can do great things, but people want to know there are protections in place if things go off the rails. We don’t think that there is a right to self-regulation anymore. I think we’ve passed that threshold”, said Industry and Science Minister Ed Husic in a statement on September 5, 2024.
New AI Rules Proposed
Currently, Australia lacks specific laws governing AI. In 2019, the country introduced eight voluntary principles for its responsible use. However, a recent report by the government highlighted the shortcomings of this approach, particularly for high-risk scenarios.
The proposed new regulations aim to address these shortcomings. While details are still forthcoming, Minister Husic revealed the focus will be on:
Human oversight: Ensuring there’s always a human “in the loop” to make critical decisions and prevent AI biases from perpetuating discrimination or unfairness.
“Meaningful human oversight will let you intervene if you need to and reduce the potential for unintended consequences and harms,” stated a report by the government accompanying the announcement. It further stated that businesses need to be open about AI’s involvement in content creation.
Transparency: Companies will need to be transparent about AI’s role in their operations, particularly when it comes to generating content. This includes informing users when they’re interacting with an AI system.
New AI work plan. Source: Australia’s Department of Industry, Science and Resources

The proposed regulations are voluntary for now, but the government has hinted at the possibility of making them mandatory for high-risk settings in the future. This follows a similar approach taken by the European Union, which recently passed landmark AI laws imposing strict transparency obligations on high-risk applications.
A Global Trend towards AI Regulation
Australia’s proposed regulations are part of a growing global trend towards AI governance. Several countries, including the US, UK, and EU, are actively exploring ways to regulate AI to ensure its responsible development and deployment.
One recent development in this area is a global AI treaty. The US, UK, and EU are reportedly among the parties behind the first international agreement focused on safeguarding human rights and democracy in the age of AI.
Reuters reported that the international AI treaty will be open for signing on Thursday by the countries that negotiated it.
The AI Convention, which has been in the works for years and was adopted in May after discussions between 57 countries, addresses the risks AI may pose, while promoting responsible innovation.
“This Convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law,” Britain’s justice minister, Shabana Mahmood, said in a statement, as quoted by Reuters.
The signatories can choose to adopt or maintain legislative, administrative or other measures to give effect to the provisions, the Reuters report added.
The Road Ahead
The Australian government is currently holding a one-month consultation period to gather public feedback and determine whether the proposed rules should become mandatory for AI systems used in high-risk settings. This feedback will be used to refine the final policy framework.
With the growing popularity of generative AI systems like Google’s Gemini and Microsoft-backed OpenAI’s ChatGPT, regulators worldwide have expressed worries about fake news and misinformation produced by AI technologies.