POSTED: 28 Oct, 2024
Artificial Intelligence (AI) is appearing in many aspects of our lives and work, and advancements are rapid and continuous. For most of us, it has been hard to keep up. Regulations designed to protect our way of life and conditions of work have also struggled to keep pace: they must reduce the harms arising from the use of AI while ensuring Australia can capitalise on the possibilities it offers.
Recognising that Australia’s current regulatory environment has not kept pace with AI capability, and following extensive consultations, the Australian Government recently released proposed guardrails for the safe and responsible development and deployment of AI. These guardrails, which define ‘high-risk AI’, are put forward in the proposals paper titled Introducing mandatory guardrails for AI in high-risk settings.
The guardrails complement the previously released Voluntary AI Safety Standard and provide guidance to developers, organisations and individuals on how to build and use AI responsibly and safely. Unfortunately, like many technologies, even AI created with the best of intentions can be used in ways that are deliberately or inadvertently harmful, with negative consequences for individuals or society. Case studies and academic research have already demonstrated that AI can not only replicate existing biases but embed them in automated decisions, resulting in individuals being excluded or otherwise discriminated against on the basis of race or gender. This has significant implications, especially when AI is used to automate decisions that affect people’s lives or livelihoods.
One situation that has been explored in academic studies is the use of AI to automate recruitment shortlisting or hiring decisions. Research has shown that, without human oversight, systems trained on data containing pre-existing biases may exclude under-represented groups from the AI-compiled shortlist for a job. This has obvious implications for individuals’ or particular groups’ access to employment and income, and for diversity and its associated benefits of innovation, creativity and idea generation within organisations. Organisations may also experience more direct effects from the malicious use of AI to expose enterprise vulnerabilities, or as they are subjected to increasingly sophisticated scams, fraud and cyber-security attacks.
Taking a risk-based approach to regulation, similar to that adopted by several US states and by the European Union in the EU AI Act 2024, the proposed Australian guardrails focus on the development and deployment of AI in high-risk settings. While the guardrails are still in development, the proposals paper provides a useful summary of high-risk settings identified in other jurisdictions. These include (among others):
- biometrics used to assess behaviour, mental state or emotions;
- AI systems used to determine access to education or employment (as in some automated recruitment systems);
- AI systems used to determine access to public assistance or benefits; and
- AI systems used as safety components in critical infrastructure.
Research currently being undertaken by Australian Cobotics Centre researchers suggests that some organisations in Australia are using AI for biometric identification, for recruitment, or in other ways that may be considered ‘high-risk’ under the use cases applied in other jurisdictions. It is therefore critical for Australian organisations to monitor the Australian Government’s Consultation Hub and its ongoing work on Artificial Intelligence to keep abreast of proposed regulatory changes, and to consider how any current or planned use of AI within their organisation aligns with the principles for safe and responsible use of AI in Australia.