On 1 June 2023, the Australian Federal Department of Industry, Science and Resources released a discussion paper on safe and responsible AI in Australia. The focus of the paper “is to identify potential gaps in the existing domestic governance landscape and any possible additional AI governance mechanisms to support the development and adoption of AI.” The paper poses 20 questions on which responses are sought by 26 July 2023.
The paper recognises that AI has applications and gives rise to risks across the economy. It adds that “feedback on this paper will inform consideration across government on appropriate responses. This will help support coordinated and coherent responses, recognising that these issues are cross-cutting and related to a broad range of interests.” The paper also recognises that there have been sector-specific responses to the rise of AI and automated decision-making (for example, in the context of privacy and human rights). Its focus appears to be on a whole-of-economy and whole-of-government approach, which is to be commended. There is a dimension of international cooperation that will need to be addressed too, but a coherent and comprehensive approach in Australia will be a good start.
The paper uses the term “governance” to include regulatory and voluntary mechanisms to address potential risks.
The paper is not limited strictly to the consideration of AI. Where relevant, it considers related applications that may not necessarily use AI, such as automated decision-making (ADM). In the paper, AI includes any products or services using AI techniques. These may range from simple rules-based algorithms guided by human-defined parameters to more advanced applications like neural networks.
Opportunities and risks
The paper notes that “the safe and responsible deployment…of AI presents significant opportunities for Australia to improve economic and social outcomes.” At the same time, this also carries with it the potential for significant risks. For example:
“Rich, large and quality data sets are a fundamental input to AI. AI systems depend on these training data sets to allow algorithms to be designed, tested and improved. However, access to and application of these data sets have the potential for individuals’ data to be used in ways that raise privacy concerns. Privacy protection laws and access to quality data must be carefully balanced to enable fair and accurate results and minimise unwanted bias from AI systems.”
The paper notes that there are competition concerns too:
“Ownership of large rich data sets by certain entities or corporations may pose barriers to potential competitors entering or expanding into the market. This can also lead to imbalances between individuals or smaller organisations and the larger or more economically powerful organisations developing and deploying sophisticated AI.”
The paper is a useful resource for several reasons, including that it:
- outlines existing Australian domestic regulatory initiatives (both general and sector-specific);
- gives examples of the possible application of the Australian Consumer Law to AI;
- outlines a range of possible approaches to the governance of AI, ranging from voluntary principles to mandatory laws;
- refers to existing Australian AI governance initiatives, both regulatory and voluntary; and
- outlines governance initiatives internationally, from selected countries.
The paper asks 20 consultation questions which are helpful to frame and channel responses:
1. Do you agree with the definitions in this discussion paper? If not, what definitions do you prefer and why?
Potential gaps in approaches
2. What potential risks from AI are not covered by Australia’s existing regulatory approaches? Do you have suggestions for possible regulatory action to mitigate these risks?
3. Are there any further non-regulatory initiatives the Australian Government could implement to support responsible AI practices in Australia? Please describe these and their benefits or impacts.
4. Do you have suggestions on coordination of AI governance across government? Please outline the goals that any coordination mechanisms could achieve and how they could influence the development and uptake of AI in Australia.
Responses suitable for Australia
5. Are there any governance measures being taken or considered by other countries (including any not discussed in this paper) that are relevant, adaptable and desirable for Australia?
6. Should different approaches apply to public and private sector use of AI technologies? If so, how should the approaches differ?
7. How can the Australian Government further support responsible AI practices in its own agencies?
8. In what circumstances are generic solutions to the risks of AI most valuable? And in what circumstances are technology-specific solutions better? Please provide some examples.
9. Given the importance of transparency across the AI lifecycle, please share your thoughts on:
a. where and when transparency will be most critical and valuable to mitigate potential AI risks and to improve public trust and confidence in AI?
b. mandating transparency requirements across the private and public sectors, including how these requirements could be implemented.
10. Do you have suggestions for:
a. Whether any high-risk AI applications or technologies should be banned completely?
b. Criteria or requirements to identify AI applications or technologies that should be banned, and in which contexts?
11. What initiatives or government action can increase public trust in AI deployment to encourage more people to use AI?
Implications and infrastructure
12. How would banning high-risk activities (like social scoring or facial recognition technology in certain circumstances) impact Australia’s tech sector and our trade and exports with other countries?
13. What changes (if any) to Australian conformity infrastructure might be required to support assurance processes to mitigate against potential AI risks?
14. Do you support a risk-based approach for addressing potential AI risks? If not, is there a better approach?
15. What do you see as the main benefits or limitations of a risk-based approach? How can any limitations be overcome?
16. Is a risk-based approach better suited to some sectors, AI applications or organisations than others based on organisation size, AI maturity and resources?
17. What elements should be in a risk-based approach for addressing potential AI risks? Do you support the elements presented in Attachment C?
18. How can an AI risk-based approach be incorporated into existing assessment frameworks (like privacy) or risk management processes to streamline and reduce potential duplication?
19. How might a risk-based approach apply to general purpose AI systems, such as large language models (LLMs) or multimodal foundation models (MFMs)?
20. Should a risk-based approach for responsible AI be a voluntary or self-regulation tool or be mandated through regulation? And should it apply to:
a. public or private organisations or both?
b. developers or deployers or both?
If you require further information regarding the matters covered in the paper, or legal input in drafting your response to the consultation questions, please contact the authors or any of the key contacts listed below.
This information and the contents of this publication, current as at the date of publication, are general in nature, are offered to assist Cornwalls’ clients, prospective clients and stakeholders, and are for reference purposes only. They do not constitute legal or financial advice. If you are concerned about any topic covered, we recommend that you seek your own specific legal and financial advice before taking any action.