Federal Government’s Generative AI Consultation Criticized for Lack of Transparency and Rushed Process

Critics are voicing concerns over the Canadian federal government’s consultation process for a proposed code of conduct covering generative artificial intelligence (AI) systems similar to ChatGPT. They argue that the process lacks transparency, democratic engagement, and public education, raising questions about the government’s commitment to regulating AI technology effectively. The consultation, which aims to establish guidelines for Canadian companies developing generative AI systems, has been faulted for its secretive nature and rushed timeline.

Unconventional and secretive consultation process

Teresa Scassa, a prominent law professor specializing in AI and privacy at the University of Ottawa, has expressed reservations about the consultation process, characterizing it as “very odd” for its lack of transparency and public engagement. The public became aware of the consultation only in mid-August, after the government accidentally posted a notice about it online. The process appears hurried and shrouded in secrecy, raising concerns about its effectiveness and its impact on the rapidly evolving AI landscape.

Lack of transparency undermining key objectives

One of the key criticisms of the consultation is its lack of transparency. Transparency, democratic engagement, and public education are essential components of an effective consultation, but closed-door roundtable discussions with a limited set of participants have raised doubts about whether those objectives are being met. Unlike the European Union’s approach, which involved public input and robust debate, Canada’s process has unfolded largely out of public view.

Code of conduct and its scope

The proposed voluntary code of practice for generative AI systems contains several important provisions. Companies would be required to prevent malicious and harmful uses of their AI systems, mitigate bias in their outputs, and ensure that AI-generated content is distinguishable from human-generated content. The code is intended to be in place before the passage of Bill C-27, privacy legislation that incorporates an AI component known as the Artificial Intelligence and Data Act (AIDA). However, critics argue that the code’s voluntary nature might leave it without the teeth needed to ensure ethical behavior and safety standards.

Limited engagement and consultation period

Critics, including Teresa Scassa and Conservative MP Michelle Rempel Garner, have expressed concerns about the consultation’s brevity and exclusivity. The consultation, which began on August 4 and is scheduled to conclude on September 14, is considered too short to allow for meaningful engagement and thorough discussion, and the closed-door roundtables have been invitation-only, shutting out a wider range of stakeholders. The short timeframe and restricted participation have led critics to question the government’s commitment to soliciting diverse perspectives.

Contrast with European Union’s approach

The European Union’s consultations on its AI Act are highlighted as a contrasting example of transparency and engagement. The EU’s consultations featured extensive public participation, debate, and discussion, ensuring a comprehensive understanding of the issues at hand. Canada’s process, by contrast, has been conducted largely behind closed doors with little public input.

Government’s response

Audrey Champoux, spokesperson for Innovation Minister François-Philippe Champagne, defended the government’s approach by stating that a brief consultation involving AI experts from academia, industry, and civil society is ongoing. However, critics argue that the closed-door discussions and lack of public engagement undermine the credibility and effectiveness of the consultation process.

Concerns over meaningful consultation

Conservative MP Michelle Rempel Garner expressed concerns that the consultation had largely taken place behind closed doors, without the input of Parliament or an official parliamentary committee. This lack of transparency and engagement raises doubts about the meaningfulness of the consultation process. Additionally, Rempel Garner noted the unusually short consultation period, leading to suspicions that the outcomes might already be predetermined.

Substance of the voluntary code

Teresa Scassa criticized not only the transparency of the consultation process but also the content of the proposed voluntary code itself. She pointed out that the code’s language, particularly its use of terms like “would” rather than stronger wording such as “should,” reflects a cautious and tentative approach. That lack of assertiveness could weaken the code’s impact and efficacy in ensuring ethical AI development.

Regulation vs. voluntary codes

Ignacio Cofone, Canada Research Chair in Artificial Intelligence Law and Data Governance at McGill University, emphasized the superiority of regulations over voluntary codes of conduct. He argued that regulations, with clear guidelines, legal obligations, and consequences, provide a stronger framework for curbing the potential harms associated with AI technology. The critics’ concerns reflect a broader sentiment that voluntary codes might not have the enforcement mechanisms needed to ensure responsible AI development.

Missed opportunities for constructive engagement

Michelle Rempel Garner questioned why the government did not take a more proactive and constructive approach to engaging the country on such a significant issue. She criticized the government for seemingly prioritizing bureaucratic process over genuine public engagement and meaningful dialogue. Given the transformative impact of AI technology, critics argue that a more thorough and transparent consultation is essential for effective regulation and policymaking.

The Canadian government’s consultation process for a code of conduct for generative AI systems has faced significant criticism for its lack of transparency, rushed timeline, and limited engagement. Critics argue that these factors undermine the consultation’s effectiveness and its ability to address the complex challenges posed by rapidly advancing AI technology.

Source: https://www.cryptopolitan.com/federal-generative-ai-criticized/