August 18, 2023

Canada: Pioneering Generative Artificial Intelligence - Code of Practice for a New Era


Generative Artificial Intelligence, known as generative AI, is capturing global attention through cutting-edge systems like ChatGPT, DALL-E 2, and Midjourney. These systems are trained on massive amounts of text, images, or other data, and their distinctive feature is the ability to generate novel content across varied contexts. A single system can perform multiple tasks, from translating text to generating code.

However, despite the numerous advantages, generative AI systems are powerful tools that can be used maliciously or inappropriately. Their generative capability, combined with widespread deployment, presents distinct and potentially extensive risk profiles. These characteristics have led to an urgent call for action on generative AI, even among industry-leading AI experts. In recent months, the international community has taken significant steps to make these systems safer and more reliable.

Code of Practice for Generative AI - Key Elements

On August 16, 2023, the Government of Canada announced plans to create a code of practice for generative artificial intelligence. In line with this objective, it published the potential elements of the code and is seeking public comment on them.


The Code of Practice aims, among other things, to help developers, users, and operators of generative AI systems avoid harmful impacts and build trust in their systems. It will be based on six key elements:

  1. Safety
  2. Fairness and Equity
  3. Transparency
  4. Human Oversight and Monitoring
  5. Validity and Robustness
  6. Accountability

The Future of Generative AI

As a leading nation in AI innovation, Canada has taken significant steps to ensure this technology evolves safely. The Artificial Intelligence and Data Act (AIDA), introduced in June 2022 as part of Bill C-27, provides the legal foundation to regulate AI systems, including generative ones.

Safety will be evaluated at a systemic level, considering potential impacts and malicious uses. Measures will be implemented to assess and mitigate the risk of distorted outputs, and a reliable method will be provided to detect AI-generated content.

Transparency and Human Oversight

Generative AI systems can be challenging to explain, making transparency crucial. Creators should provide a reliable method to detect generated content and explain the development process.

Generative AI requires careful human supervision to ensure proper functioning. Mechanisms will be introduced to identify and report negative impacts and to update models based on results.

Contributing to a Future of Reliability

Creators, users, and operators of generative AI systems will also need to implement multiple layers of defense to ensure safety. Policies, procedures, and training will be developed to define roles and responsibilities clearly.

Canada positions itself as a leader in the evolution of AI, promoting trust and security through the Code of Practice. Community involvement is vital to ensure these are the right elements to build a reliable future in the era of generative AI.

#GenerativeAI #CanadaInnovation #SecureTech

Contact: If you have questions or need further information on this consultation, contact us via email at domesticteamaihub-...ria@ised-isde.gc.ca.

Technical Glossary

  • Generative Artificial Intelligence (generative AI): A branch of artificial intelligence that focuses on creating models and systems capable of generating original and creative content, such as text, images, or sounds.
  • Artificial Intelligence and Data Act (AIDA): Canadian law that provides the legal basis for regulating artificial intelligence systems, including generative ones.
  • Code of Practice: A set of guidelines and standards followed by creators, users, and operators of generative AI systems to ensure proper functioning, safety, and transparency of their systems.
  • Fairness and Equity: The need to ensure that generative AI systems do not perpetuate harmful biases or stereotypes by using representative data and measures to mitigate biases.
  • Transparency: The requirement for generative AI systems to provide clear and accessible explanations of the development process, data training, and nature of generated content.
  • Human Oversight and Monitoring: The importance of having adequate human supervision to prevent negative impacts and identify issues and risks in generative AI systems in a timely manner.
  • Validity and Robustness: The goal of ensuring generative AI systems work as intended and are resilient across a wide range of contexts, including cybersecurity measures.
  • Accountability: The need for creators, users, and operators of generative AI systems to implement procedures and training to define roles and responsibilities clearly and ensure system security.
  • Artificial Intelligence (AI): A field of computer science that aims to create systems and machines capable of simulating human intelligence, performing complex tasks such as pattern recognition, language understanding, and problem-solving.
  • Dataset: A collection of data used to train artificial intelligence models, which can include text, images, or other types of information.
  • Bias: Systematic deviations or distortions in data that can influence the results of artificial intelligence models and lead to non-representative or unfair outputs.
  • Cybersecurity: The field focused on protecting computer systems, including those based on artificial intelligence, from cyberattacks and cyber threats.
  • Data Training (Training): The process in which an artificial intelligence model is exposed to a large number of examples to learn and improve its performance in specific tasks.
  • Human Supervision: The act of having human beings monitor and control the output and functioning of artificial intelligence systems, intervening when necessary.
  • Malicious: Term used to describe intentionally harmful or damaging behaviors or uses of technologies, such as using generative AI for malicious purposes.

Source by Redazione
