"The Good, the bad and the regulated"

Activity: Participation in or organisation of an event (participation in a conference or congress)

Description

Artificial intelligence is a multifaceted technology that has pervaded virtually every aspect of our lives. Such technological change has the potential to yield new solutions for society and business, yet it creates not only new opportunities but also new risks. AI technologies are being developed and deployed in multiple contexts, at all societal levels. They are therefore governed through multiple sets of legal, ethical, social and other norms, in a global normative order characterised by a diversity of regulatory structures.

New risks might require swift regulatory responses, systemic changes and perhaps even a complete overhaul of this global, regional and national regulatory infrastructure. But is our legal infrastructure capable of dealing with the risks of AI development?

Just recently, the United Nations Secretary-General’s High-level Panel on Digital Cooperation emphasised the need for further deepening digital cooperation among actors at different levels: a cooperation rooted in shared human values, multilateralism and multi-stakeholder participation. Importantly, the panel suggested a controlled “systems” approach to cooperation and regulation based on adaptivity, agility, inclusiveness, fitness for purpose and anticipatory small-scale testing – all towards accelerated attainment of the Sustainable Development Goals.

Undoubtedly, the achievement of the Sustainable Development Goals relies upon concerted action regarding the production, utilisation and sharing of various types of digital technologies and content in cyberspace – collectively referred to as ‘digital public goods’. However, the continuing fragmentation of the legal and regulatory landscape at global, regional and national levels has contributed to an unprecedented diffusion of control, leaving that landscape unfit to deal with the emergence of new risks and new accountability challenges.

For example, Industry 4.0 and its highly automated ‘factories of the future’, the Industrial Internet of Things, collaborative robots, cyber ranges and industrial cyber-physical systems promise optimised, more secure and more efficient manufacturing processes. Unlike traditional manufacturing, however, smart manufacturing is fuelled by fundamentally different ‘raw’ materials, namely operational data, machine learning models, digital twins and other digital assets. Smart manufacturing is manifest not only in cyberspace but also in the physical world. It depends on underlying operational and information technology infrastructure, such as cloud infrastructure that may be geographically spread across multiple States. The result is the emergence of new ‘digital supply chains’ overlaying the traditional manufacturing value chain. These assets and infrastructure are increasingly interconnected, automated and geographically distributed, which exposes digital supply chain actors to a greater risk of non-compliance with internationally recognised human rights. For example, datasets used to train machine learning models may originate from non-democratic regimes, and their original collection may be the result of flagrant violations of the right to privacy. The same holds true of foundational AI technologies (e.g., facial recognition and emotion detection) deemed essential to future manufacturing processes.

Another sector where the potential of AI-supported technology is tremendous is healthcare. AI could, for instance, be used to enhance patients’ capabilities, to augment healthcare professionals’ performance beyond what is considered normal, to deepen our understanding of medical knowledge, and to facilitate the sharing and use of medical data for advanced patient care irrespective of national boundaries, in a cost-effective and safe manner. At the same time, the risks it brings should not be underestimated. Legal issues of liability, privacy and data protection rights, inequality, algorithmic non-transparency, safety and security are yet to be addressed, as are ethical concerns regarding personal autonomy, identity and justice.

Against this background, the conference ponders a number of difficult questions. How to protect international human rights and the commercial interests of businesses in collaborative settings where AI tools are increasingly installed at critical nodes in both public and private decision-making processes? Which principles should underpin the production and use of these (new) ‘raw’ materials? How to operationalise, in context, (allegedly) universal concepts such as fairness, equity and justice? How to protect the fundamental objectives of liability in an environment where no single actor decides anything alone? Who bears the “duty of vigilance” in an environment where control is diffused far beyond the physical boundaries of a single object, a single actor or the territory of a single State? Have companies, rather than nation-states, become the new ‘trustees of humanity’? If so, how should they be governed in a world that increasingly escapes the logic of Westphalian sovereignty? What is the role of international, regional and national law against regulation embedded in chips and enforced by the inexorable precision of Boolean algebra? What is the potential of new modes of regulation – public and private certification, standardisation, voluntary schemes, regulatory sandboxing and mere “soft” law – to deal with the challenges of AI and AI-backed collaboration? And will they sound the death knell for law or, rather, revamp it entirely?

AI is not good or bad. Nor is it binary, although binary is what AI ‘understands’. Its developers, designers and deployers constantly move along a sliding scale of risks, with a dangerously thin line between infinitesimally small-scale and immeasurably large-scale risks. In a fragmented, multilevel legal order, the proper understanding and assessment of such risks is crucial and should become the common denominator of any effort to build a multilateral and multi-stakeholder ‘systems’ approach to AI.
Period: 18 Feb 2020
Event type: Conference
Location: Leuven, Belgium
Degree of recognition: International