Article I: Foundational Principles & Prohibitions
Section 1.1: Unacceptable Risks (Prohibitions)
The development, procurement, or deployment of AI systems designed for, or having the primary effect of, the following purposes is strictly and unequivocally prohibited:
(a) Cognitive Behavioural Manipulation: Systems that deploy subliminal, manipulative, or deceptive techniques to materially distort a person’s or a specific vulnerable group's behavior in a manner that causes or is likely to cause harm.
(b) Social Scoring: Systems that classify, evaluate, or score individuals or groups based on their social behavior, socio-economic status, or personal characteristics, where such scoring leads to detrimental or unfavorable treatment.
(c) Indiscriminate Biometric Surveillance: Systems that engage in the untargeted scraping of facial images or other biometric data from the internet or public-access (CCTV) footage to create or expand biometric identification databases.
(d) Emotion Recognition in Sensitive Contexts: Systems used to infer the emotions or mental states of individuals in the contexts of employment and educational institutions.
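As an illustration of how Section 1.1 operates as a categorical deny-list rather than a weighted risk score, the prohibited categories could be encoded as an explicit gate in a deployment-review workflow. The sketch below is purely illustrative: the `ProhibitedUse` names and the `check_deployment` function are hypothetical conveniences, not part of the Constitution.

```python
from enum import Enum

class ProhibitedUse(Enum):
    """Unacceptable-risk categories from Section 1.1 (a)-(d) (labels are illustrative)."""
    COGNITIVE_MANIPULATION = "cognitive_behavioural_manipulation"
    SOCIAL_SCORING = "social_scoring"
    INDISCRIMINATE_BIOMETRIC_SURVEILLANCE = "indiscriminate_biometric_surveillance"
    EMOTION_RECOGNITION_SENSITIVE = "emotion_recognition_employment_education"

def check_deployment(declared_purposes: set[str]) -> list[str]:
    """Return any Section 1.1 violations found among a system's declared purposes.

    An empty result only means no prohibited purpose was *declared*; under
    Section 1.1 the system's primary effects must also be reviewed.
    """
    prohibited = {p.value for p in ProhibitedUse}
    return sorted(prohibited & declared_purposes)

# A system declaring social scoring is rejected outright, not risk-weighted.
violations = check_deployment({"social_scoring", "customer_support"})
assert violations == ["social_scoring"]
```

Note the design choice this encodes: any non-empty result blocks deployment unconditionally, mirroring the annotation's point that these risks are eliminated, not weighed.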
Expert Annotation:
This is the Constitution's "hard firewall" and a core component of the 100-point framework (Pillar 1.2). It is non-negotiable: rather than offering vague ethical suggestions, it establishes "red lines" derived directly from the "Unacceptable Risk" category of the EU AI Act, giving all developers and partners immediate, unambiguous legal and ethical clarity. A 100-point framework does not "weigh" these risks; it eliminates them from the organization's activities.
Section 1.2: Core Principles
All AI systems not prohibited by Section 1.1 shall be designed, deployed, and governed in accordance with the following seven core principles:
(a) Human-Centricity & Dignity: AI systems shall serve humanity. They must respect, protect, and promote internationally recognized human rights, fundamental freedoms, and human dignity.
(b) Fairness & Non-Discrimination: AI systems shall be designed to treat all individuals and groups equitably and to actively mitigate and avoid "unfair bias". Systems shall not perpetuate or exacerbate discriminatory biases.
(c) Transparency & Explainability: The operation of AI systems shall be transparent. Technical "Explainability" (XAI) shall be provided to operators, and simple "Transparency" shall be provided to end-users, ensuring they know when they are interacting with an AI.
(d) Robustness, Safety, & Security: AI systems shall be safe, secure, and robust throughout their entire lifecycle. They must function appropriately for their intended use and be resilient against attacks or conditions that could cause harm.
(e) Privacy & Data Governance: AI systems shall comply with all privacy laws and be "built with privacy by design". Data governance must ensure training, validation, and testing datasets are "relevant, sufficiently representative and, to the best extent possible, free of errors and complete".
(f) Accountability & Human Oversight: Mechanisms shall be in place to ensure human oversight, responsibility, and accountability for AI systems and their outcomes. AI systems shall not be the final authority on decisions that "produce legal effects".
(g) Sustainability & Environmental Flourishing: AI systems shall be designed, trained, and operated to promote sustainable development. Their full environmental, societal, and economic impact must be assessed to ensure they contribute to "environment and ecosystem flourishing".
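Unlike Section 1.1's deny-list, Section 1.2 imposes seven conjunctive requirements: every permitted system must satisfy all seven principles, and none is optional or tradeable against another. A minimal sketch of that structure as a governance checklist is below; the `PrincipleAssessment` type and its field names are hypothetical shorthand for the principles (a)-(g), not terms defined by the Constitution.

```python
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class PrincipleAssessment:
    """Section 1.2 principles (a)-(g) as pass/fail gates (field names are illustrative)."""
    human_centricity: bool
    fairness: bool
    transparency: bool
    robustness: bool
    privacy: bool
    accountability: bool
    sustainability: bool

def unmet_principles(assessment: PrincipleAssessment) -> list[str]:
    """Return the principles a system has not yet satisfied; all seven must hold."""
    return [f.name for f in fields(assessment) if not getattr(assessment, f.name)]

review = PrincipleAssessment(
    human_centricity=True, fairness=True, transparency=False,
    robustness=True, privacy=True, accountability=True, sustainability=False,
)
assert unmet_principles(review) == ["transparency", "sustainability"]
```

The conjunctive design matters: a system scoring well on six principles but failing one (say, sustainability) is still non-compliant, which is what distinguishes these principles from a weighted scoring rubric.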
Expert Annotation:
This section codifies the global consensus identified in Pillar 1.1 and its corresponding table, synthesizing the core principles of the world's most influential intergovernmental (OECD, UNESCO) and corporate (Microsoft, Google, Meta) frameworks. Crucially, it includes "Sustainability & Environmental Flourishing," a principle central to the UNESCO framework and EU guidelines but often absent from corporate-only documents. This makes the framework more comprehensive and future-facing.