(a) Healthcare
To date, there is no specific regulation of AI in the healthcare sector. However, in 2021, the Ministry of Health published the Digital Health Strategy, which highlights the need to:
- develop data platforms and technological infrastructure that support AI;
- create strategic missions in health;
- promote the definition of an ethical and regulatory framework that strengthens the protection of individual and collective rights; and
- ensure inclusion and social wellbeing.
One example of the implementation of the objectives of this strategy is the creation of a healthcare data pool through the Digital Health Commission of the Interterritorial Council of the National Health System in accordance with the Recovery, Transformation and Resilience Plan.
(b) Security and defence
There is no specific regulation of AI in the security and defence sector. However, the Secretary of State for Defence, through Resolution 1197/2023, has approved a strategy for the development, implementation and use of AI in the Ministry of Defence, with the aim of increasing the efficiency of the ministry’s missions and tasks.
(c) Autonomous vehicles
The Law on Traffic, Circulation of Motor Vehicles and Road Safety provides that ‘drivers’ are persons who are in command of a vehicle. In the case of vehicles operated by a learner driver, the person behind the additional controls is considered to be the driver.
Article 11bis of the law provides that the owner of an automated driving system must communicate to the Vehicle Registry of the Central Traffic Department the capabilities or functionalities and operational design of the automated driving system, both:
- at the time of registration; and
- subsequently whenever there is any update of the system throughout the useful life of the vehicle.
In Instruction VEH 2022/07, the Directorate General of Traffic:
- defines an ‘automated vehicle’ as a “motor vehicle designed and built to move autonomously for certain periods of time without continuous supervision by the driver but for which the driver’s intervention is still expected or needed”;
- sets out the procedure and requirements for the authorisation of tests or research trials carried out with automated vehicles on roads open to traffic in general; and
- sets out the requirements to apply for such authorisation and the procedure for the designation of an authorised technological recognition centre for the purposes of the instruction.
(d) Manufacturing
There is no specific regulation of AI in the manufacturing sector. At the national level, some projects are underway which are promoted by the General State Administration. One example is the Gaia-X National Hub for the development of open and secure data infrastructure, which has seen the establishment of:
- several working groups focused on specific sectors, such as:
  - health;
  - industry 4.0;
  - engineering and construction;
  - enabling technologies; and
  - finance and public administration; and
- four cross-cutting working groups focused on:
  - legal;
  - technical;
  - projects; and
  - ethics.
(e) Agriculture
There is no specific regulation of AI in the agricultural sector. At the national level, the Gaia-X National Hub for the development of open and secure data infrastructure has a working group dedicated to the agri-food sector. The digitalisation strategy for the agri-food and rural sector also includes important measures for this sector. It covers matters such as:
- open data;
- training and advice on digital skills;
- the generation of information; and
- funding for digital entrepreneurship.
(f) Professional services
There is no specific regulation of AI in the professional services sector. This notwithstanding, the measures envisaged in the National AI Strategy include:
- developing digital capabilities;
- enhancing national talent; and
- attracting global talent.
These are considered essential to enhance technical AI skills among the active population in order to:
- facilitate access to quality new jobs; and
- address challenges in the future job market.
(g) Public sector
Within the framework of the National AI Strategy, the Charter of Digital Rights and European initiatives regarding AI, Article 23 of Law 15/2022 on equal treatment and non-discrimination stipulates that public administrations must:
- encourage the implementation of mechanisms to ensure that algorithms involved in decision-making processes seek to minimise bias and enhance transparency and accountability whenever technically feasible. These mechanisms should include the design and training data and address the potential discriminatory impact of the algorithms. To this end, impact evaluations should be conducted to determine potential discriminatory bias;
- prioritise transparency in the design, implementation and interpretability of decisions made by algorithms involved in decision-making processes;
- promote the use of ethical and reliable AI that respects fundamental rights; and
- promote quality certification for algorithms.
At the regional level, for example, Decree-Law 2/2023 on urgent measures to boost AI in Extremadura establishes an essential framework for measures aimed at supporting, promoting and developing AI systems in the autonomous community of Extremadura.