Over the last five decades we have been witnessing the digitalization of society and, consequently, of the healthcare sector; we have been using terms such as eHealth, medical informatics, health informatics, telemedicine, telehealth and mHealth, depending on the available technologies and the accessibility of the baseline infrastructure.
These terms have been used to describe the application of information and communication technologies (ICTs) to areas of health, health care and wellbeing.
More recently, with the introduction of Artificial Intelligence, we have been witnessing the transition from eHealth to Digital Health, a concept flexible enough to accommodate a diversity of purposes, technologies and other specificities.
The World Health Organization stated that: “Moving from eHealth to Digital Health puts more emphasis on digital consumers, with a wider range of smart-devices and connected equipment being used, together with other innovative and evolving concepts as that of Internet of things (IoTs) and the more widespread use of artificial intelligence (AI), big data and analytics. Digital Health is changing the way health systems are run and health care is delivered”.
Digital Health also includes the following aspects: Telemedicine, the Electronic Healthcare File and Artificial Intelligence.
This article aims to give a succinct overview of the current rules that cover such aspects in Italy by looking at local regulations and recent developments at a European and international level.
On 3 March 2015, Italy adopted a 6-year plan for digital growth called the "2014-2020 digital growth strategy", which was followed by the "2019-2021 three-year plan". Both plans tackle digital transformation in the public sector, including healthcare.
In the transformation of healthcare into e-health, these plans play a decisive role in the introduction of telemedicine, the electronic healthcare file and artificial intelligence.
TELEMEDICINE

Telemedicine is not expressly regulated by Italian law. However, on 17 March 2014, the Italian Ministry of Health adopted guidelines (hereinafter the “Guidelines”) which contain several principles and rules applicable to Telemedicine.
Under the Guidelines, Telemedicine is defined as a way to provide healthcare services through the use of innovative technologies, in particular Information and Communication Technologies (ICT), in situations where the healthcare professional and the patient, or two or more healthcare professionals, are not in the same place.
Pursuant to the Guidelines, Telemedicine can serve the purposes of secondary prevention, diagnosis, treatment, rehabilitation and monitoring. Telemedicine services can be provided in the following ways: tele-examination, tele-consultation and tele-cooperation.
During a tele-examination, it is possible to prescribe medicinal products and health treatments. For this purpose, the doctor must be able to see and interact with the patient at a distance (through a telecommunications infrastructure). Such an examination can take place in real time or be deferred.
Tele-health: this concerns primary healthcare and “covers the systems and services that connect patients, especially the chronic ones, with the doctors to assist in the diagnosis, monitoring, management and accountability of the same.”
Tele-assistance: this covers remote assistance, i.e. “a social assistance system to take care of the elderly or fragile at home, through the management of alarms, the activation of emergency services and "support" calls by a service centre. Remote assistance has a predominantly social content [...] in order to guarantee the continuity of the assistance”.
According to Section 3 of the Guidelines, Telemedicine services should be organized as follows:
the Users – individuals using a telemedicine service, for this purpose sending health information and/or receiving the relevant results. A User can be:
the patient / caregiver (tele-examination; tele-consultation);
the doctor in the absence of the patient (tele-consultation);
the doctor or another health care professional (tele-examination, tele-cooperation).
the Provider – an entity providing healthcare services through a telecommunications network (or infrastructure). A Provider may be:
Entities belonging to the Italian National Healthcare System (NHS), authorized or accredited, either public or private;
Healthcare professionals belonging to the NHS (e.g. general practitioners, paediatricians, medical specialists).
The Provider receives health information from the User and returns the results of the service.
the Service Centre – responsible for the management and maintenance of the information system, including the transmission and storage of the relevant information.
Since Telemedicine is treated as a way to perform “traditional” healthcare services by overcoming “common physical barriers”, Providers should hold all authorizations normally required for the performance of health and socio-health activities in a traditional manner (see Italian Legislative Decree n. 502/92) and all additional authorisations which may be required in relation to the IT tools used to perform the relevant healthcare services.
In addition, if a Provider intends to offer Telemedicine services at the expense of the NHS, it should also obtain regional accreditation and (sometimes) conclude contractual arrangements with the Italian Regions where the telemedicine service will be provided.
Given the possible developments and the importance that Telemedicine might have in the future, in March 2019 a National Study Group on the “economic assessment of telemedicine service” was created. Its purpose is to create a model, compatible with the Italian NHS, to evaluate and plan services provided through Telemedicine and to identify new pricing systems.
ELECTRONIC HEALTHCARE FILE
The Electronic Healthcare File (EHF) is essentially “the electronic management of clinical practices”, intended to facilitate the sharing of documents through platforms. The EHF was first introduced by Article 12 of Legislative Decree n. 179/2012.
The main aims of the EHF are: a) to facilitate patient healthcare; b) to offer a service that facilitates the integration of the various professional skills in the healthcare sector; and c) to provide consistent clinical information about a patient.
Italian law regulates the relevant data sharing as follows: (i) patient identification should be ensured by associating the patient's social security number with a patient “unique code” (see Article 1, letter m) of Legislative Decree n. 178/2015), whilst (ii) the interoperability of electronic documents within the NHS should be ensured by the adoption of regional platforms able to communicate securely with the INI, thanks to the adoption of common methods and characteristics.
ARTIFICIAL INTELLIGENCE

Artificial Intelligence (AI) systems are “software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. [..] AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems)”.
Currently, the application of AI to the healthcare sector is not expressly regulated under Italian law. However, given the increasing importance of AI, the World Health Organization, the European Commission and national governments have set up ad hoc committees for the purpose of studying the matter and drafting initial guidelines.
On 4 February 2019, the US Food and Drug Administration ("FDA") published a discussion paper regarding the application of adaptive artificial intelligence ("AI") and machine learning (“ML”) in software classified as a medical device (“SaMD”). The FDA framework proposal suggests that when AI has the potential to adapt and optimize device performance in real time to continuously improve health care for patients (so-called “adaptive software”), it should be subject to a “predetermined change control plan” in premarket submissions. This plan should include the types of anticipated modifications of the SaMD and the associated methodology used to implement those changes in a controlled manner while managing the risks to patients (referred to as the “Algorithm Change Protocol”).
Also in relation to AI and medical devices, the European Parliament stated in its resolution of 12 February 2019 that “when AI is being used in implanted medical devices, the bearer should have the right to inspect and modify the source code used in the device […] the existing system for the approval of medical devices may not be adequate for AI technologies; calls on the Commission to closely monitor progress on these technologies and to propose changes to the regulatory framework if necessary in order to establish the framework for determining the respective liability of the user (doctor/professional), the manufacturer of the technological solution, and the healthcare facility offering the treatment”.
In line with this, on 8 April 2019, the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) published the “Ethics Guidelines for Trustworthy AI” (hereinafter the “Ethics Guidelines”).
The Ethics Guidelines aim to promote trustworthy AI and are divided into four macro-areas: I) Trustworthy AI; II) Foundations of Trustworthy AI; III) Realisation of Trustworthy AI; and IV) Assessment of Trustworthy AI.
Trustworthy AI is defined as AI which is compliant with the following principles:
lawful (i.e. compliant with all applicable laws and regulations);
ethical (i.e. adherent to ethical principles and values); and
robust (i.e. reliable from a technical and social perspective).
The Ethics Guidelines describe these principles as follows:
Lawfulness should not only be interpreted as “what cannot be done”, but also in the sense of “what should be done or may be done”.
AI should comply with: “all legal rights and obligations that apply to the processes and activities involved in developing, deploying and using AI systems [..]."
These legal sources should include, but are not limited to: “EU primary law (the Treaties of the European Union and its Charter of Fundamental Rights), EU secondary law (such as the General Data Protection Regulation, the Product Liability Directive, the Regulation on the Free Flow of Non-Personal Data, anti-discrimination Directives, consumer law and Safety and Health at Work Directives), the UN Human Rights treaties and the Council of Europe conventions (such as the European Convention on Human Rights), and numerous EU Member State laws."
Besides horizontally applicable rules, various domain-specific rules exist that apply to particular AI applications, such as for instance “the Medical Device Regulation in the healthcare sector”.
AI systems should also comply with the ethical codes of each sector where AI is implemented and adhere to ethical principles based on fundamental rights, namely: 1) respect for human autonomy; 2) prevention of harm; 3) fairness; and 4) explicability. According to the Ethics Guidelines, only a “human-centric approach” can ensure respect for the above-mentioned principles.
AI “…should perform in a safe, secure and reliable manner [..] both from a technical perspective (ensuring the system’s technical robustness as appropriate in a given context, such as the application domain or life cycle phase), and from a social perspective (in due consideration of the context and environment in which the system operates)”.
The Ethics Guidelines provide a list of concrete requirements that should be met in order to comply with the above-mentioned principles and to achieve trustworthy AI. These are: 1) human agency and oversight; 2) technical robustness and safety; 3) privacy and data governance; 4) transparency; 5) diversity, non-discrimination and fairness; 6) environmental and societal well-being; and 7) accountability.
The applications of AI in the health sector are manifold.
For the European Commission "Trustworthy AI technologies can be used – and are already being used – to render treatment smarter and more targeted, and to help preventing life-threatening diseases. Doctors and medical professionals can potentially perform a more accurate and detailed analysis of a patient’s complex health data, even before people get sick, and provide tailored preventive treatment. In the context of Europe’s ageing population, AI technologies and robotics can be valuable tools to assist caregivers, support elderly care, and monitor patients’ conditions on a real time basis, thus saving lives.
Trustworthy AI can also assist on a broader scale. For example, it can examine and identify general trends in the healthcare and treatment sector, leading to earlier detection of diseases, more efficient development of medicines, more targeted treatments and ultimately more lives saved”.
Digital Health is likely to introduce new business models, which are expected to increase the efficiency of healthcare services, and reduce the time for their performance.
These efficiencies might free up additional resources that can be used to improve the accessibility, quality and affordability of healthcare services, with the aim of guaranteeing a better life for people, in particular those affected by chronic diseases.
However, Digital Health, and in particular AI, should be regarded as the “longa manus” of human agents, with the purpose of helping rather than replacing them. Consistently with that view, new technologies should be human-centred and adequately overseen.
Because Digital Health is in continuous evolution, it is particularly important for all stakeholders to adequately monitor, know and understand the applicable rules as well as relevant business opportunities.
Mauro Turrini and Carmen Miriam Martorano – Studio Legale Bird & Bird
 “Global Strategy on Digital Health 2020-2024”, draft 26 March 2019, p. 3, (see: https://extranet.who.int/dataform/upload/surveys/183439/files/Draft%20Global%20Strategy%20on%20Digital%20Health.pdf)
Examples of healthcare sector digitalization include:
- In the UK, the digital organization “NHSX” (created in February 2019 by the NHS) is responsible for health care digitalization (see https://www.nhsx.nhs.uk/what-we-do). The NHSX recently published the “data ethics framework”, which guides the design of appropriate data use by government and the wider public sector (see https://www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework); and the “Topol Review”, which contains recommendations enabling NHS staff to make the most of innovative technologies such as genomics, digital medicine, artificial intelligence and robotics to improve services (see https://topol.hee.nhs.uk/). Additionally, the National Institute for Health and Care Excellence (commonly known as NICE) published a framework for evaluating the efficacy of digital health technologies (see https://www.nice.org.uk/Media/Default/About/what-we-do/our-programmes/evidence-standards-framework/digital-evidence-standards-framework.pdf);
- In Italy, the Agency for Digital Italy (“AgID”) is responsible for the digitalization of the healthcare sector (as outlined in this article) with the purpose of “guaranteeing the achievement of the objectives of the Italian Digital Agenda and contributing to the diffusion of the use of information and communication technologies, favouring the innovation and economic growth” (see https://www.agid.gov.it/index.php/it/agenzia/chi-siamo).
 See: http://www.salute.gov.it/imgs/C_17_pubblicazioni_2129_allegato.pdf.
Pursuant to Section 2.3.1 of the Guidelines.
Pursuant to Section 2.3.1, paragraph 3, of the Guidelines.
Pursuant to Section 2.3.1, paragraph 4, of the Guidelines.
Pursuant to Section 2.3.1, paragraph 5, of the Guidelines.
Pursuant to Section 2.3.2 of the Guidelines.
Pursuant to Section 2.3.3 of the Guidelines.
The Guidelines list pharmacies amongst the places where users can enjoy telemedicine services such as the monitoring of physiological parameters.
A Provider can also act as a Service Centre.
The “WHO guideline recommendations on digital interventions for health system strengthening” (hereinafter the “WHO guideline recommendations”), published in 2019, state, in “recommendation 4, client-to-provider telemedicine” and “recommendation 5, provider-to-provider telemedicine”: in the first case, that telemedicine should be used “[..] (i) under the condition that it complements, rather than replaces, face-to-face delivery of health services; and (ii) in settings where patient safety, privacy, traceability, accountability and security can be monitored. In this context, monitoring includes the establishment of standard operating procedures that describe protocols for ensuring patient consent, data protection and storage, and verifying health worker licenses and credentials”; in the second case, the WHO recommends provider-to-provider telemedicine (primarily used to link less skilled health workers with more specialist ones) “in settings where patient safety, privacy, traceability, accountability and security can be monitored. In this context, monitoring includes the establishment of standard operating procedures that describe protocols for ensuring patient consent, data protection and storage, and verifying health worker licenses and credentials” (see respectively pages 50 and 54 of the WHO guideline recommendations, https://apps.who.int/iris/bitstream/handle/10665/311941/9789241550505-eng.pdf?ua=1).
See page 25 et seq. of the Guidelines.
“Electronic Healthcare Record” commonly refers to the digital version of a traditional clinical record of hospitalization or a specialist outpatient clinical record, whilst “Electronic Healthcare File” (i.e. “fascicolo sanitario elettronico”) commonly refers to all information (e.g. medical records, preventive investigations, etc.) concerning the present and past health condition of a person.
 INI means “National Infrastructure for Interoperability” which has been established by Agid Circular n. 4/2017 and Decree of the Italian Ministry of Economics and Finance of 4 August 2017.
See page 36 of the Guidelines.
 “Machine Learning is an artificial intelligence technique that can be used to design and train software algorithms to learn from and act on data. Software developers can use machine learning to create an algorithm that is ‘locked’ so that its function does not change, or ‘adaptive’ so its behavior can change over time based on new data. Some real-world examples of artificial intelligence and machine learning technologies include: (i) An imaging system that uses algorithms to give diagnostic information for skin cancer in patients. (ii) A smart electrocardiogram (ECG) device that estimates the probability of a heart attack.”, see https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device.
 “SaMD Pre-Specifications (SPS): A SaMD manufacturer’s anticipated modifications to “performance” or “inputs,” or changes related to the “intended use” of AI/ML-based SaMD. These are the types of changes the manufacturer plans to achieve when the SaMD is in use. The SPS draws a “region of potential changes” around the initial specifications and labeling of the original device. This is "what" the manufacturer intends the algorithm to become as it learns.”, see “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine learning (AI/ML)-based Software as a Medical Device (SaMD), Discussion Paper and Request for Feedback” (herein the “Discussion Paper”), p. 10, https://www.fda.gov/media/122535/download.
 “Algorithm Change Protocol (ACP): Specific methods that a manufacturer has in place to achieve and appropriately control the risks of the anticipated types of modifications delineated in the SPS. The ACP is a step-by-step delineation of the data and procedures to be followed so that the modification achieves its goals and the device remains safe and effective after the modification.”, see the Discussion paper, pages 10-11.
See page 6 of the Ethics Guidelines.
 This means that a “Trustworthy AI” shall ensure the respect of “[..] the freedom and autonomy of human beings. Humans interacting with AI systems must be able to keep full and effective self-determination over themselves and be able to partake in the democratic process [..]” (see page 12 of the Ethics Guidelines).
 This means that: “[..]AI systems and the environments in which they operate must be safe and secure. They must be technically robust and it should be ensured that they are not open to malicious use [..]”, (see page 12 of the Ethics Guidelines).
To be intended in both a substantive and a procedural dimension; the former “[..] implies a commitment to: ensuring equal and just distribution of both benefits and costs, and ensuring that individuals and groups are free from unfair bias, discrimination and stigmatisation. If unfair biases can be avoided, AI systems could even increase societal fairness [..]”; the latter “[..] entails the ability to contest and seek effective redress against decisions made by AI systems and by the humans operating them. In order to do so, the entity accountable for the decision must be identifiable, and the decision-making processes should be explicable”, (see page 12 of the Ethics Guidelines).
This means that the “[..] processes need to be transparent, the capabilities and purpose of AI systems openly communicated, and decisions – to the extent possible – explainable to those directly and indirectly affected [..]”, (see page 13 of the Ethics Guidelines).
According to the Ethics Guidelines, the approach should be "ethics and rule of law by design", which means the adoption of "Methods to ensure values-by-design provide precise and explicit links between the abstract principles which the system is required to respect and the specific implementation decisions. The idea that compliance with norms can be implemented into the design of the AI system is key to this method. Companies are responsible for identifying the impact of their AI systems from the very start, as well as the norms their AI system ought to comply with to avert negative impacts [..] to earn trust AI needs to be secure in its processes, data and outcomes, and should be designed to be robust to adversarial data and attacks. It should implement a mechanism for fail-safe shutdown and enable resumed operation after a forced shut-down (such as an attack)", (see page 21 of the Ethics Guidelines).
This means that AI systems should support human autonomy and decision-making. To be more specific:
(i) Fundamental rights: where AI systems may negatively affect fundamental rights, “[..] a fundamental rights impact assessment should be undertaken”; (ii) Human agency: “Users should be able to make informed autonomous decisions regarding AI systems [..] They should be given the knowledge and tools to comprehend and interact with AI systems to a satisfactory degree and, where possible, be enabled to reasonably self-assess or challenge the system. AI systems should support individuals in making better, more informed choices in accordance with their goals. [..] The overall principle of user autonomy must be central to the system’s functionality. Key to this is the right not to be subject to a decision based solely on automated processing when this produces legal effects on users or similarly significantly affects them”; (iii) Human oversight: “AI systems should not undermine human autonomy or cause other adverse effects. Oversight may be achieved through governance mechanisms such as a human-in-the-loop (HITL), human-on-the-loop (HOTL), or human-in-command (HIC) approach. HITL refers to the capability for human intervention in every decision cycle of the system, which in many cases is neither possible nor desirable. HOTL refers to the capability for human intervention during the design cycle of the system and monitoring the system’s operation. HIC refers to the capability to oversee the overall activity of the AI system (including its broader economic, societal, legal and ethical impact) and the ability to decide when and how to use the system in any particular situation. [..] (in general) the less oversight a human can exercise over an AI system, the more extensive testing and stricter governance is required”, (see pages 15 and 16 of the Ethics Guidelines).
This means that “[..] AI systems (should) be developed with a preventive approach to risk and in a manner such that they reliably behave as intended while minimising unintentional and unexpected harm, and preventing unacceptable harm [..]”; it includes: Resilience to attack and security; Fallback plan and general safety; Accuracy; Reliability and Reproducibility, (see pages 16 and 17 of the Ethics Guidelines).
This means adequate data governance covering the quality and integrity of the data used; it includes: Privacy and data protection; Quality and integrity of data; Access to data, (see page 17 of the Ethics Guidelines).
It includes: Traceability, meaning documentation of “[…] the data sets and the processes that yield the AI system’s decision, including those of data gathering and data labelling as well as the algorithms used”; Explainability, meaning the ability to explain “[..] both the technical processes of an AI system and the related human decisions [..]”; Communication, meaning that “[..] humans have the right to be informed that they are interacting with an AI system. This entails that AI systems must be identifiable as such. In addition, the option to decide against this interaction in favour of human interaction should be provided where needed to ensure compliance with fundamental rights” (see page 18 of the Ethics Guidelines).
This means: “[..] Besides the consideration and involvement of all affected stakeholders throughout the process, this also entails ensuring equal access through inclusive design processes as well as equal treatment [..]”; it includes: Avoidance of unfair bias; Accessibility and universal design; Stakeholder Participation, (see pages 18-19 of the Ethics Guidelines).
This means: Sustainable and environmentally friendly AI; Social impact; Society and Democracy, (see page 19 of the Ethics Guidelines).
This means “[..] that mechanisms be put in place to ensure responsibility and accountability for AI systems and their outcomes, both before and after their development, deployment and use”; it includes: Auditability; Minimisation and reporting of negative impacts; Trade-offs; Redress, (see pages 19-20 of the Ethics Guidelines).
See pages 32 and 33 of the Ethics Guidelines.
As an example of AI implementation in remote assistance, especially regarding chronic diseases, see the project conducted by the Campus Bio-Medico of Rome, which tested, on a group of 22 patients with chronic obstructive pulmonary disease, a home telemonitoring system able to detect potentially dangerous events for patients. Developed by the Processing Systems and Bioinformatics Unit of the Campus Bio-Medico of Rome, the system works through machine-learning AI techniques and acquires heart rate and haemoglobin saturation data three times a day through a pulse oximeter connected to an app specifically designed for smartphones. In this way the system evaluates possible situations of danger for the patient and reports them to the health unit. The experiment showed that the data collected through the system, in terms of the recognition of potentially dangerous events, were better than those obtained by medical experts (see www.unicampus.it/ricerca/unita-di-ricerca/sistemi-di-elaborazione-e-bioinformatica and page 47 of the White Paper on AI published by AgID in March 2018, see footnote n 19 above).
In this sense the WHO, in the “Recommendations on digital Interventions for Health System Strengthening”, stated that: “[..] digital health interventions are not a substitute for functioning health systems, and that there are significant limitations to what digital health is able to address. Digital health interventions should complement and enhance health system functions through mechanisms such as accelerated exchange of information, but will not replace the fundamental components needed by health systems such as the health workforce, financing, leadership and governance, and access to essential medicines [..]”, (see page V, Executive Summary, https://apps.who.int/iris/bitstream/handle/10665/311977/WHO-RHR-19.8-eng.pdf?ua=1).
In this sense the European Parliament “recognises that while most of the investment and innovation in this area comes from private sector ventures, Member States and the Commission should also be encouraged to continue investing in research in this sector and outline their development priorities; welcomes the InvestEU proposal and other public-private partnerships that will foster private funding [..] considers that the coordination of private- and public-sector investment should be encouraged to ensure that development is focused; Stresses that investments in AI, which can be characterised by significant uncertainty, should be complemented by EU funding for example from the European Investment Bank (EIB) or the European Investment Fund (EIF), or through InvestEU and the European Fund for Strategic Investments (EFSI), schemes which can help with regard to risk sharing”; for these reasons the European Parliament “Calls on the Member States and the Commission to increase funding in health-related AI technologies in the public and private sectors [..] Calls on the Commission to work on strategies and policies that can position the EU as a world leader in the growing field of healthcare technology”, see the AI European Resolution, paragraphs 2.2 “Investments” and 3.1.2 “Health” (see footnote n 23).