Article Text
Abstract
Objectives: To foster the development of a privacy-protective, sustainable cross-border information system in the framework of a European public health project.
Materials and methods: A targeted privacy impact assessment was implemented to identify the best architecture for a European information system for diabetes directly tapping into clinical registries. Four steps were used to provide input to software designers and developers: a structured literature search, analysis of data flow scenarios or options, creation of an ad hoc questionnaire and conduct of a Delphi procedure.
Results: The literature search identified a core set of relevant papers on privacy (n = 11). Technicians envisaged three candidate system architectures, with associated data flows, to source an information flow questionnaire that was submitted to the Delphi panel for the selection of the best architecture. A detailed scheme envisaging an “aggregation by group of patients” was finally chosen, based upon the exchange of finely tuned summary tables.
Conclusions: Public health information systems should be carefully engineered only after a clear strategy for privacy protection has been planned, to avoid breaching current regulations and future concerns and to optimise the development of statistical routines. The BIRO (Best Information Through Regional Outcomes) project delivers a specific method of privacy impact assessment that can be conveniently used in similar situations across Europe.
Across Europe, there is increasing awareness of the need for standardised health services, stronger cross-border collaboration and comparative performance evaluation.1 Discussion of these topics frequently focuses on technicalities, playing down the challenge of connecting real people whose ethical and cultural values differ, as reflected in different national legislations. Statisticians push stakeholders to gather more complete databases to support policy: as this operation involves wider networks, the interests of individuals become more diluted, and conflicts with the right to privacy may arise. This is a crucial element in the process of linking different sources.
The right to privacy raises the question of how far a society can intrude into the personal lives of its citizens while ensuring that a fundamental societal goal—for example, public health—can be safeguarded. The concept is general, but definitions vary:2 privacy has been recognised as the “right to be left alone”3 or “the right of the individual to be protected against intrusion into his personal life or affairs, or those of his family, by direct physical means or by publication of information”.4
The European Commission’s public health programme supports a whole strand of projects under the banner of health information.5 The programme Best Information Through Regional Outcomes (BIRO; http://www.biro-project.eu/index.html) was started in 2005 to develop a sustainable platform for quality improvement in the treatment of diabetes by linking regional registers and creating benchmarks across various healthcare systems.6 Can such an international effort be realised collaboratively without breaching the existing regulations? If so, to what extent can health records be used for this purpose?
Surprisingly, protection of privacy seems to have been rarely addressed explicitly by design in this field, despite the many international instruments available as common terms of reference. The right to privacy was first recognised in the 1948 Universal Declaration of Human Rights7 and has been progressively reinforced by many other acts and treaties,8 9 10 including legislation by the European Union.11 The creation of the European Court of Human Rights ensured adherence to these principles.12 The advent of information technology created a need for specific rules governing the protection of privacy in the collection, handling, storage and dissemination of personal information, addressed by two crucial international instruments.13 14 Accordingly, medical data have been regulated in the European Union (EU)15 16 with greater privacy protection in consideration of their sensitivity. Finally, the signing of the European constitution,17 incorporating the Charter of Fundamental Rights of the European Union, provided a common ground for the protection of personal data and bioethics,18 which thereafter became binding for all member states. Nevertheless, privacy norms should be interpreted in a way consistent with the goals of scientific investigation and health research, including the attainment of complete data.19
The BIRO proposal included at its inception a specific work package, a privacy impact assessment (PIA), to explore the topic in depth and support researchers and software engineers in the construction of a privacy-protective system architecture. Here we document the main features of the method and the results obtained from its application.
Methods
Introducing the BIRO system
The BIRO project aims to build a common European infrastructure for exchange of standardised information about diabetes through connected regional diabetes registers.20 It comprises a number of work packages addressing the following tasks: identifying target parameters and indicators; defining a common data set/data dictionary; developing a standardised report template; developing ad hoc database and statistical engines to deploy outputs; achieving a validated secure protocol for international communication; and establishing a central web portal to disseminate European reports to a variety of users. The project aims to build a shared evidence-based diabetes information system (SEDIS). Briefly, the system has a structured architecture that involves two data processing steps, corresponding to a local and a global component, linked by a unidirectional flow of information (fig 1).
A basic version of the system runs in each single register (a local SEDIS) to produce initial estimates for the local population. All partners in the network repeat the process, using the same standardised procedures, at their own convenience. Regional estimates are then sent to a central server that compiles the “partial” results into a European report (the global SEDIS). A web portal then delivers user-friendly information for local registers.
Functionality of the system is ensured by three fundamental elements: a concept and data dictionary including standardised evidence-based definitions in XML (extensible markup language) format; a report template to structure the presentation of end results; and the statistical methods required to produce them. The same structure is used to automate the production of BIRO reports for individual centres and the whole network.
The data model includes a BIRO XML export, loaded by a Java-powered database manager into a local (Postgres) database that is directly accessed by R statistical routines to produce local results and “statistical objects”21—that is, “elements of a distributed information system carrying essential data in the form of embedded, partially aggregated components, that can be used to compute a summary measure or relevant parameter for the whole population from multiple sites”. Communication software is used to send objects to a central server, where an ad hoc Java importer loads them into a central BIRO database, and a global repository is maintained. Functions are used to process aggregate data submitted by local registers until a global pooled estimate is produced and published in pdf and html format on a dedicated web portal.
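The idea of a “statistical object” carrying partially aggregated components can be sketched as follows. This is a hypothetical illustration, not the BIRO implementation (which used R routines and a Postgres database): the class name, strata and figures below are invented, and only the principle matters, namely that each register exports a count, a sum and a sum of squares per stratum, from which the central engine can recover pooled means and variances without ever seeing an individual record.

```python
# Hypothetical sketch of a BIRO-style "statistical object": each register
# exports partially aggregated components (count, sum, sum of squares) per
# stratum, and the central engine pools them across sites.
from dataclasses import dataclass

@dataclass
class StatObject:
    stratum: str     # e.g. "female/40-49" (illustrative label)
    n: int           # number of patients in the stratum
    total: float     # sum of the measured value (e.g. HbA1c)
    total_sq: float  # sum of squared values, for the pooled variance

def pool(objects):
    """Combine statistical objects from multiple registers into one
    population-level estimate per stratum."""
    pooled = {}
    for o in objects:
        acc = pooled.setdefault(o.stratum, [0, 0.0, 0.0])
        acc[0] += o.n
        acc[1] += o.total
        acc[2] += o.total_sq
    return {s: {"n": n, "mean": total / n,
                "var": total_sq / n - (total / n) ** 2}
            for s, (n, total, total_sq) in pooled.items()}

# Two registers contribute the same stratum; the pooled mean equals the
# population mean even though no individual record was exchanged.
site_a = StatObject("female/40-49", n=10, total=75.0, total_sq=570.0)
site_b = StatObject("female/40-49", n=30, total=220.0, total_sq=1640.0)
result = pool([site_a, site_b])
```

Because the components are additive, the pooling step is order-independent and registers can submit their objects whenever convenient.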
The BIRO method of privacy impact assessment (PIA)
Specific questions related to privacy, confidentiality and security cannot be answered by technicians independently. What is the minimum aggregation level for data exchange (individual, provider, region, state)? What security measures must be applied? How should the communication process be activated?
A structured procedure was needed to facilitate a general consensus on the following themes: legislation, information needs/content and feasibility of each alternative architecture in terms of practical limits. PIA is a flexible instrument, variously defined as a “process whereby a conscious and systematic effort is made to assess the privacy impacts of options that may be open in regard to a proposal”,22 “an assessment of any actual or potential effects that the activity or proposal may have on individual privacy and the ways in which any adverse effects may be mitigated”,23 and a “protean document in the sense that it is likely to continue to evolve over time with the continued development of a particular system”.24 The method involves prospective identification of privacy issues and risks before systems and programmes are put into place or modified. It assesses impact in broad terms and is process-oriented rather than output-oriented, is systematic and is focused on a list of relevant factors, such as the size of the organisation, the sensitivity of the personal data, the type of risk and the intrusiveness of the technology.
The adoption of PIA in BIRO seemed convenient and cost effective, as it allowed privacy risks and concerns to be minimised by design. A PIA should evolve over time with the continuing development of the system; however, incorporating mitigation strategies directly into the system design, whenever privacy risks cannot be fully avoided, reduces the need for retrospective adjustments of the architecture once the system is fully operational. A multidisciplinary, dedicated PIA team was formed, led by a facilitator expert in international privacy legislation and including at least one representative from each partner institution. The procedure involved four consecutive steps: a preliminary privacy impact assessment, data flow analysis, privacy analysis and a PIA report.
The preliminary part included a discussion on the data flow, focusing on the physical/logical separation of personal information or data. It involved a systematic review of the privacy literature; the search strategy used Ovid Medline with the criteria {privacy AND [(registr* OR register) OR (health information system*) OR (health database*)]} and the limits [human AND English Language AND yr = 2001–2006]. A second search was performed on law journals using the same criteria. A total of 64 biomedical and 11 law articles were identified after exclusion of papers more related to quality of care, privacy laws on research, genetic discrimination and patient recruitment strategies. A core set of 14 papers was selected by comparing abstracts against the main project objectives. Papers were reviewed by the PIA team to complete a comprehensive report of the first step and identify a short list of candidate architectures.
The second step involved a data flow analysis for each of the candidate architectures identified. A modified Delphi consensus procedure was undertaken by the PIA team to define the best alternative by producing the following materials:
data flow tables, including the possible scenarios for the collection, use and disclosure of personal information/data, with a number of possible options;
an information flow questionnaire, to assign marks to each scenario or option; and
an overall consensus table, ranking scenarios or options.
Materials were assembled using the procedure represented in fig 2.
Data flow tables were initially prepared by the PIA facilitator and revised by the whole PIA team. They were finally approved and used to compile the information flow questionnaire. This provided a series of scenarios, broken down into separate suboptions, for each of which marks were assigned on the basis of a set of three essential criteria: privacy, information content for diabetes and technical complexity (feasibility). Scores ranged from 0 (not applicable) to 5 (high).
The score on privacy was split into three separate criteria:25
identifiability, a measure of how much the information available is personally identifiable, on a continuum ranging from full anonymity (no name) to full verinymity (true name);
linkability, a measure of the degree to which data elements can be used to reconstruct the true name of the subject;
observability, a measure of the degree to which any other factor relative to data processing (time, location and data contents) can potentially affect identifiability and/or linkability (effect modifier).
An overall privacy score was assigned as an average of the three privacy dimensions, according to a scale of increasing threat to privacy. The score for the information content criterion was based on the information provided by the specific scenario or option in terms of relevance for diabetes, while the technical complexity score was based on the feasibility of the implementation of the specific scenario or option. The overall mark for each option was based on the average of the three dimensions described above.
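The scoring rule described above can be sketched in a few lines. This is an illustrative reconstruction, not the questionnaire's actual scoring code: the function names and the sub-scores in the example are invented, and the only elements taken from the text are the 0–5 scale, the averaging of the three privacy dimensions, the reversal of the privacy scale so that higher means stronger protection, and the overall mark as the average of the three criteria.

```python
# Illustrative scoring of one scenario option along the three BIRO criteria.
# Raw privacy sub-scores (identifiability, linkability, observability) are
# on a 0-5 scale of increasing threat, so the average threat is reverted
# (5 - threat) before combination, making higher always mean "better".
def privacy_score(identifiability, linkability, observability):
    threat = (identifiability + linkability + observability) / 3
    return 5 - threat  # revert: higher = stronger privacy protection

def overall_mark(identifiability, linkability, observability,
                 information_content, feasibility):
    """Average of the three criteria: privacy, information content,
    technical complexity (feasibility)."""
    p = privacy_score(identifiability, linkability, observability)
    return (p + information_content + feasibility) / 3

# Hypothetical marks for an "aggregation by group of patients" option:
# low privacy threat, good diabetes information content, feasible.
mark = overall_mark(identifiability=1, linkability=1, observability=2,
                    information_content=4, feasibility=4)
```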
The information flow questionnaire was distributed by email to the PIA team, and each member was asked to independently assign marks to each option from a remote location. The distribution, median and mean of scores were computed (with the privacy scale reverted so that higher values indicate stronger privacy protection) and a final overall score was assigned to each option. In a second phase, the panel met to carry out an interactive consensus process, chaired by the PIA facilitator, aimed at converging towards the best architecture.
The Delphi consensus session took place in Cyprus during the 2nd BIRO Investigator Meeting (23–25 May 2007). Initial scores provided independently by members of the PIA team were collated and discussed in order to reach an agreement on common criteria. The selection process involved value judgements about different options for each criterion, requiring specific expertise. For each case, relevant experts explained the content and meaning of the option, justifying their marks. Members of the panel were given the opportunity to ask questions, allowing a completely informed consensus process to be finalised. All results were included in the overall consensus table, presenting options ranked by overall scores, with ties ranked by increasing threat to privacy. The best architecture was defined as the mix of best options for all dimensions examined.
The final step involved an analysis of the selected architecture and the compilation of all materials and results into an overall report.
Results
The accomplishment of PIA tasks provided essential input for the development of all major components of the BIRO system.
Three main candidate architectures were identified, with different levels of data sharing.
The first alternative required the transmission of “individual patient data, de-identified through a pseudonym,” secured by an encryption algorithm and privacy protective communication technologies.
The second alternative envisaged data shared as “aggregation by group of patients, with Centre’s identifications available in de-identified form, securely encrypted,” transferred using privacy protective solutions.
The third alternative was based on aggregation by region, optimised to impede reverse engineering, with the usual secure data transfer.
Details of the three alternatives were used to compile the data flow tables and the information flow questionnaire. The Delphi panel selected the best alternative by ranking the three alternative scenarios, including options for their implementation. The criteria of the resulting BIRO system architecture (fig 3) were duly taken into account during implementation.
According to this structure, each participating region applies standard definitions to map the local database to a common “export” that is stored locally as a “BIRO database”. Specialised software is applied to deliver standardised reports and a set of aggregated tables produced for each local database. Such tables are structured to support a cumulative report delivering results for the target list of diabetes indicators across the whole network.
The statistical engine provides the fundamentals for aggregation. For groups of patients, the number of subjects with specific values of a single characteristic (such as Hb A1c) is saved as a count, optionally stratified for selected variables (such as sex and age). Multidimensional patterns are a special case of aggregate data used to produce risk-adjusted outcome indicators. In this case, the system stores specific combinations of values across multiple variables (for example, females over 40 who smoke, have a history of cardiac complications and present a high level of Hb A1c). In both cases, a minimum sample size has been identified (n = 5) to impede reverse identification for sparse cases.
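The small-cell rule described above is simple enough to sketch directly. The threshold (n = 5) comes from the text; everything else (function names, strata labels, counts) is illustrative. Counts below the threshold are suppressed before a table ever leaves the register, so sparse combinations of characteristics cannot be used for reverse identification.

```python
# A minimal sketch of the BIRO small-cell rule: stratified counts with
# fewer than 5 subjects are suppressed before transmission, to impede
# reverse identification of sparse cases.
MIN_CELL = 5  # minimum group size identified in the privacy analysis

def suppress_small_cells(table):
    """Replace counts below the threshold with None (suppressed cell)."""
    return {stratum: (count if count >= MIN_CELL else None)
            for stratum, count in table.items()}

# Hypothetical stratified counts for one centre.
counts = {"male/under-40": 17, "female/over-40/smoker": 3}
safe = suppress_small_cells(counts)
```

A real implementation would also need to guard against inferring a suppressed cell from marginal totals; the sketch shows only the basic rule.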
Statistical properties (for example, arithmetic mean, percentiles, etc) are exploited to define the target objects that will be transmitted in separate bundles over the network. In this way, international reports avoid many potential risks and restrictions imposed by privacy legislation, with no exchange of individual records.
The local processing of BIRO is controlled by integrated software linking the different modules through a simple graphical user interface allowing users to export local data to XML files, to add them to a local database and to produce local reports and statistical objects for the central BIRO system.
Specialised communication software has been developed to securely transmit statistical objects as encrypted compressed folders containing comma-delimited text files (with file extension .csv). Security has been addressed comprehensively in accordance with ISO/OSI 7498-2. For authentication, digital certificates trusted by a common certification authority were exchanged and installed in sender and receiver. Access control was configured so that only trusted identities were authorised to connect to services. Security was also provided by using encryption, and data integrity and non-repudiation were assured by digital signatures.
Web services were selected as the core technology for communication for their compliance with standards set by the World Wide Web Consortium (W3C). A web service is a software system designed to support interoperable machine-to-machine interaction over a network. Other systems interact with the web service in a manner prescribed by its description using SOAP (simple object access protocol) for messaging, typically conveyed over HTTP (hypertext transfer protocol) with an XML serialisation, in conjunction with other web-related standards—for example, the security extensions XMLenc (XML encryption) and XMLsig (XML digital signatures). Apache Axis2, together with Apache Rampart, running on Java 2 Platform, Enterprise Edition, was chosen for pilot development and configuration of the sending and receiving applications.
Encryption and digital signatures were applied as two layers. First, transport layer security using HTTPS (ie, HTTP protocol together with SSL (secure sockets layer)), was used to protect the entire data stream exchanged between sender and receiver. Second, at the application layer, individual chunks of data were encrypted and digitally signed, giving the application full control over further utilisation, storage and processing of digital signatures and other security-related information.
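The application-layer step (verify before processing) can be illustrated with a stand-in sketch. The BIRO pilot used X.509 certificates and XML digital signatures over SOAP; the HMAC below is not that mechanism — it provides integrity with a shared key, not the non-repudiation of a true digital signature — and the key, payload and function names are all hypothetical. What the sketch does show is the pattern: a compressed csv bundle carries an authentication tag, and the receiver rejects any bundle whose tag does not verify.

```python
# Stand-in sketch of the application-layer protection of a csv bundle:
# compress, attach an authentication tag, and verify the tag on receipt
# before decompressing. Real BIRO transfers used X.509 digital signatures;
# HMAC here only illustrates the verify-before-processing pattern.
import hashlib
import hmac
import zlib

SHARED_KEY = b"demo-key"  # hypothetical; real systems use certificate keys

def package(csv_bytes):
    """Compress a csv payload and attach an authentication tag."""
    compressed = zlib.compress(csv_bytes)
    tag = hmac.new(SHARED_KEY, compressed, hashlib.sha256).hexdigest()
    return compressed, tag

def unpack(compressed, tag):
    """Verify the tag before decompressing; reject tampered bundles."""
    expected = hmac.new(SHARED_KEY, compressed, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("bundle failed integrity check")
    return zlib.decompress(compressed)

payload = b"stratum,n\nfemale/40-49,12\n"
blob, tag = package(payload)
restored = unpack(blob, tag)
```

In the two-layer design described above, this application-layer check sits inside an HTTPS transport channel, so a tampered or replayed bundle is caught even if the transport layer is terminated at an intermediary.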
A central engine is used by a server administrator to load statistical objects received from partial analyses as csv files, and to run the overall analysis for the global BIRO report. The unique administrator ensures compliance with all national and international security rules in the maintenance of the server, as specified in the preliminary PIA report.26
Results are stored in a server database that will be connected to a web portal in charge of delivering online reports to end users, bundled with proper data definitions and methodological references.
Discussion
The EU has adopted a comprehensive privacy protection model based on a general body of principles governing all aspects of the handling of personal information, from collection to use and dissemination, in both the public and private sectors. The EU’s data protection directive27 established a common level of privacy protection through a regulatory body, reinforcing data protection laws and establishing a range of new rights and basic principles, including the rights to know where the data originated, to have inaccurate data rectified, to have recourse in the event of unlawful processing and to withhold permission to use data in some circumstances. Article 7 sets the criteria for “legitimate processing”, while article 8 provides for more stringent protection of the use of sensitive data, such as medical records, whose processing is considered not legitimate in principle and must be prohibited by member states unless special conditions (therein listed) occur. For instance, the processing of health data is legitimate when carried out for the purposes of preventive medicine, medical diagnosis or provision of care or management of health services and where those data are processed by a health professional subject to the obligation of professional secrecy or by another person also subject to an equivalent obligation of secrecy under national law or rules established by national competent bodies (article 8(3)). This rule justifies the collection, use and processing of health data, for the specified purposes, without the patient’s consent, which, however, would be required if the same data were to be used for research purposes or any other secondary use. The reference to professional secrecy contained in the article is also crucial for obtaining more effective protection of privacy in the handling of sensitive data.
Article 8(4) states that member states may lay down additional exemptions for reasons of substantial public interest—for example, public health—either by national law or by decision of the supervisory authority.
In order to conduct scientific research without falling under the binding rules of the data protection directive, data should be rendered anonymous. Recital 26 of the directive states that “principles of protection shall not apply to data rendered anonymous in such a way that the data subject is no longer identifiable”. Article 2 specifies that an “identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity”. In order to determine whether a person is identifiable, “account should be taken of all the means likely reasonably to be used either by the controller or by any other person to identify the said person”. When the data subject could be identified with reasonable means (directly from the data itself or indirectly through the combination of other means), data cannot be considered anonymous and, therefore, fall under the directive’s principles,28 including the need to obtain expressed consent from the data subject.
However, the identification of the data subject through “reasonable means” is a vague concept. In each particular case, reference to the state of the art in decoding and/or other similar techniques should be made to indirectly assess what “reasonable means” stands for. The definition involves an ad hoc evaluation of the likelihood of reidentification based upon technical matters.28 Data can then be considered anonymous when “it would be reasonably impossible for the researcher and for any other person to re-identify the data”.29 In such a case, the interest of data subjects in keeping their data private and confidential is protected “ipso iure” by anonymisation, rendering the processing legitimate even without consent. Accordingly, data processed anonymously for research purposes should be regarded as falling outside the scope of the directive whenever no direct/indirect identification is possible by reasonable means, according to the state of the art.
The directive imposes an obligation on member states to ensure that personal information related to EU citizens has the same level of protection when it is exported to, and processed in, countries outside the EU. As a result, countries refusing to adopt adequate privacy protections may be unable to engage in certain types of information flows with Europe, particularly when they involve transmission of sensitive data.
In line with the data protection directive, in 1997 the Council of Europe enacted a recommendation regarding the protection of medical data,30 acknowledging that medical data require even more protection than other non-sensitive data and reaffirming that the respect of rights and fundamental freedoms, in particular the right to privacy, must be guaranteed in the collection and processing of medical data. The processing of medical data is in principle prohibited, unless appropriate safeguards are provided by domestic law. One such safeguard is that only healthcare professionals, bound by rules of confidentiality, should process medical data, though persons acting on their behalf are also allowed to perform the same duties if subjected to the same or similar rules. According to the recommendation, medical data may be collected, from the data subject or from other sources, if permitted by law, for public health reasons (principle 4.3(a)) and for the purposes listed in principle 4.3(b):
for preventive medical purposes or for diagnostic or therapeutic purposes (in this case data may also be processed for the management of medical service operating in the interests of the patient);
to safeguard the vital interests of a data subject;
to respect specific contractual obligations;
to establish, exercise or defend a legal claim.
Thus, the recommendation reaffirms and strengthens the rules set forth by the directive.
Medical data may be collected without consent “for the purposes of” (ie, in the interests of) public health, including the management of health services. For health research, the processing of health data is considered legitimate whenever data are rendered anonymous, with techniques being continuously updated and kept efficient. Accordingly, health data handled for research purposes must not be published in a form that enables data subjects to be identified, unless data subjects have given their consent for publication or such publication is permitted by domestic law.
In summary, the EU and international legislative instruments consider the right to privacy to be, broadly speaking, not an absolute right, but a right that should be weighed against other matters/rights that benefit societies, including public health. The exemptions to the prohibition on processing operations involving personal data, such as those envisaged for healthcare and health research, constitute clear examples of the non-absolute nature of the right to privacy. In other words, privacy protection is conceived as a general value that in principle must not unnecessarily jeopardise health research. The interest of societies in enhancing population health strongly depends on the possibility of conducting appropriate research in the health sector. The availability of personal data is fundamental for this purpose.
Considering that the interests of privacy protection and health research might conflict on issues surrounding the increasing demand of researchers to access data in identifiable form, appropriate methodologies and techniques should be implemented to achieve an appropriate balance between the two interests. Privacy impact assessment was the solution chosen to explore the above principles in the context of BIRO, where we needed an optimal balance between privacy protection and the efficient conduct of statistical analyses.
The project involves medical records collected by diabetes registries at national or regional level, to be processed for benchmarking and public health monitoring at the international level. The privacy analysis covers the identification of privacy issues that might arise in the transfer of data from the collaborating centres to the central BIRO database.
In this case, local data processing is subject to article 8 (paragraph 3) of the EU directive: each centre collects information related to an identified or identifiable natural person for the purpose of setting up diabetes registries. Hence, data could be considered collected and processed for purposes of preventive medicine, medical diagnosis, the provision of care or treatment or the management of healthcare services. According to this article, the data collector is exempted from requesting consent from the data subject, in consideration of the need to protect the competing and general interests of societies in improved healthcare. Further processing of these data, other than caring for the patient and managing health services, would not be covered by the exemptions of article 8 (paragraph 3): in other words, consent would be required for any secondary use of those data. However, for research and statistical analysis, even if consent was required in the first instance (article 11, paragraph 2), the provision of information to the data subject could be waived if it proves impossible or would involve a disproportionate effort.
The exemptions provided by the directive are in line with the principles contained in the Convention on the Protection of Individuals for the Automatic Processing of Personal Data (1981),13 envisaging the possibility of restricting the exercise of the data subject’s rights with regard to data processing operations that pose no risk (article 9, paragraph 3). Examples of no-risk or minimal-risk operations are therein considered, in particular, the use of data for statistical work, in so far as those data are presented in aggregate form and stripped of their identifiers, as in BIRO. Similarly, scientific research is included in this category.
Hence, as far as the EU legislation is concerned, data processing in BIRO is to be considered legitimate, although domestic laws may provide more stringent rules to be specifically examined in each case.
Regarding data transmission, BIRO centres send only aggregate records to the central server (fig 3). For the most sensitive variables, aggregated records are not transmitted if groups contain fewer than five patients. Statistical objects are sent as tables stored in compressed bundles of flat-text comma-delimited (csv) files. Hence, there is no possibility, either direct or indirect, that a patient could be identified with the use of “reasonable means”. In broad terms, the disclosure of information related to clinical centres or individual professionals may also pose particular privacy concerns: the consortium felt that this factor could jeopardise the level of data sharing and possibly discourage participation in the project.
The issue raises an interesting point that may constitute a future area of contention: disclosing information about small centres may lead, without the use of unreasonable means, to the identification of doctors and, possibly, of individual patients. In addition, it could imply judgements about the performance of individual centres. Centres’ identifications have therefore been protected through the use of pseudonyms, together with a reporting system based on percentages rather than on absolute numbers. In this way, the size of a single centre would be hidden, avoiding its indirect identification by third parties.
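The two centre-protection measures described above (pseudonyms plus percentage-based reporting) can be sketched together. This is a hypothetical illustration: the keyed-hash construction, the salt, the centre name and the counts are all invented, and a production system would choose its pseudonymisation scheme (and key management) to comply with the applicable domestic rules. The point is simply that the pseudonym is stable but non-reversible for third parties, and that percentages hide the absolute size of a centre.

```python
# Hypothetical sketch of the centre-protection step: a keyed pseudonym
# hides each centre's identity, and counts are reported as percentages so
# centre size cannot be inferred by third parties.
import hashlib

SALT = b"registry-secret"  # kept by the local register, never transmitted

def pseudonym(centre_id):
    """Derive a stable, non-reversible short code for a centre."""
    return hashlib.sha256(SALT + centre_id.encode()).hexdigest()[:8]

def as_percentages(counts):
    """Report shares instead of absolute numbers, masking centre size."""
    total = sum(counts.values())
    return {k: round(100 * v / total, 1) for k, v in counts.items()}

# Illustrative centre name and counts.
report = {pseudonym("Centre-Umbria-01"):
          as_percentages({"HbA1c<7%": 30, "HbA1c>=7%": 70})}
```

Without the salt, recovering the centre's identity from the 8-character code would require guessing both the key and the original identifier, which is beyond the “reasonable means” discussed above.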
Aggregated statistical objects are sent to the central statistical engine to carry out the global analysis. Communication software has been specifically developed to ensure secure information exchange between the regional systems and the central SEDIS. To facilitate secure data transmission in BIRO, modern technologies have been selected and successfully used, complying with security requirements enshrined in both EU and international data protection norms. Global reporting does not pose any direct or indirect risk to privacy, as the anonymous data sent by BIRO centres are transmitted to the SEDIS in a secure environment and processed further in aggregate form.
The last issue relates to transborder data flow: the central database is located outside national boundaries. The BIRO system, as already demonstrated, processes only anonymous data; therefore, privacy rules should not limit its implementation. Nevertheless, the free flow of information, regardless of frontiers, is also a principle enshrined in article 10 of the European Human Rights Convention (http://www.echr.coe.int/nr/rdonlyres/d5cc24a7-dc13-4318-b457-5c9014916d7a/0/englishanglais.pdf). Accordingly, article 12 of the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data13 and article 25 of the directive on data protection27 regulate transborder data flow. The main rule contained in article 12 (paragraph 2) of the convention13 is that, in principle, obstacles to transborder data flows are not permitted between contracting states in the form of prohibitions or special authorisations of data transfers. The rationale for this provision is that all contracting states, having subscribed to a common core of data protection provisions set out in chapter II, offer a certain minimum level of privacy protection.
Where the protection of medical data can be considered in line with the principle of equivalent protection laid down in the convention,13 no restriction should be placed on the transborder flow of medical data towards a state that has not ratified the convention but whose legal provisions ensure protection in accordance with the principles contained in the convention and the related recommendation.30
Therefore, the EU directive allows the cross-border flow of personal data only when an adequate level of privacy protection is envisaged in the countries involved in the processing operations.
Consistent with the interpretation of the convention, countries that have fully implemented the directive are automatically allowed to execute transborder data flows: complying with the directive ensures, ipso iure, an adequate level of protection.
BIRO centres belong to European countries that have fully implemented the EU data protection directive27 and ratified the convention13; hence, an adequate level of privacy protection is fully guaranteed across those countries. This means that the exchange of data envisaged in the BIRO project is legally viable, considering the architecture of the system and the composition of the consortium. In accordance with EU and international legislation, reports will never allow either the data subjects or the local centres to be identified.
Finally, potential privacy risks in the use of the BIRO system are summarised in table 1, which shows the nature and level of the identified risks, along with the required mitigation strategies. Technological solutions have been duly implemented in BIRO to address these major potential weaknesses.
Conclusions
The BIRO project aims at implementing an international health information system linking data from different diabetes registries. The present paper shows that its architecture fulfils privacy protection requirements by addressing and resolving broad concerns from different angles. Future developments should also address conditions beyond the usual boundaries of personal involvement, such as professional and institutional integrity in the conduct of health research. The architecture of the BIRO system flexibly affords the best privacy protection in the construction of an efficient model for the continuous production of European reports.
The project fulfils the ethical “principle of beneficence” by providing better information for planning and management that is directly associated with improved health outcomes.31
Although results on the effect of public reporting of performance indicators are still contradictory,32 33 evidence suggests that the widespread use of activity reports among health professionals may standardise quality of care and improve the performance of participating centres.35
The BIRO project also attempts to reach the best trade-off between the right to privacy and the right to better healthcare. We believe that the BIRO system fully respects individual rights by guaranteeing processing operations on anonymous data. The European Commission through its continuing support (European Best Information Through Regional Outcomes in Diabetes (EUBIROD), http://eubirod.eu/) recognises the added value of the BIRO system for European society.
The privacy impact assessment approach developed and applied in BIRO may represent a general method that can be used to tailor specific tools for the design of transborder health information systems.
Main findings
Structured linkage of very large clinical databases can help in identifying best practices across Europe.
An adequate balance between the individual and public interests should be found to avoid conflicts between privacy protection laws and public health goals.
Privacy impact assessment may represent a general solution to build robust privacy-protective information systems by design.
The BIRO project defined a novel method for the construction of a European information system in the field of diabetes.
Acknowledgments
The authors are grateful to Fred Storms (CBO, Netherlands) and Amanda Adler (University of Oxford, UK) for their valuable suggestions and supportive participation in the selection of the BIRO architecture in the context of the final consensus panel.
Footnotes
Competing interests None.
Provenance and Peer review Not commissioned; externally peer reviewed.