GDPR and Blockchain: Can they Coexist?

Posted by fgilbert on December 16th, 2018

GDPR and blockchain do not coexist easily. GDPR attempts to ensure that personal data is retained for as short a period as possible, give individuals control over their personal data, and allow easy modification, correction or erasure at any time at the individual’s request. Blockchain is intended to serve as an immutable ledger, where transactions cannot be repudiated, and records cannot be changed by anyone. Public or permissionless blockchains are operated under rigid rules that may not be compatible with GDPR. Private or permissioned blockchains, which can establish rules of operation, have more flexibility and may have a better chance of being in line with GDPR.

The GDPR applies worldwide, within and outside the European Economic Area (EEA), to the extent that personal data is processed in connection with the sale of goods or services to individuals located in the EEA. There has not been any guidance on how blockchain can meet GDPR requirements. It is already clear that some aspects of the GDPR may be very difficult to accommodate when personal data is recorded in a blockchain ledger. Given the speed of development of blockchain around the world, guidance is urgently needed.

Personal Data

Blockchain is undoubtedly a vehicle for the processing of “personal data”. Under GDPR, the term is defined broadly to apply to any information about an individual who is, or can be, identified. It incorporates a wide variety of data, from contact or health information to cookies, IP addresses or device identifiers. Because of this broad definition, almost anything that is or can be linked to an individual is deemed personal data under GDPR. Personal data that has undergone pseudonymization, and that could be attributed to a natural person through the use of additional information, is also deemed “personal data” subject to GDPR.

Blockchain is often used to record events associated with an individual, as opposed to a corporate entity. It is common to do so by using pseudonymized information that has been associated with the public cryptographic key of the participants. The mere use of an identifier instead of the name of a person would not be sufficient to take pseudonymized data outside the scope of the definition of personal data if the person may be re-identified because that identifier is otherwise available. Only personal data that has been rendered anonymous in such a manner that the individual is not, or no longer, identifiable is outside the scope of the GDPR.
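The distinction can be made concrete with a minimal Python sketch (the names, salt, and candidate list are hypothetical illustrations, not taken from any real system): a hashed identifier hides the name, but anyone holding the hashing inputs can re-link records to the individual, which is why such data remains "personal data" under the GDPR.

```python
import hashlib

SALT = b"example-salt"  # hypothetical; in practice held secretly by the controller

def pseudonymize(name: str) -> str:
    """Derive a stable pseudonym from a name. The output no longer reveals
    the name, but anyone holding the salt can recompute it and re-link
    records to the individual, so this is pseudonymization, not anonymization."""
    return hashlib.sha256(SALT + name.encode("utf-8")).hexdigest()

# The ledger stores only the pseudonym...
record = {"participant": pseudonymize("Alice Example"), "event": "sale"}

# ...but re-identification is trivial for anyone who holds the salt and a
# candidate list of names: the data therefore stays within the GDPR's scope.
candidates = ["Bob Example", "Alice Example"]
relinked = [n for n in candidates if pseudonymize(n) == record["participant"]]
print(relinked)  # → ['Alice Example']
```

Only if the salt is destroyed and no other re-linking information exists could the data plausibly be argued to be anonymous and outside the GDPR.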

Legal Basis for the Processing

The GDPR prohibits the collection or processing of personal data unless there is a “legal basis” for the processing. A blockchain-based application must be able to identify one or more of the six legal bases. The most relevant ones are likely to be that the processing is necessary for: (i) the performance of a contract to which the individual is a party; (ii) compliance with a legal obligation to which the data controller is subject; or (iii) the purposes of the legitimate interests pursued by the controller or by a third party, unless these interests are overridden by the interests or fundamental rights and freedoms of the individuals. In some cases, a legal basis can be established only by obtaining the individual’s consent to the processing of the personal data.

Blockchain projects are usually associated with the performance of a contract or a transaction, where the parties wish the transaction to be recorded. In most cases, the project is likely to meet one or more of the requirements above. Blockchain users should make sure that this information is recorded and shared with the affected individuals.

Storage Limitation

The GDPR wants the interaction with personal data to last only while the data is needed. Blockchain is intended to create an immutable record. One of its key features is the ability to retain data indefinitely, to enable the parties to prove that a transaction occurred. Blockchain users will have to be prepared to argue why the transaction recorded in the blockchain must remain accessible indefinitely. For example, if the event is the sale of ephemeral or perishable goods (e.g. food or flowers), while there is no doubt that the record of the sale from person A to person B should be kept for a certain time in order to retain evidence, it would be much more difficult to argue that it should be kept indefinitely, past the statute of limitations for claims under the sales contract. In a permissioned blockchain environment, these concerns might be addressed through the rules of operation of that blockchain, for example by allowing for the deletion or archival of the data after a specified period of time.
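One way a permissioned blockchain's rules of operation might encode such a retention period is sketched below in Python. The four-year window, record structure, and dates are illustrative assumptions only; an actual retention period would be set by the applicable statute of limitations.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention window, e.g. a statute-of-limitations period for
# claims under a sales contract. Not a value prescribed by the GDPR.
RETENTION = timedelta(days=4 * 365)

def is_expired(record_time: datetime, now: datetime) -> bool:
    """Rule-of-operation check a permissioned chain might apply before
    deleting or archiving the payload data associated with a record."""
    return now - record_time > RETENTION

# Toy ledger of recorded sales (dates are made up for illustration).
ledger = [
    {"event": "sale of flowers", "at": datetime(2014, 1, 10, tzinfo=timezone.utc)},
    {"event": "sale of flowers", "at": datetime(2018, 6, 1, tzinfo=timezone.utc)},
]
now = datetime(2018, 12, 16, tzinfo=timezone.utc)

# Only the 2014 record falls outside the retention window.
to_archive = [r for r in ledger if is_expired(r["at"], now)]
```

In practice the expired payload would be archived or deleted off-chain, while the chain itself could retain only a hash as evidence that the transaction occurred.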

Security, Integrity and Confidentiality

The GDPR requires both data controllers and data processors to adopt a written information security program to reduce the risk of security breach, intrusion, modification of the data, or ransomware attack. The program is expected to include appropriate technical, physical and administrative measures. Who is responsible for maintaining proper security when the network can be accessed through multiple nodes? To date, blockchain technology has suffered spectacular security breaches, in particular targeting cryptocurrencies. Keep in mind that any chain or network is only as strong as its weakest link.

Data Protection by Default

The GDPR requires that companies follow “data protection by default” principles. “Data protection by default” requires that, by default, personal data not be made accessible to an indefinite number of natural persons without the data subject’s intervention. In the blockchain ecosystem, however, the content of the ledger must be accessible to others. Until the meaning of “data protection by default” is clarified, this conflict remains unresolved. Should the blockchain application ensure that no personal data of a participant is recorded until the participant has confirmed that their personal data can be made public?

Cross Border Data Transfers

The GDPR restricts the transfer of personal data to countries that do not provide adequate protection. Aside from a small number of countries outside the EEA (for example, Canada, Israel, Switzerland or Uruguay), the remainder of the world does not meet the GDPR standards. A permissionless blockchain ignores borders. It is intended to be accessible from any geography through multiple nodes. In that case, all nodes might be required to execute proper data processing agreements that incorporate appropriate EU Commission Standard Contractual Clauses to guarantee proper protection of the personal data of EEA residents. Further, any entity that accesses data stored on the blockchain may also have to provide appropriate guarantees that it will meet the GDPR standards.

A permissioned blockchain might be better able to address cross-border data transfer restrictions. It could make it a condition for participation that the applicant execute all documents necessary as part of the admission process, and these documents could include EU Standard Contractual Clauses or a Code of Conduct that meets the GDPR requirements.

Right of Correction

The GDPR grants numerous rights to data subjects, some of which appear to be incompatible with the blockchain. The GDPR grants the right to have incorrect personal data rectified and to have incomplete personal data supplemented. The structure of the blockchain does not allow for any such changes. Any attempt to modify the information recorded about a prior transaction could break the chain, and the transactions that were conducted in reliance on the preexisting data could not be erased or superseded. In a permissioned blockchain, there might be more flexibility, through the addition of special rules. However, it should be kept in mind that individuals cannot give up their right to have incorrect personal data rectified. This is a fundamental right in the European Union, under Article 8 of the EU Charter of Fundamental Rights.
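Why "rectifying" a record breaks the chain can be shown with a toy hash-chained ledger in Python (a deliberate simplification of real blockchain structures; the transaction payloads are made up): each link's hash covers the previous hash, so altering one recorded transaction invalidates verification of every later link.

```python
import hashlib
import json

def link_hash(prev_hash: str, payload: dict) -> str:
    """Hash of the previous link's hash combined with this link's payload."""
    data = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(data.encode("utf-8")).hexdigest()

def build_chain(payloads):
    """Build a list of links, each carrying its payload and chained hash."""
    chain, prev = [], "genesis"
    for p in payloads:
        prev = link_hash(prev, p)
        chain.append({"payload": p, "hash": prev})
    return chain

def verify(chain) -> bool:
    """Recompute every hash; any edit to an earlier payload fails verification."""
    prev = "genesis"
    for link in chain:
        if link["hash"] != link_hash(prev, link["payload"]):
            return False
        prev = link["hash"]
    return True

chain = build_chain([{"tx": "A->B"}, {"tx": "B->C"}])
assert verify(chain)

# "Rectifying" the first recorded transaction breaks the chain:
chain[0]["payload"]["tx"] = "A->B (corrected)"
assert not verify(chain)
```

Rectification on such a structure therefore requires either rebuilding the chain from the corrected point onward or appending a correcting record, which is why special rules in a permissioned environment are the more realistic route.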

Right of Erasure

The blockchain may be able to resist the “right of erasure” under the GDPR. The “right of erasure” exists only in limited specific circumstances, including:

  • The data is no longer necessary for the purpose for which it was collected
  • The data subject withdraws consent to the use of the data
  • The data subject objects to the processing of the data and there are no other legal grounds for the processing
  • The data subject objects to use of the data for marketing purposes
  • The data has been unlawfully collected

There are numerous exceptions; two of them appear the most viable in the blockchain environment. The right of erasure does not apply if the data is necessary “for archiving purposes in the public interest” in so far as the erasure likely would “render impossible or seriously impair the achievement of the objectives of that processing.” It also does not apply if the processing is necessary for the establishment, exercise or defense of legal claims. Since the primary purpose of the blockchain is to provide the ability to prove that a transaction has occurred, it seems that either or both of these exceptions would stop attempts at erasing existing records.


Some of the essential features of blockchain tend to conflict with GDPR. Blockchain promotes immutability and data sharing, among other features. Under GDPR, personal data must be able to be changed so that it remains accurate, and data sharing is prohibited without permission. Companies that wish to take advantage of blockchain should carefully evaluate the potential obstacles created by the GDPR when structuring their application. When privacy is a concern, a permissioned blockchain might be a more viable option than a permissionless one because it allows the creation of supplemental rules of operation that might have a better chance of meeting the numerous, stringent GDPR requirements.


The EU General Data Protection Regulation and Its Implications for US Insurance Companies

Posted by fgilbert on August 2nd, 2018

An article published by Francoise Gilbert in collaboration with the Greenberg Traurig Insurance Department.

Summer 2018 Magazine Reprint


All you wanted to know about the GDPR

Posted by fgilbert on April 2nd, 2018

Extensive presentation by Francoise at a Bay Pay event.



90 days to May 25, 2018 – Does your Business Meet its GDPR Obligations?

Posted by fgilbert on February 21st, 2018

The EU General Data Protection Regulation – or GDPR – goes into effect in 90 days, on May 25, 2018.  With such a name, it would be easy to conclude that the law governs only the activities of businesses established in the European Union (EU) or European Economic Area (EEA), and that those established elsewhere are not concerned.

This is not the case.  Organizations that are not established within the EU/EEA are subject to GDPR when they process personal data of individuals who are in the EU/EEA if the processing activities are related to:

  • The offering of goods or services to such individuals in the EU/EEA, even if payment is not required, or
  • The monitoring of their behavior, to the extent that their behavior takes place within the EU/EEA. Profiling of individuals based on their use of the Internet is an example of such monitoring.

In practice, most US businesses – probably 70% – are subject to the GDPR where they collect or process the personal data of individuals located in the EU/EEA.  According to our observations, only a very small fraction of those US businesses that are subject to the GDPR have completed their GDPR compliance overhaul.  Those who have ignored the GDPR or have failed to properly evaluate the extent to which the GDPR might apply to their activities should revisit this analysis and take action as soon as possible to address these obligations, if relevant.

The GDPR is a significant, complex document.  Compliance, therefore, is commensurate with its complexity.  For most businesses, evaluating their practices and conducting all activities that are required to achieve compliance can take three to six months. Numerous larger businesses, such as multinationals, have been working on GDPR implementation for more than two years.

The list of obligations under the GDPR is very long.  The document comprises 272 provisions, divided into 173 recitals and 99 Articles. It is also supplemented by documents issued by the EU institutions, or the Member States themselves. The EU’s Article 29 Working Party, so far, has published at least 13 guidelines. Some local supervisory authorities have published their own guidelines. Some Member States have adopted laws or amendments that relate to the GDPR.

Here are some highlights to keep in mind, among the many others that are written in the GDPR and related documents.

  • Violations of the law are subject to significant administrative fines that can reach up to 20 million euros or, in the case of multinational businesses, 4% of their global annual revenue, whichever is higher.
  • In addition, individuals have a private right of action that allows them to file a complaint in court when they believe that their rights under the GDPR have been violated as a result of the processing of their personal data in non-compliance with the GDPR. They can mandate certain non-profit organizations to lodge the complaint and exercise their right to receive compensation on their behalf, a process that, in its effect, is likely to be similar to that of class action lawsuits customary in the United States.
  • Businesses are prohibited from collecting or processing personal data unless one of six circumstances applies. They are required to state in their privacy notice why they have the right to collect and process the personal data of individuals. A company can no longer simply infer from a person’s visit to a website that the individual has consented to the collection and use of his/her data. Specific consent is required.
  • Businesses have significant obligations that go well beyond current common practices. In particular, there are significant record-keeping requirements as well as limitations on data retention.
  • Products must be designed in accordance with Data Protection by Design and Data Protection by Default principles. In some cases, businesses are required to conduct Data Protection Impact Assessments.
  • Individuals have significant rights, such as right of access, right of correction, right of data portability or right to be forgotten. Businesses have 30 days to respond to a request, which makes it necessary to implement the appropriate technical measures and administrative procedures to respond promptly to requests from individuals.
  • If a company’s core activities require the regular and systematic monitoring of individuals on a large scale, or the processing of special categories of data on a large scale, it must appoint a Data Protection Officer. Special categories of data include, for example, data about health, genetic data and biometric data, religion or sexual life.
  • Privacy notices must be updated to include a large amount of information required by the law.
  • Businesses must amend most of their contracts with third party service providers, or with their own customers if they act as service provider to another entity. These contracts must include numerous provisions mandated by the GDPR.

These are just examples. There is much more. A GDPR compliance project takes a significant amount of time.

To address their obligations under the GDPR, businesses must conduct numerous activities, such as:

  • Start with understanding whether and how the business may have access to personal data of individuals in the EU/EEA, what is done to or with this data, with whom it is shared, and how the business interacts with the individuals for marketing purposes
  • Conduct a gap analysis to determine what needs to be done to comply with the GDPR, and prioritize these activities
  • Address the company’s obligations as a controller or processor
  • Address the restrictions to marketing, targeting, profiling
  • Update the contracts with data processors, subprocessors
  • Document the security program; update the security breach response plan
  • Address the cross-border data transfer restrictions
  • Identify the legal grounds for processing the personal data
  • Update the privacy notice
  • Develop processes to address obligations regarding individuals’ rights
  • Update training for personnel
  • Identify the lead supervisory authority

The GDPR has become a significant part of the US privacy and security legal landscape. It is important for US businesses to pay attention to compliance now because a majority of US businesses – as well as businesses located in other countries outside the EU/EEA – are and will continue to be subject to the GDPR for some of the personal data that they collect.

The GDPR will affect many of the business deals that a company may conduct. As businesses acquire or do business with businesses that are subject to the GDPR, the contracts that are drafted will likely have to address GDPR issues.

There are only 90 days left to take action and address GDPR compliance. If you have not already done so, there is still time.  If you do nothing, the individuals and businesses located in the EU/EEA with whom you want to do business may soon ask whether your company can demonstrate that it is compliant with the GDPR, and if your answer is not satisfactory, may take their business to others who do comply.


Use of Cloud Computing in a Law Office

Posted by fgilbert on October 10th, 2013


Attorneys and law firms are increasingly interested in taking advantage of the proliferation of cloud computing services in their law practice. For example, they might wish to use web-based email to interact with their clients, or subscribe to customer relationship management (CRM) services that are offered as Software as a Service (SaaS) to manage their customer and prospect lists. They may be tempted to store documents in the many storage services that are offered at no charge. New options are emerging every day, as more applications are developed and marketed.

However, while cloud services present significant advantages, the use of cloud computing services by attorneys and law firms presents unique challenges due to the ethical rules to which attorneys are subject. In addition to ethical concerns, services provided in a cloud computing environment present a number of technical, physical, and contractual risks. Cloud computing agreements should be reviewed carefully before venturing into this new, complex form of outsourcing.

The Advantages of Cloud Computing

Cloud computing offers so many advantages that it is difficult to resist the temptation. Many services can be obtained at a significantly lower cost; in many cases, they may be offered free of charge. Thus, it may be less expensive for the law firm to acquire these services from a cloud provider rather than running and maintaining an application on its own server on its own premises. The maintenance is usually included in the offering, so there may be no need to worry about keeping up with updates, as they are installed automatically. The services are accessible from anywhere, a feature of great interest to attorneys who work long hours and may take advantage of the remote access capability to telecommute if needed. Altogether, cloud computing requires less in-house expertise and capability and less infrastructure, which may result in significant savings.

Cloud computing services may provide flexibility. As these services are often sold on demand, a law firm may take advantage of the elasticity to purchase as little as it needs on a regular basis, knowing that it can quickly ramp up and add storage, computing capability, or a few new features if the need arises.

Cloud computing may also provide increased stability and security. Reputable cloud providers usually employ the most up-to-date, sophisticated security measures. Their experienced, adequately trained staff excels at implementing security measures that take into account the current trends. They have access to sophisticated tools to monitor unauthorized access to the systems or manage permissions. These entities also have the ability to put in place sophisticated disaster recovery and business continuity features that are likely to be more powerful and effective than those that a small or lean law practice could implement.

However, entrusting data to cloud providers is not without danger. For instance, a large cloud provider that is known for servicing prestigious customers might also be the target of cyber attacks aimed at disrupting these customers’ operations or accessing their critical data. In addition, attorneys are subject to stringent ethical rules that may hamper their ability to use certain types of cloud services for certain purposes or with certain categories of data.

Ethical Rules

Before starting a search for cloud services that would make your practice so much more efficient, you should first determine whether the Ethical Rules that apply to your profession would allow your law firm to use cloud services. Ethical rules vary from one jurisdiction to another, but they tend to follow some common general principles.

Competence, Confidentiality

Most Ethical Rules that apply to attorneys contain a duty of competence and a duty of confidentiality. Will the professionals who use the new cloud-based program be sufficiently proficient, able to log in and out of the system and to save or annotate documents in a manner that does not put at risk the confidentiality or the integrity of the data?

Duty to Supervise

The Ethical Rules may also contain a duty to supervise and may require an attorney who assigns work or responsibilities to a non-attorney (e.g., the cloud provider) to make reasonable efforts to ensure that the third party’s conduct is compatible with the attorney’s professional obligations.

Duty to Safeguard Client Data

Attorneys are also generally required to keep client property, such as files, information, and documents, appropriately safeguarded. Would a law firm be able to ensure proper safekeeping of the clients’ files if these files were stored in a cloud? Certain cloud services may host the data of several customers on the same server. Would this co-location be deemed an “appropriate safeguard”?

Further, the cloud provider may have structured its network so that the servers are spread throughout the world. Keep in mind that a foreign country would be likely to assert jurisdiction over any server located within its territory. These countries are also likely to have adopted different laws or standards with respect to third party or government access to data, confidentiality, or data ownership.

Duty to Communicate with Client

Finally, Ethical Rules for attorneys may contain a duty to communicate with clients. Would this duty require an attorney or law firm to promptly inform clients of any decision to store the client’s data in a third party’s cloud and to seek their consent?

Given the potential application of these and other ethical rules, it would be prudent for attorneys and law firms that contemplate the use of cloud computing services to review carefully the ethical rules that apply to their profession, in their region, and to review, as applicable, any opinion or guidance that may have been published by the applicable authority that regulates their profession.

How to Manage Cloud Computing Risk

Numerous precautions and measures can be taken by attorneys to reduce their exposure to legal, commercial, and reputational risk in connection with the use of cloud services.

Internal Due Diligence

Before stepping into the cloud, you should conduct internal due diligence in order to determine the potential obstacles or constraints that might prohibit or restrict the use of cloud services by your law firm. For example, you should review the ethical rules that might apply to your organization, as discussed above. You should also determine whether the law firm or any of its professionals has entered into a confidentiality agreement or data use agreement that might restrict the transfer of data to third parties, even if these third parties are service providers. You should also determine whether the proposed plan to use a cloud service or host would require the prior consent of your clients.

Keep in mind, as well, that some data might be so sensitive or confidential that it should not be transferred to the cloud, or the transfer might require significant precautions. This might be the case, for example, for files that pertain to high-stakes mergers or acquisitions.

External Due Diligence; Contracts

Make sure that you understand the particular application or service you are contemplating purchasing. How will the servers be used to process your data? While it is important to involve your information technology team, you should understand how the service will operate, where the servers will be located, whether your data will be collocated with other customers’ data, and how your data will be protected from intrusion or disasters. Ensure that the service will be reliable and easy to use by everyone at the law firm. Conduct appropriate due diligence on the proposed vendor and the proposed applications. Check references. Conduct online searches and/or call current clients to evaluate the vendor’s reputation.

You should also review the proposed contract carefully, even if you are told that it is not negotiable. First, it might actually be possible to negotiate changes. And even if it is not, you should understand the consequences and implications of the engagement you are making. Pay special attention to the disclaimers of liability, confidentiality, intellectual property, and security provisions.

Continuous Access to Data

Service outages happen regularly. It is important to ensure that the cloud service will provide alternative access to data, such as by switching to a server located in a different region if an outage affects a specific data center. The service provider should have in place a robust disaster recovery plan that alleviates the effect of outages.

Consider backing-up your data to an alternative system or a second cloud provider, to ensure that you will be able to access the data in the event of an outage in the vendor’s facility or network, or in the event of a natural or other disaster.

Ensure that you have the ability to change providers when it becomes necessary or desirable to do so. Keep in mind, however, that while it may be feasible to move from one hosting service to another, changing applications, such as a customer relationship management system, is likely to be impossible, or very costly.

Many cloud contracts provide that in the event of an outage the customer will be refunded the portion of their monthly fee that corresponds to the duration of the outage. Be realistic about the actual effect of such a provision. The refund might be insignificant compared to the inconvenience, loss of business, and loss of data availability. For example, what would you do if you were in the middle of a trial or closing an acquisition, and suddenly the needed data were not available due to an outage or other force majeure event?

Security, Security Breaches

Ensure that the data will be appropriately protected from unauthorized access or modification. Specific steps may be required, such as the installation of firewalls, access limitations, encryption, strong passwords or other authentication measures, and an electronic audit trail to monitor access to data. Ensure that you are informed of any security breaches that affect the data that your law firm uploads to the cloud. You may have a legal and/or ethical obligation to inform your clients and the regulators about an incident affecting these data. Negotiate compensation or indemnification by the service provider if the breach is caused by the cloud provider, either affirmatively or through its own negligence or failure to maintain agreed-upon safeguards or reasonable security measures.

Data Ownership

Beware of obscure or confusing clauses that might give the cloud provider ownership of data stored in its services, or the metadata associated with the access to or processing of your law firm’s or clients’ data. Ensure that the contracts with the service provider(s) acknowledge that the data are owned by the law firm and/or its client, and not by the cloud provider.

Exit Strategy

Anticipate the need to terminate the service. Have an exit strategy in place so that the law firm may change its provider when it becomes necessary or desirable to do so.

Training

Train your own staff and professionals who will use the cloud service or products, and obtain their written agreement to comply with your security measures and those that are recommended by the cloud provider such as the use of strong passwords, and the prohibition of sharing passwords.


There is no doubt that cloud computing is here to stay and that companies will gradually move most of their data to the cloud. However, transferring the physical custody of one’s data to a third party does not relieve an organization of its legal obligations to protect these data, ensure adequate security and integrity, limit their use to specific purposes, or ensure their availability. Thus, any company should carefully consider the pros and cons, as well as the consequences, of the use of cloud services. For lawyers and law firms, these concerns are compounded by others that stem from the specific ethical rules that govern the profession. Before venturing into the cloud, lawyers and law firms must evaluate the effect of the relevant rules of ethics to which they are subject, identify the categories of data that may be processed or stored in the cloud, and take other necessary measures to ensure that they will be able to fulfill all of their legal and ethical duties to their clients.


How to address cybersecurity threats in medical devices

Posted by fgilbert on June 24th, 2013

The FDA has published for comment a draft guidance that is intended to assist the health industry in identifying and addressing cybersecurity threats in medical devices. Indeed, medical devices are frequently used to collect patients’ vital signs. The information is then transferred to a database within the medical office or in the cloud for further processing. For instance, a diabetic patient may be equipped with a device that collects blood samples and sends the information to a cloud-based service that makes a diagnosis, determines the right dosage of a drug, and sets the time at which the dosage should be administered to the patient.

To accomplish this, the medical device takes advantage of wireless, network, and Internet connections in order to exchange medical device-related health information collected from patients with a remote service or practitioner. The transmittal of patient information to remote computing facilities and its storage in a cloud can cause significant cybersecurity concerns. The interception and unauthorized use, modification or deletion of critical patient information could have deadly consequences.

The draft guidance provides recommendations to consider and identifies documentation to be provided in FDA medical device premarket submissions in order to assure effective cybersecurity management and reduce the risk of compromise. Not surprisingly, the guidance recommends that engineers and manufacturers develop security controls to maintain the confidentiality, integrity, and availability of the information collected from the patient and transmitted to the medical cloud that allows the storage and processing of the information.

The draft guidance suggests the use of “cybersecurity by design”, a concept similar to that of “privacy by design,” to bake into the design of the medical devices and the equipment connected to these devices, the much-needed security features that could ensure more robust and efficient mitigation of cybersecurity risks.

The proposed guideline outlines the steps to be used for this purpose and stresses the importance of documenting the different steps taken:

  • Conduct a risk analysis and develop a management plan as part of the risk analysis;
  • Identify the assets at risk, the potential threats to these assets and the related vulnerabilities;
  • Assess the impact of the threats and vulnerabilities on the device functionality;
  • Assess the likelihood that a vulnerability might be exploited;
  • Determine the risk levels and suitable mitigation strategies;
  • Assess residual risk, and define risk acceptance criteria.
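The steps above follow a classic risk-matrix workflow. As a purely illustrative sketch (the asset names, 1–5 scales, and acceptance threshold below are hypothetical examples, not values taken from the FDA draft guidance), the triage logic can be expressed as:

```python
from dataclasses import dataclass


@dataclass
class Threat:
    """A threat to an identified asset, scored for a simple risk matrix."""
    asset: str       # asset at risk, e.g. a data feed or update channel
    impact: int      # impact on device functionality, 1 (low) to 5 (high)
    likelihood: int  # likelihood of exploitation, 1 (rare) to 5 (likely)

    @property
    def risk_level(self) -> int:
        # Common risk-matrix convention: risk = impact x likelihood
        return self.impact * self.likelihood


def triage(threats, acceptance_threshold=6):
    """Split threats into those needing mitigation and acceptable residual risk."""
    mitigate = [t for t in threats if t.risk_level > acceptance_threshold]
    accept = [t for t in threats if t.risk_level <= acceptance_threshold]
    return mitigate, accept


threats = [
    Threat("wireless firmware update channel", impact=5, likelihood=3),
    Threat("local diagnostic USB port", impact=2, likelihood=2),
]
mitigate, accept = triage(threats)
```

In this toy example, the update channel (risk 15) exceeds the threshold and requires a mitigation strategy, while the USB port (risk 4) falls within the defined risk acceptance criteria as residual risk. A real submission would of course document the analysis itself, not merely the scores.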

As always, the issue is one of balance: balancing the universe of threats against the probability of a security breach. Factors to be taken into account include the type of medical device, the environment in which it is used, the type and probability of the risks to which it is exposed, and the probable risks to patients from a security breach. In addition, the guidance recommends that manufacturers carefully consider the balance between cybersecurity safeguards and the usability of the device in its intended environment of use (e.g., home use vs. healthcare facility use) to ensure that the security capabilities are appropriate for the intended users.

The FDA draft guidance recommends that medical device manufacturers should be prepared to provide justification for the security features chosen and consider appropriate security controls for their medical devices including, but not limited to:

  • Limit access to trusted users only;
  • Ensure trusted content;
  • Use fail-safe and recovery features.

The proposed guidance also identifies the type of documentation that should be developed in preparation for premarket submission filed with the FDA. This information includes:

  • Hazard analysis, mitigations, and design considerations pertaining to intentional and unintentional cybersecurity risks associated with the device;
  • Traceability matrix that links the cybersecurity controls to the cybersecurity risks that were considered;
  • Systematic plan for providing validated updates and patches to operating systems or medical device software;
  • Documentation to demonstrate that the device will be provided to purchasers and users free of malware; and instructions for use and product specifications related to recommended antivirus software and/or firewall use appropriate for the environment of use.

The Draft Guidance is available at


Comments Off on How to address cybersecurity threats in medical devices

New York State launches investigation of top insurance companies’ cybersecurity practices. Who’s next?

Posted by fgilbert on June 4th, 2013

The State of New York has launched an inquiry into the steps taken by the largest insurance companies to keep their customers and companies safe from cyber threats. This is the second inquiry of this kind. Earlier this year, a similar investigation targeted the cybersecurity practices of New York-based financial institutions.

On May 28, 2013, the New York Department of Financial Services (DFS) issued letters pursuant to Section 308 of the New York Insurance Law (“308 Letters”) to 31 of the country’s largest insurance companies, requesting information on the policies and procedures they have in place to protect health, personal and financial records in their custody against cyber attacks.

Among other things, the 308 Letters request:

  • Information on any cyber attacks to which the company has been subject in the past three years;
  • The cyber security safeguards that the company has put in place;
  • The company’s information technology management policies;
  • The amount of funds and other resources that are dedicated to cyber security;
  • The company’s governance and internal control policies related to cybersecurity.

The insurance companies will have a short period to respond to the questionnaire. For further detail, see the press release of the New York Governor’s Office.

It is not clear what the State of New York will do with the information collected from the responses to this inquiry, but this initiative is likely to be followed with great interest by other state insurance and financial industry regulators. Indeed, both insurance and financial services institutions collect, process and retain a significant amount of highly sensitive personal information about prospective, current and past customers.

Companies in the insurance or financial services sectors, as well as their respective service providers, should take the time to review their risk assessments, policies and procedures, especially with a focus on evaluating whether they adequately address known vulnerabilities, meet the current “best practices” standards, and are keeping up with the most recent technologies and forms of cyber attacks.

Hot Issues in Data Privacy and Security

Posted by fgilbert on April 22nd, 2013

Data privacy and security issues, laws and regulations are published, modified and superseded at a rapid pace around the world. The past ten years, in particular, have seen a significant uptick in the number of laws and regulations that address data privacy or security on all continents. On March 1, 2013, a program held at Santa Clara University’s Markkula Center for Applied Ethics, titled “Hot Issues in Global Privacy and Security”, featured attorneys practicing on all continents who provided an update of the privacy, security and data protection laws in their respective countries.

The second half of the program featured a panel moderated by Francoise Gilbert, where the chief privacy counsel of McAfee, Symantec and VMWare talked about how to drive a global privacy and security program in multinational organizations.

Videos of the program are available by clicking here.

The program was the second part of a two-day series of events. The first event was held in San Francisco on February 28, 2013, and was sponsored by Box, Inc. and the Cloud Security Alliance. This program focused on US and foreign government access to cloud data and started with an overview of the laws that regulate US government access to data, presented by Francoise Gilbert. A panel featuring European and North American attorneys followed; they discussed the equivalent laws in effect in their respective countries. The program concluded with a presentation by the general counsel of Box, Inc., who spoke about the way in which his company responds to government requests for access to stored data.

Videos of the program are available by clicking here.

Comments Off on Hot Issues in Data Privacy and Security

Article 29 Working Party’s Opinion on Mobile App Privacy

Posted by fgilbert on March 15th, 2013

On March 14, 2013, the European Union’s Article 29 Working Party published its opinion on the unique privacy and data protection issues faced by applications used on mobile devices. The 30-page opinion provides an analysis of the technical and legal issues, and concludes with a series of recommendations to app developers, platform developers, equipment manufacturers and third parties.

In many respects, this new opinion of the Article 29 Working Party is very similar to the document that the Federal Trade Commission published recently on the same topic. It addresses many themes also found in the FTC documents regarding the use of mobile applications in general, and mobile applications directed at children in particular.

The Article 29 Opinion WP 202 provides two series of recommendations for application developers. The first set of recommendations is, in fact, a recitation of general principles set forth in the proposed Data Protection Regulation, adapted to the specific context of the mobile world, with references to location data, unique device identifiers, and SMS. There are also references to other modern concepts, such as privacy by design, likewise found in the proposed Data Protection Regulation but absent from Directive 95/46/EC, the directive currently in effect.

The second set of recommendations to application developers includes specific guidance on the actions to be taken.  These include:

  • Adopting appropriate measures that address the risks to the data;
  • Informing users about security breaches;
  • Telling users what types of data are collected or accessed on the device, how long the data are retained, and what security measures are used to protect these data;
  • Developing tools to enable users to decide how long their data should be retained, based on their specific preferences and contexts, rather than offering pre-defined retention terms;
  • Including information in their privacy policy dedicated to European users;
  • Developing and implementing simple but secure online access tools for users, without collecting additional excessive personal data;
  • Developing, in cooperation with OS and device manufacturers and others, innovative solutions to adequately inform users on mobile devices, such as through layered information notices combined with meaningful icons.

The remainder of the recommendations is addressed to app stores, OS and device manufacturers, and third parties.

The protection of children reappears as a common theme in the recommendations to the different players in the mobile market. Each set of recommendations in WP 202 stresses that these players should limit their collection of information from children, refrain from processing children’s data for behavioral advertising purposes, and refrain from using their access to a child’s account to collect data about the child’s relatives or friends.

Comments Off on Article 29 Working Party’s Opinion on Mobile App Privacy

FTC v. Google V2.0 – Lessons Learned

Posted by fgilbert on August 13th, 2012

The Federal Trade Commission has published its long-awaited Proposed Consent Order with Google to close its second investigation into Google’s practices (Google 2). Under the proposed document, Google would agree to pay a record $22.5 million civil penalty to settle charges that it misrepresented to users of Apple’s Safari browser that it would not place tracking cookies on their browser or serve targeted ads. It would also have to disable all tracking cookies that it had said it would not place on consumers’ computers, and report to the FTC by March 8, 2014 on how it has complied with this remediation requirement.

Google 2 Unique Aspects

Unlike most consent orders published by the FTC, the Google 2 Consent Order does not primarily address actual violations of privacy promises. Rather, it addresses the fact that Google’s activities allegedly violated a prior settlement with the FTC, dated October 2011 (Google 1).

As such, beyond evidencing the FTC’s ongoing efforts to ensure that companies live up to the privacy promises that they make to consumers, Google 2 clearly shows that the FTC takes seriously the commitments that it requires from companies that it has previously investigated. When an FTC consent decree requires a 20-year commitment to abide by certain practices, the FTC may, indeed, return and ensure that the obligations outlined in the consent decree are met.

Privacy Promises are made everywhere

A significant aspect of the proposed Google 2 Consent Order and related Complaint is that privacy promises are made in numerous places beyond a company’s online privacy statement. They are also found in other representations made by the company, such as in its regulatory filings or in its marketing or promotional documents. In the Google 1 enforcement action, the FTC looked at the promises and representations made in Google’s Safe Harbor self-certification filings. In the Google 2 enforcement action, the FTC looked at the promises and representations made in Google’s statements that it complied with the Self-Regulatory Code of Conduct of the Network Advertising Initiative (NAI).

Misrepresentation of compliance with NAI Code

In the third count of the FTC Complaint in Google 2, the FTC focuses on Google’s representation that it adheres to, or complies with the NAI Self-Regulatory Code of Conduct. The alleged violation of this representation allows the FTC to claim that Google violated its obligation under Google 1 to not “misrepresent the extent to which it complies with, or participates in, a privacy, security, or other compliance program sponsored by the government or any other entity”.

Evolution of the FTC Common Law

Google 2 shows a clear evolution of the FTC “Common Law” of Privacy. As the concept of privacy compliance evolves, the nature of the FTC’s investigations becomes more refined and more expansive. In its prior cases, the FTC first focused on violations of companies’ privacy promises made in their public Privacy Statements. Then, more recently, in several consent orders – including Google 1 – the FTC expanded the scope of its enforcement actions to include violations of the Safe Harbor Principles outlined by the US Department of Commerce and the EU Commission. Now, with Google 2, the FTC again expands the scope of its enforcement actions to include potential violations of representations of compliance with the NAI Self-Regulatory Code of Conduct. This trend is likely to continue, and in future cases we should expect the FTC to expand its investigations to verify compliance with a company’s statements that it follows other self-regulatory industry standards.

What Consequences for Businesses?

Companies often use their membership in industry groups or privacy programs as a way to show their values and to express their commitment to certain standards of practice. This was the case for Google with the Safe Harbor program of the US Department of Commerce and the European Union (Google 1), and with the Network Advertising Initiative (Google 2).

These promises to comply with the rules of a privacy program are not just statements made for marketing purposes. The public reads them, and so do the FTC and other regulators.

Privacy programs such as the Safe Harbor or the NAI Code have specific rules. As shown in the Google 1 and Google 2 cases, failure to comply with the rules, principles and codes of conduct associated with membership in these programs could be fatal.

If the disclosures made are not consistent with the actual practices and procedures, the deficiency would expose the company to claims of unfair and deceptive practices – or, in the case of Google, to substantial fines for failure to comply with an existing consent decree barring future misrepresentation.

If your company makes promises or statements about its privacy – or security – practices, remember, and remind your staff, that these representations may have significant consequences and may create a minefield if not attended to properly:

  • Look for these representations everywhere, and not just in the official company Privacy Statement; for example, look at the filings and self-certification statements, the cookie disclosures, the marketing or sales material, the advertisements;
  • Periodically compare ALL promises that your business makes with what each of your products, services, applications, technologies, devices, cookies, tags, etc. in existence or in development actually does;
  • Educate your IT, IS, Marketing, Communications, Sales, and Legal teams about the importance of working together and coordinating efforts, so that those who develop statements and disclosures about the company’s policies and values fully understand, and are aware of, all features and capabilities of the products or services that others in the company are designing and developing;
  • If your company claims that it is a member of a self-regulatory or other privacy compliance program, make sure that you understand the rules, codes of conduct or principles of these programs or industry standards; and ensure that the representations of your company’s compliance with these rules, codes of conduct, principles are accurate, clear and up-to-date;
  • Ensure that ALL of your company’s products and services comply and are consistent with ALL of the promises made by, or on behalf of, the company in ALL of its statements, policies, disclosures, and marketing materials, and at ALL times.
Posted in Best Practices, FTC
Comments Off on FTC v. Google V2.0 – Lessons Learned