
Proposed Principles for Artificial Intelligence Published by the White House

Posted by fgilbert on January 19th, 2020

A draft memorandum outlining a proposed Guidance for Regulation of Artificial Intelligence Applications (“Memorandum”) for agencies to follow when regulating, and when taking non-regulatory actions affecting, artificial intelligence was published by the White House on January 7, 2020. The proposed document addresses an objective identified in Executive Order 13859 on Maintaining American Leadership in Artificial Intelligence (“Executive Order 13859”), published by the White House in February 2019.

The Memorandum sets out policy considerations that should guide oversight of artificial intelligence (AI) applications developed and deployed outside the Federal government. It is intended to inform the development of regulatory and non-regulatory approaches regarding technologies and industrial sectors that are empowered or enabled by artificial intelligence, and to consider ways to reduce barriers to the development and adoption of AI technologies.

Principles for the Stewardship of AI Applications

The Memorandum sets forth ten proposed principles:

  • Public trust in AI
  • Public participation in all stages of the rulemaking process
  • Scientific integrity and information quality
  • Consistent application of risk assessment and management
  • Maximizing benefits and evaluating the risks and costs of not acting
  • Flexibility to adapt to rapid change
  • Fairness and non-discrimination in outcomes
  • Disclosure and transparency to ensure public trust
  • Safety and security
  • Interagency cooperation

Details on each of these principles are provided below.

  1. Public Trust in AI.

Government regulatory and non-regulatory approaches to AI should promote reliable, robust and trustworthy AI applications that contribute to public trust in AI.

  2. Public Participation.

Agencies should provide opportunities for the public to provide information and participate in all stages of the rulemaking process. To the extent practicable, agencies should inform the public and promote awareness and widespread availability of standards, as well as the creation of other informative documents.

  3. Scientific Integrity and Information Quality.

Agencies should hold information that is likely to have a substantial influence on important public policy or private-sector decisions governing the use of AI to high standards of quality, transparency, and compliance. They should develop regulatory approaches to AI in a manner that informs policy decisions and fosters public trust in AI. Suggested best practices include: (a) transparently articulating strengths, weaknesses, and intended optimizations or outcomes; (b) mitigating bias; and (c) ensuring appropriate uses of the results of AI applications.

  4. Risk Assessment and Management.

The fourth principle cautions against an unduly conservative approach to risk management. It recommends the use of a risk-based approach to determine which risks are acceptable and which risks present the possibility of unacceptable harm, or harm whose expected costs outweigh expected benefits. It also recommends that agencies be transparent about their evaluation of risks.

  5. Benefits and Costs.

The fifth principle provides that agencies should consider the full societal costs, benefits, and distributional effects before considering regulations related to the development and deployment of an AI application. Agencies should also consider critical dependencies when evaluating AI costs and benefits because data quality, changes in human processes, and other technological factors associated with AI implementation may alter the nature and magnitude of risks.

  6. Flexibility.

When developing regulatory and non-regulatory approaches, agencies should pursue performance-based and flexible approaches that can adapt to rapid changes and updates to AI applications. Agencies should also keep in mind international uses of AI.

  7. Fairness and Non-Discrimination.

Agencies should consider whether AI applications produce discriminatory outcomes as compared to existing processes, recognizing that AI has the potential to reduce present-day discrimination caused by human subjectivity.

  8. Disclosure and Transparency.

The eighth principle notes that transparency and disclosure may increase public trust and confidence. These disclosures may include identifying when AI is in use, for instance, where appropriate to address questions about how an application affects human end users. Further, agencies should carefully consider the sufficiency of existing or evolving legal, policy, and regulatory environments before contemplating additional measures for disclosure and transparency.

  9. Safety and Security.

Agencies are encouraged to promote the development of AI systems that are safe, secure, and operate as intended, and to encourage the consideration of safety and security issues throughout the AI design, development, deployment, and operation process. Particular attention should be paid to the controls in place to ensure the confidentiality, integrity, and availability of the information processed, stored, and transmitted by AI systems. Further, agencies should give additional consideration to methods for guaranteeing systemic resilience and for preventing bad actors from exploiting AI system weaknesses, including cybersecurity risks posed by AI operation and adversarial use of AI against a regulated entity’s AI technology.

  10. Interagency Cooperation.

Agencies should coordinate with each other to ensure consistency and predictability of AI-related policies that advance innovation and growth in AI, while appropriately protecting privacy and civil liberties and allowing for sector- and application-specific approaches when appropriate.

Non-Regulatory Approaches to AI

The Memorandum recommends that an agency consider taking no action, or consider non-regulatory approaches, when it determines, after evaluating a particular AI application, that existing regulations are sufficient or that the benefits of a new regulation do not justify its costs. Examples of such non-regulatory approaches include: (a) sector-specific policy guidance or frameworks; (b) pilot programs and experiments; and (c) the development of voluntary consensus standards.

Reducing Barriers to the Development and Use of AI

The Memorandum points out that Executive Order 13859 on Maintaining American Leadership in Artificial Intelligence instructs OMB to identify means to reduce barriers to the use of AI technologies in order to promote their innovative application while protecting civil liberties, privacy, American values, and United States economic and national security. The Memorandum provides examples of actions that agencies can take, outside the rulemaking process, to create an environment that facilitates the use and acceptance of AI. One such example is agency participation in the development and use of voluntary consensus standards and conformity assessment activities.

Next Steps

The Memorandum points out that Executive Order 13859 requires implementing agencies to review their authorities relevant to AI applications and to submit plans to OMB on achieving the goals outlined in the Memorandum within 180 days of the issuance of its final version. In this respect, each agency plan will have to:

  • Identify any statutory authorities specifically governing agency regulation of AI applications;
  • Identify collections of AI-related information from regulated entities;
  • Describe any statutory restrictions on the collection or sharing of information (such as confidential business information, personally identifiable information, protected health information, law enforcement information, and classified or other national security information);
  • Report on the outcomes of stakeholder engagements that identify existing regulatory barriers to AI applications and high-priority AI applications; and
  • List and describe any planned or considered regulatory actions on AI.

Conclusion

This draft guidance defines a concrete structure for outlining regulatory and non-regulatory approaches to AI. Businesses should evaluate the extent to which their own AI strategies address the ten principles.

In addition, since the development of AI strategies is likely to have global consequences, they should also take into account similar initiatives that have been developed elsewhere around the world, such as by the OECD (with the “OECD Recommendation on Artificial Intelligence”), the European Commission (through its “Ethics Guidelines for Trustworthy Artificial Intelligence”) or at the country level, for example in France (with the “Algorithm and Artificial Intelligence: CNIL Report on Ethics Issues”).


My last day at Greenberg Traurig

Posted by fgilbert on July 31st, 2019

After nearly four wonderful years, today is my last day at the firm. I have enjoyed my time there, and the ability to work with many great lawyers and professionals.

I am taking off for new adventures in privacy and cybersecurity in the US and throughout the world. I look forward to interacting or working with you in one of my new capacities.

Please come back on this site to hear about my new projects, and of course, to read about new legal developments in the areas of Data, Privacy and Cybersecurity.


The FTC Recommends Data Broker Legislation

Posted by fgilbert on May 27th, 2014

The Federal Trade Commission (FTC) is calling for legislation to shed some light on data brokers’ practices and give consumers some control over the use of their personal information. In its 110-page report, “Data Brokers: a Call for Transparency and Accountability”, published on May 27, 2014, the FTC outlines the content of legislation that it is recommending to enable consumers to learn of the existence and activities of data brokers and to have reasonable access to information about them held by data brokers.

This report is the result of an 18-month study of the practices of nine data brokers – Acxiom, CoreLogic, Datalogix, eBureau, ID Analytics, Intelius, PeekYou, Rapleaf, and Recorded Future. The study started in December 2012, when the FTC served orders on these data brokers, requiring them to provide information about their collection and use of consumers’ personal data.

In its Data Broker Report, the FTC observes that data brokers collect and store billions of data elements covering nearly every U.S. consumer. The extent of consumer profiling is such that these data brokers know the minute details of consumers’ everyday lives, such as income, socioeconomic status, political and religious affiliations, online and in-store purchases, social media activity, and magazine subscriptions. The ability to create such detailed and precise profiles creates significant privacy concerns. According to the FTC Data Broker Report, one of the data brokers studied holds 700 billion data elements, and another adds more than 3 billion new data points to its database each month.

In most cases, data is collected behind the scenes, without consumer knowledge. The FTC Data Broker Report notes that personal data often passes through multiple layers of data brokers who share data with each other. Data brokers combine online and offline data, which may result in potentially sensitive inferences such as those related to ethnicity, income, religion, political leanings, age, or health conditions, such as pregnancy, diabetes, or high cholesterol. Many of the purposes for which data is collected pose risks to consumers, such as unanticipated secondary uses of the data. For instance, data collected to offer discounts to potential purchasers of motorcycles could also be interpreted by an insurance provider as a sign of risky behavior, resulting in an increase in life insurance premiums. Some data brokers unnecessarily store data about consumers indefinitely, which may create security risks in addition to the privacy risks described above.

The FTC Data Broker Report recommends that Congress enact legislation to require the following:

For brokers that provide marketing products:

  • The creation of a centralized mechanism, such as an Internet portal, where data brokers can identify themselves, describe their information collection and use practices, and provide links to access and opt-out tools;
  • Data brokers to give consumers access to their data, including any sensitive data, at a reasonable level of detail;
  • Data brokers to inform consumers that they derive certain inferences from raw data;
  • Data brokers to disclose the names and/or categories of their data sources, to enable consumers to correct wrong information with the original source;
  • Consumer-facing entities (e.g., retailers) to provide prominent notice to consumers when they share information with data brokers, along with the ability to opt-out of such sharing; and
  • Consumer-facing entities to obtain consumers’ affirmative express consent before collecting and sharing sensitive information with data brokers.

For brokers that provide “risk mitigation” products:

  • When a consumer-facing company uses a data broker’s risk mitigation product to assist in the decision-making process, that company would have to identify the information on which it relied when it decided to limit a consumer’s ability to complete a transaction;
  • Data brokers to allow consumers to access the information used and to correct it, as appropriate.

For brokers that provide “people search” products:

  • Data brokers to allow consumers to access their own information;
  • Data brokers to allow consumers to opt-out of having the information included in a people search product;
  • Data brokers to disclose the original sources of the information so consumers can correct it;
  • Data brokers to disclose any limitations of an opt-out feature. 

What the FTC Data Broker Report means for data brokers and others

For the past few years, the Federal Trade Commission has monitored, and attempted to guide, online behavioral advertising and behavioral targeting. However, while it has repeatedly requested that the advertising industry self-regulate its practices, it has not suggested, much less outlined, proposed legislation.

With its 18-month evaluation of the data broker industry, and the issuance of its Data Broker Report on May 27, 2014, the Federal Trade Commission increases the pressure. This time, without asking for self-regulation, the FTC calls directly for legislation requiring transparency and accountability from data brokers and the availability of access and correction rights for consumers. This is an important step, which may also provide guidance in related areas.

In its Data Broker Report, the Federal Trade Commission limited the scope of its initiative to the use of big data by data brokers, i.e. entities that collect and process data for resale or licensing purposes. It did not address the use of big data by non-brokers – entities that are using the new, sophisticated tools available from big data technologies to mine a wide range of data about their own customers that they have accumulated over the years. While limiting its focus to a segment of the big data users, the FTC made a powerful call for legislation, and provided very specific direction on the principles that should be addressed in that legislation.

The FTC Data Broker Report is a major milestone compared with the recent White House Big Data Report (May 2014). That report suggested legislation based on the White House Consumer Privacy Bill of Rights (February 2012), but it neither identified with specificity the elements that such legislation should address or contain, nor explained how legislation based on the Consumer Privacy Bill of Rights would address the specific and unique issues raised by data brokers’ use of big data technologies and techniques.

The FTC Data Broker Report, on the other hand, provides a blueprint for legislation that focuses on the unique issues raised by the massive collection of personal data. The principles outlined by the FTC are more directly usable, more practicable, and more pragmatic. They are also better adapted to the idiosyncrasies of the world of data brokers, where all uses of data are secondary uses that were not anticipated – and probably not disclosed – in the privacy disclosures of the customer-facing companies that collected the data in the first place. Thus, it would be much easier to act upon this call for action and draft legislative text.

It should be further noted that, while the FTC Data Broker Report is limited to a specific market, the ideas that it submits to U.S. legislators could easily be expanded or extrapolated to all users of big data, i.e., entities other than data brokers that use big data techniques and massive computing power for their internal purposes. Thus, entities other than data brokers that process large amounts of data with the intent of producing personal profiles or inferring personal interests, practices, or other characteristics of individuals should consider evaluating the guidance provided in the FTC Data Broker Report, in addition to that provided in the White House Big Data Report, when trying to anticipate the direction that laws, regulations, and enforcement might take in the next few years with respect to secondary uses of personal data.

The FTC Data Broker Report is published at: http://www.ftc.gov/news-events/press-releases/2014/05/ftc-recommends-congress-require-data-broker-industry-be-more



Internet of Things: Significant Privacy & Security Issues

Posted by fgilbert on January 9th, 2014

The Internet of Things has the potential to transform many fields, including home automation, medicine, and transportation. It will connect more things and more people to the Internet, and ultimately, connect more people with each other. Our devices will know, more than ever, who we are, what we pay, what we sign up for, or with whom we interact. As a result, one of the significant issues raised by the Internet of Things is consumer privacy and data security.

Because interconnected devices and services often collect and share large amounts of personal information, companies offering products as part of the Internet of Things must ensure that they safeguard the privacy and security of users. Policymakers and members of the technology community must also be sensitive to consumer privacy and data security issues.

The recent Federal Trade Commission action against TRENDnet provides a vivid example of the potential mishaps that can occur when proper privacy and security measures are missing. TRENDnet sold its Internet-connected SecurView cameras for purposes ranging from home security to baby monitoring. Defective software allowed unfettered online viewing, and in some instances listening, by anyone with the camera’s IP address. As a result, hackers posted live feeds of nearly 700 consumer cameras on the Internet, showing activities such as babies asleep in their cribs and adults going about their daily lives. In addition, TRENDnet transmitted user login credentials in clear, readable text over the Internet.

The Federal Trade Commission charged that TRENDnet’s lax security practices exposed the private lives of hundreds of consumers to public viewing on the Internet and found that TRENDnet’s practices were deceptive and unfair. Among other things, the settlement requires TRENDnet to establish a comprehensive information security program and to obtain third-party assessments of its security programs every two years for the next 20 years. TRENDnet must also notify customers about the security issues with the cameras and the availability of the software update to correct them, and provide free technical support for the next two years to assist customers in updating or uninstalling their cameras.

Mobile devices and wearable devices play an important role in the Internet of Things, as well. They collect, analyze, and share information about users and their environment, such as their current location, travel pattern, speed, or the noise levels in their surroundings. They allow users to connect with each other in all sorts of settings, and share – knowingly, or not – a wide variety of information among themselves and with the service provider.

Mobile app providers have an obligation to inform their customers about their collection and use of personal information. This is specifically required by the California Online Privacy Protection Act, and the Federal Trade Commission takes the same position. In February 2013, the Federal Trade Commission investigated the practices of Path, a social network that allows users to keep journals about moments in their lives and share them with up to 150 friends.

In its complaint against Path, the FTC identified circumstances where Path deceived users by collecting personal information, such as information from their address books, without the users’ knowledge or consent. The FTC concluded that the collection of personal information from a mobile phone without disclosure or permission may be a deceptive or unfair practice under the FTC Act. The final consent decree requires Path to establish a comprehensive privacy program and obtain independent privacy assessments every other year for the next 20 years. Path will also have to pay a fine of U.S. $800,000 to settle charges that it illegally collected personal information from children without their parents’ consent.

This case has obvious implications for other Internet-connected devices that collect personal information about users. Such technologies should include some way to notify users and obtain their permission. This raises questions of how businesses should convey, on the small phone screen, information about what data, sometimes of a highly sensitive nature, these devices and apps collect, use, and share.

Providing notice to consumers may be complicated in the case of devices with a limited or no user interface. Activity trackers have only very basic user interfaces on the device itself. Smart light bulbs may not have any consumer-facing user interface. Similar issues arise with wearable devices, such as smart watches, wristbands or glasses. Addressing consumers’ privacy concerns over such devices will present business, engineering, and policy challenges that will require constant innovation.

The Internet has evolved to one of the most dynamic forces in the global economy. It is reshaping entire industries and changing the way we interact on a personal level. The Internet of Things promises even greater progress, but raises significant information privacy and security issues.


Hot Issues in Data Privacy and Security

Posted by fgilbert on April 22nd, 2013

Data privacy and security issues, laws, and regulations are published, modified, and superseded at a rapid pace around the world. The past ten years, in particular, have seen a significant uptick in the number of laws and regulations that address data privacy or security on all continents. On March 1, 2013, a program held at Santa Clara University’s Markkula Center for Applied Ethics, titled “Hot Issues in Global Privacy and Security”, featured attorneys practicing on all continents who provided an update on the privacy, security, and data protection laws in their respective countries.

The second half of the program featured a panel moderated by Francoise Gilbert, where the chief privacy counsel of McAfee, Symantec and VMWare talked about how to drive a global privacy and security program in multinational organizations.

Videos of the program are available by clicking here.

The program was the second part of a two-day series of events. The first event was held in San Francisco on February 28, 2013, and was sponsored by Box, Inc. and the Cloud Security Alliance. This program focused on US and Foreign Government Access to Cloud Data and started with an overview of the laws that regulate US government access to data, presented by Francoise Gilbert. A panel featuring European and North American attorneys followed; they discussed the equivalent laws in effect in their respective countries. The program concluded with a presentation by the general counsel of Box, Inc., who spoke about the way in which his company responds to government requests for access to stored data.

Videos of the program are available by clicking here.


USA Patriot Act Effect on Cloud Computing Services

Posted by fgilbert on December 11th, 2012

Recent reports and press articles, with attention grabbing headlines, have expressed concern, and at times asserted, that the U.S. government has the unfettered ability to obtain access to data stored outside the United States by U.S. cloud service providers or their foreign subsidiaries. They point to the USA PATRIOT Act (“Patriot Act”) as the magic wand that allows U.S. law enforcement and national security agencies unrestricted access to any data, anywhere, any time. In fact, the actual impact of the Patriot Act in this cloud context is negligible.

To the extent that the U.S. law enforcement or national security agencies can access data held in the cloud or elsewhere, it is not through the Patriot Act but through decades-old laws and judicial decisions. For more than 40 years, government access to personal data and communications in the context of national security and law enforcement matters has been regulated by a wide range of federal and state laws. These laws were enacted long before the passage of the Patriot Act, and have been amended further since then. These laws are not so different from those that are in effect elsewhere. Most other nations have in place comparable provisions for access to data in the context of national security or law enforcement. Others do not, and in this case, their governments have unrestricted powers to access any data anywhere from anyone.

This article will examine the actual role and effect of the Patriot Act, and briefly describe some of the U.S. laws that govern access to data by the U.S. law enforcement, national security, and intelligence services. A subsequent article will address how other countries, in Europe and elsewhere, regulate access to data by their respective governmental entities in similar circumstances.


Only a Series of Amendments

Contrary to press reports, the Patriot Act is not “the” US law that governs the rules for access to data or communications by law enforcement and national security agencies. Signed into law in 2001 after the September 11 attacks, the USA PATRIOT Act (an acronym for Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act) is primarily a combination of amendments to existing laws that were enacted in the 1970s and 1980s.

The amendments brought in by the Patriot Act were designed to make it easier for the U.S. law enforcement and national security agencies, in the context of criminal investigations, to conduct surveillance and access data for the purpose of preventing, detecting, and investigating crimes and terrorist acts. For example, previously, if law enforcement needed to have access to data held by communication providers in multiple states, it had to seek separate search warrants from separate judges. The Patriot Act amendments allowed for this type of investigation to require only one search warrant to be obtained from one federal judge. This change streamlined the process for U.S. government searches in certain cases, but did not affect the underlying laws, regulations, and prior court decisions pertaining to government requests for access to data.


Rules for Government Access to Data

It is not as easy as the press depicts it for U.S. prosecutors, law enforcement, or national security agencies to gain access to data, information, documents, or premises owned or controlled by private entities, enterprises, financial institutions, and the like. Numerous rules govern the circumstances and manner in which state or federal government agents may act and collect the evidence that they are seeking. In addition, other rules govern the use of this evidence. Evidence may be admitted in court only if it has been legally collected in accordance with applicable laws.

At the federal level, the basic rule written in the 4th Amendment to the U.S. Constitution grants individuals the right to be secure from unreasonable searches and seizures. In addition, several federal laws, such as the Wiretap Act, Stored Communications Act, Pen Register Act, Foreign Intelligence Surveillance Act, Communications Assistance to Law Enforcement Act, or the Economic Espionage Act define specific rules. A similar regime exists under state law. Most U.S. states have general surveillance laws as well as specific laws, such as laws that govern the use of RFID technologies for surveillance purposes.

These laws may depend on the nature of the information to be retrieved and the purpose for which it is retrieved. For example, the Wiretap Act pertains to access to data in transit, whereas the Stored Communications Act pertains to access to data in storage. There are different provisions for access to content (e.g., the actual message or communication) as opposed to access to non-content (e.g., the identity of the sender or recipient, or time of the call or communication). The law may distinguish whether the person being investigated is a U.S. citizen or resident, or, instead an “agent of a foreign power,” as is the case under the Foreign Intelligence Surveillance Act.

The laws described above define the specific rules and requirements that must be met for a federal or state investigator to have access to specific data, premises, or equipment where the data is located, and for specific purposes. In most cases, the investigator is required to obtain a subpoena, a court order, or a warrant. In rare cases, it may be possible to have access to data without a subpoena, court order, or warrant; these cases are specifically identified in the applicable law, and are generally associated with extraordinary circumstances and grave hostile acts. There, other types of control and oversight apply.


Stored Communications Act

The rules of the Stored Communications Act are frequently used in the context of access to data stored by cloud service providers. Enacted in 1986, the Stored Communications Act governs access to wire, oral, and electronic communications in storage (as opposed to communications in transit). The law contains general prohibitions against access to these communications, as well as exceptions, such as rules that allow disclosure of these communications by providers of electronic communications services (e.g., Verizon, AT&T). It also contains an exception allowing a governmental entity to obtain access to data stored by communication and computing service providers. These rules are very complex and detailed.

When the data are held by an electronic communications service provider, the rules for obtaining access to the content differ according to the length of time the service provider has held the data. The threshold is 180 days. The requirements are more stringent for access to data held for less than 180 days than for data held longer. This dichotomy was developed in the late 1980s, at a time when the Internet as we know it did not exist, and before servers were commonly used for storage. At that time, it was deemed that a communication that had been stored for 180 days had been abandoned and thus deserved less protection.

When a governmental entity seeks to obtain access to content that an electronic communications service has held in storage for less than 180 days, it must first obtain a search warrant. The standard for obtaining a warrant is very high: the government agent must show that “probable cause” exists, based on his or her personal observation or hearsay information, to show that evidence of a crime would be found in the requested search.

On the other hand, to obtain access to the same content held by the same electronic communications service provider for more than 180 days, a subpoena or court order would suffice. The requirement for a subpoena or a court order is much less stringent than that for a search warrant. However, if the government elects to use a subpoena or a court order, it must give prior notice to the subscriber or customer of that service. If the government wants to avoid providing notification, then a warrant is required.
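The dichotomy described above can be summarized as a simple decision rule. The sketch below is a deliberately simplified Python illustration of the rule as stated in the text, not legal advice; it ignores the ECS/RCS distinction and the many other exceptions discussed next:

```python
def instrument_required(days_in_storage: int, government_gives_notice: bool) -> str:
    """Simplified illustration of the SCA's 180-day rule for content
    held by an electronic communications service (ECS)."""
    if days_in_storage < 180:
        # Content held less than 180 days: a search warrant, supported
        # by probable cause, is required.
        return "search warrant"
    if government_gives_notice:
        # Held more than 180 days, with prior notice to the subscriber:
        # a subpoena or court order suffices.
        return "subpoena or court order"
    # Held more than 180 days, but the government avoids notification:
    # a warrant is again required.
    return "search warrant"

print(instrument_required(30, False))   # search warrant
print(instrument_required(365, True))   # subpoena or court order
```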

This is just an example of the complexity of these rules; they are detailed in lengthy provisions, with numerous exceptions and nuances. For example, the rules described above apply only to “electronic communication services” (“ECS”) (i.e., services that send or receive wire or electronic communications). Different requirements apply to access to data held by “remote computing services” (“RCS”) (i.e., services that provide computer storage or processing services). In this case, the 180-day dichotomy does not apply and the requirements are different. Further, while the rules above would apply to access to “content” (i.e., what was said, what the message was), there are different rules for access to “non-content” (i.e., when the message was sent, from whom, and to whom).

 

Foreign Intelligence Surveillance Act and Amendment

Enacted in 1978, the Foreign Intelligence Surveillance Act (FISA) prescribes procedures for physical searches and electronic surveillance of activities of foreign entities and individuals where a significant purpose of the search or surveillance and the collection of information is to obtain “foreign intelligence information.”

The term “foreign intelligence information” is defined to include information that relates to actual or potential attacks or grave hostile acts of a foreign power or an agent of a foreign power, sabotage, international terrorism, weapons of mass destruction, clandestine intelligence activity by or on behalf of a foreign power, or similar issues.

As with the other laws described in this article, the Patriot Act did not create the FISA; it only amended it. For example, the Patriot Act enlarged the scope of the existing law to apply when “a significant purpose” of the search or surveillance is the collection of foreign intelligence, whereas the scope of FISA was initially limited to searches where “the primary purpose” was the collection of foreign intelligence.

The FISA allows the President of the United States, through the U.S. Attorney General, to authorize electronic surveillance without a court order, for a period of up to one year, in order to acquire foreign intelligence, but only where the surveillance is solely directed at the acquisition of communications between or among foreign powers. The U.S. Attorney General representative must certify, in writing and under oath, that this condition is met and that the proposed procedures satisfy the “minimization procedures” requirement, and must immediately transmit, under seal, a copy of this certification to the FISA Court (or “FISC”), a special court that oversees surveillance activities under the FISA. In all other cases, the government must seek an order from the FISC. The application to conduct the surveillance must set out the facts to support a finding by the FISC judge reviewing the application that there is probable cause to believe that the proposed target is a foreign power or an agent of a foreign power, and must describe the premises or property that is the proposed subject of the search or surveillance.

The FISA was amended in 2008 through the FISA Amendment Act (FAA) to permit the U.S. Attorney General and the Director of National Intelligence to jointly authorize the targeting of non-U.S. persons reasonably believed to be located outside the United States, in order to acquire foreign intelligence information. Targeting under the FAA requires a determination by the U.S. Attorney General and the Director of National Intelligence that exigent circumstances exist because intelligence important to the national security of the United States may be lost.

There are numerous limits to the way in which the targeting may be conducted, and minimization procedures must be used. In addition, the targeting must be conducted in a manner consistent with the Fourth Amendment to the U.S. Constitution, which prohibits unreasonable searches and seizures.

The U.S. government does not have jurisdiction over non-U.S. entities located outside the U.S. territory. The FAA does not grant U.S. governmental entities the right to access servers held outside the United States. It only defines the rules that federal agents must follow to target communications made by non-U.S. persons believed to be located abroad.

 

Annual Reports

The issuance of search warrants or orders allowing access to or interception of communications is highly controlled. Each investigator must provide substantial information to show why the search is needed, and the grounds for believing that the content is relevant or material. In addition, any judge who has issued an order for an interception, or has denied such a request, must annually provide detailed reports on these approvals or denials to the Administrative Office of the United States Courts.

Concurrently, the U.S. Attorney General who made a request for access must also file a report to the courts’ administrative office. This report must contain detailed information about each investigation, including, for example, the number of persons whose communications were intercepted, number of arrests resulting from the interception, or number of convictions. Compilations of the judge reports and U.S. Attorney General reports are prepared annually, and a summary report is provided to Congress. These reports are publicly available for anyone to review and posted on the Internet.

Consequently, investigations are not initiated lightly; having to prepare so many applications, sworn statements, and reports is already a deterrent. In addition, each such investigation is very costly. According to the reports on these investigations filed in 2010, the average cost of an “interception” ranges from $20,000 to over $100,000, with a median around $50,000.

 

U.S. Government Access to Data Outside the U.S.

What happens when an investigation would require access to data held in a foreign country? Generally, a U.S. prosecutor or investigator will not be permitted to conduct an investigation or to interview witnesses abroad. In most cases, the help of the local government will be necessary. To this end, over the years, nations have agreed on a variety of bilateral or multilateral treaties that define how they will cooperate in certain matters.

For example, the U.S. is party to several Mutual Legal Assistance Treaties (MLAT) for the purpose of gathering and exchanging information in an effort to enforce civil or criminal laws. There are numerous MLATs related to police and law enforcement cooperation and MLATs with respect to tax evasion, for example.

In addition, the U.S. is a party to the Council of Europe Convention on Cybercrime, which it ratified in 2007. The Convention governs electronic surveillance, sharing of evidence and computer crime. It allows governments to request and provide mutual assistance in the investigation and prosecution of a number of crimes, such as hacking, unauthorized access to computer systems, child pornography, or copyright infringements.

In some cases, law enforcement may attempt to obtain access to information held abroad by making the request from the U.S. affiliate of a company located abroad that may have custody or control over the documents or information at stake. In the U.S., courts have held that a company with a presence in the U.S. is obligated to respond to a valid demand for information by the U.S. government (made under one of the applicable U.S. laws) so long as the company retains custody or control over the data. The key question is whether the U.S. company does have the required level of “custody or control” to be forced to respond to the government request.

The question whether a U.S. based company has custody or control over data held outside the United States has been the subject of many cases and controversies. The seminal case in this area involves the Bank of Nova Scotia, where a U.S. court required the U.S. branch of the Canadian bank to produce documents that were held in the Cayman Islands for criminal proceedings in the U.S. This principle of extraterritorial reach has been followed elsewhere, for example in Australia. In the 1999 case of the Bank of Valletta PLC vs. National Crime Authority, the Australian branch of a Maltese bank was required to produce documents held in Malta for use in an Australian criminal proceeding.

 

Government Investigations and Privacy

There is an inherent tension between governments’ requests for access to data in the context of criminal investigations or the fight against drugs or terrorism, and the basic rights of individuals to privacy in their homes or their papers. The laws that govern government access to data and communications have attempted to strike a balance between the individual interest of a person and the community’s interest in fighting crime and terrorism, but have also recognized that national security may trump personal privacy. The laws discussed above are intended to limit the powers of law enforcement and national security personnel in their quest for evidence.

In the European Union, the analysis is similar. Directive 95/46/EC, the foundational document that defines the principles of privacy protection for all individuals and is implemented into the national laws of each E.U. and E.E.A. Member State, recognizes that there are cases where privacy rights must defer to other rights. The Directive carves out from the blanket protection of individuals with respect to the processing of personal data the ability for governments to have access to, or use of, personal information in connection with investigations that pertain to national security, defense, and related areas. Some of the issues of privacy in the context of police and judicial investigations are addressed in a separate document, the Council Framework Decision 2008/977/JHA of November 2008 on the protection of personal data processed in the framework of police and judicial cooperation in criminal matters.

A similar carve-out is provided in the E.U.-U.S. Safe Harbor Principles, which state, “adherence to [the Safe Harbor] principles may be limited to the extent necessary to meet national security, public interest, or the requirements of law enforcement.”

 

What Rules Apply Abroad?

While rules that pertain to government access to data and communications in the United States have received a lot of attention, most countries also have laws authorizing government investigations for national security and other purposes. We will examine these foreign laws in an upcoming article.

 

 

NOTE:  A prior version of this article was published in May 2012 by TechTarget under the title: Demystifying the Patriot Act; Cloud Computing Impact.

 


Compliance By Design

Posted by fgilbert on October 15th, 2011

How to build cloud applications that anticipate your customers’ legal constraints

To succeed and gain market share, developers of cloud services and cloud-based applications must take into account the compliance needs of their prospective customers. For example, a cloud that offers services to the health profession must anticipate that its customers are required to comply with HIPAA, the HITECH Act, and the applicable state medical information laws. If it fails to do so, it will not be able to sign up customers. Similarly, a cloud that uses servers located throughout the world must be sensitive to the fact that foreign data protection laws will apply, and that these laws have stringent requirements that differ from those in effect in the United States. If you fail to address these obstacles, your potential customers will take their business elsewhere.

Understand the Legal Constraints that Govern your Customers

Companies that use cloud services or cloud-based applications remain responsible for fulfilling their legal obligations and compliance requirements. These restrictions and requirements may come from federal or state laws and their related regulations, may stem from standards or preexisting contracts, or may result from foreign laws.

These companies will demand that their cloud service providers be aware of these requirements and design their applications and offerings in a manner that provides the customer with the tools necessary to comply with its own legal or contractual obligations.

A savvy cloud architect, designer, or developer will anticipate customers’ needs and design applications that facilitate the customers’ compliance efforts and help them fulfill their legal obligations.

Consider, for example, the following:

– Federal Laws

Numerous federal laws and their related regulations may apply to the specific category of data that are hosted in the cloud. Several laws and regulations, as well as orders issued by the Federal Trade Commission, require companies to adopt specific privacy and security measures to protect certain categories of data, and to pass along these requirements when entering into a contract with a third party such as a service provider or a licensee.

There are other requirements, such as ensuring the authenticity and integrity of financial records in order to comply with the Sarbanes Oxley Act. On the marketing side, anti-spam and other laws limit the use of personal data for commercial purposes and require the use of exclusion databases to ensure that communications are made only to the appropriate party.

– State Laws

Numerous state laws also create obligations for companies, and these obligations follow the data when the data are entrusted to third parties. For example, there are restrictions on the use of social security numbers and driver’s license numbers. If your application requires the processing of these data, it should include the technology required to mask the numbers from most users, and to block mailings that would disclose these protected numbers, when required by law.
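As a concrete illustration of masking, a display layer might reveal only the last four digits of a social security number. The following is a minimal Python sketch; the function name and the specific masking format are assumptions for illustration, not requirements drawn from any particular state law:

```python
def mask_ssn(ssn: str) -> str:
    """Return a display-safe version of a Social Security number,
    showing only the last four digits."""
    digits = [c for c in ssn if c.isdigit()]
    if len(digits) != 9:
        raise ValueError("expected a 9-digit Social Security number")
    return "***-**-" + "".join(digits[-4:])

print(mask_ssn("123-45-6789"))  # ***-**-6789
```

The same idea applies to driver’s license numbers or any other identifier that must be hidden from most users while remaining available to those with a need to know.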

Some state laws require that companies enter into written contracts with their service providers – including of course cloud providers – and these contracts must contain very specific provisions. If you are not prepared to sign these contracts and abide by the related requirements, do not waste time building a cloud application.

– Standards

Standards such as PCI DSS or ISO 27001 define specific information security requirements that apply to companies, and flow down to subcontractors, in a domino effect similar to that of federal or state laws.

– Foreign Laws

Cloud customers will also want to know in which country their data will be hosted, because the location of the data directly affects the choice of the law that will govern the data. If the data reside in a foreign country, it is likely that that country’s laws will govern at least some aspects of access to the servers where the data are hosted. For example, that country’s law may permit the local government to have unlimited access to the data stored in its territory, whereas you may be more familiar with the stricter restrictions that apply to access by US law enforcement to data stored in the United States.

– Crossborder Transfer Prohibitions

When servers are located abroad, there is also a significant obstacle: the prohibition against cross-border transfers of personal data. This is the case, for example, throughout the European Union, where all member states have implemented in their national data protection laws the 1995 EU Data Protection Directive’s prohibition against transfers of personal data out of the European Economic Area to countries that do not offer an adequate level of protection for personal data and privacy rights.

As part of your Compliance by Design endeavor, you should anticipate that your customers might be concerned about where the personal data of their employees or clients will be hosted or located, because foreign data protection laws may impose restrictions on these data. And you should design your offering accordingly.

Ensure Personal Data Protection

A substantial amount of the data that might be held in the cloud will be personal data. In the US and abroad, personal data are protected by a growing number of privacy and data protection laws. In general, these laws impose on the entity that originally collected the data, and has become their custodian, an obligation to protect the privacy rights of the individuals to whom the data pertain.

In a cloud environment, each entity or data steward must continue to be able to fulfill the legal requirements to which it is subject and to meet the promises and commitments that it made to the third parties from whom it collected the personal data. It must also ensure that individuals’ choices about their information continue to be respected, even when the data are processed in a cloud environment. For example, individuals may have agreed only to specific uses of their information. Data in the cloud must be used only for the purposes for which they were collected, whether the data were collected in or through the cloud, or otherwise.

Anticipate the Need to Provide for Access, Modification, and Deletion of Personal Data

In addition to the above, the applicable law or privacy notice may allow individual data subjects to have access to their personal data, and to have this information modified or deleted if inaccurate or illegally collected. In this case, the cloud service provider must design its application in anticipation of the fact that the application will have to allow, easily and conveniently, for the exercise of these access, modification and deletion rights to the same extent and within the same timeframes, as it would in an off-cloud relationship.
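In practice, this means building data-subject operations into the service from the start, rather than bolting them on later. The sketch below is a minimal, illustrative Python example of the three operations; the class and method names are assumptions, and a real service would add authentication, audit trails, and persistence:

```python
class PersonalDataStore:
    """Illustrative in-memory store supporting a data subject's
    access, modification, and deletion rights."""

    def __init__(self):
        self._records = {}

    def access(self, subject_id):
        """Return a copy of the subject's data (right of access)."""
        return dict(self._records.get(subject_id, {}))

    def modify(self, subject_id, field, value):
        """Correct an inaccurate field (right of rectification)."""
        self._records.setdefault(subject_id, {})[field] = value

    def delete(self, subject_id):
        """Erase the subject's data (right of deletion)."""
        self._records.pop(subject_id, None)

store = PersonalDataStore()
store.modify("u1", "email", "old@example.com")
store.modify("u1", "email", "new@example.com")
print(store.access("u1"))   # {'email': 'new@example.com'}
store.delete("u1")
print(store.access("u1"))   # {}
```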

Ensure Adequate Information Security

You should also be prepared to address your customer’s security needs. All data entrusted to you will require a reasonable level of security, whether they are the photos of the company picnic, or the secret formula for that special product for which your customer is famous. In addition, many categories of data that might be hosted in the cloud, such as personal data, financial data, customer purchases and references, or R&D data are sufficiently sensitive to require being protected through more extensive security measures.

The obligation to provide adequate security for personal data stems from numerous privacy and data protection laws, regulations, standards, cases, and best practices. For some categories of data, such as personal data or company financial data, specific laws or security standards require the use of specific security measures to protect these data. These laws and standards include, among others, the Sarbanes Oxley Act, GLBA, HIPAA, Data Protection Laws in Europe or Asia, as well as the PCI DSS and the ISO 27001 security standards. Further, the common law of information security created by the FTC or State Attorney General rulings also requires that adequate security measures be used to protect sensitive data. The obligation to maintain a reasonable level of security may also result from contracts or other binding documents where the cloud customer has previously committed to a third party that it would use adequate security measures.

You should design the security foundation and architecture of your cloud offering to address the applicable security requirements of the market that you wish to reach. You should also be prepared to commit to your client that you will use specified information security measures to protect the personal data processed through your cloud application.

Be Prepared to Disclose Security Breaches

Security incidents are prone to occur. The US states and an increasing number of foreign countries have adopted security breach disclosure laws that require the custodian of specified categories of personal data to notify individuals whose data might have been compromised in a breach of security. Frequently, the local State Attorney General, Data Protection Supervisory Authority, or other government agency must be notified as well.

If a security incident occurs in the cloud, the customer, who usually has the primary relationship with the individuals concerned, expects to be informed of the incident so that it can, in turn, notify the affected business contacts, employees, or clients of the occurrence of the breach. To do so, the cloud customer must be informed promptly of the occurrence, nature, and scope of the breach of security.

Thus, as a cloud service provider you should have in place the processes necessary to identify a security breach, and to promptly notify your customers of the occurrence of the breach. Just like your own customers, you should have in place a security incident response plan to address the security breach thoroughly and expeditiously, promptly stop any leakage of data, eliminate the cause of the breach of security, identify who and which category of data were or might have been affected, and interact with your customers to mitigate the effect and consequences of the breach.
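The customer-notification step described above can be sketched in a few lines. The Python example below is purely illustrative; the incident fields and the delivery mechanism are assumptions, and the legally required content and timing of a notice depend on the applicable breach disclosure law:

```python
import datetime

def notify_customers(incident, customers, notify_fn):
    """Illustrative sketch: once a breach is confirmed, promptly tell
    each affected customer what happened, when it was detected, and
    which categories of data were involved."""
    notice = {
        "detected_at": incident["detected_at"].isoformat(),
        "nature": incident["nature"],
        "data_categories": incident["data_categories"],
    }
    for customer in customers:
        notify_fn(customer, notice)  # e.g., secure email or portal alert

# Example use with a stand-in delivery function that collects notices.
sent = []
notify_customers(
    {"detected_at": datetime.datetime(2011, 10, 1, 9, 30),
     "nature": "unauthorized database access",
     "data_categories": ["names", "email addresses"]},
    ["acme-corp"],
    lambda customer, notice: sent.append((customer, notice)),
)
print(len(sent))  # 1
```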

Ensure Business Continuity

Your customers and prospects may also be required by law or by contract to ensure the continuity of their operations and uninterrupted access to their data. This is the case, for example, under the HIPAA Security Safeguards. A hospital that provides technology or medical information database services to the physicians on its staff must provide continued access to patient information in order to ensure proper patient care. This requirement applies as well to the business associates that provide services to the hospital. The PCI DSS standard also requires companies to have an incident response plan that includes business recovery and continuity procedures.

When these applications are hosted in a cloud, the customers or prospects will want to ensure that the cloud service provider has in place proper business continuity and disaster recovery capabilities because they are essential to ensure the viability of their own operations and in some cases because this is required by applicable law. Thus, if you design a cloud offering, be sure to plan and implement appropriate disaster recovery and business continuity measures, so that you can help your customers meet their own business continuity requirements.

Be Prepared to Assist your Client with its E-Discovery Obligations

If there is a civil suit in which the cloud service customer is a party, or if there is an investigation by a government agency, the cloud service provider is likely to receive a request for access to the information that it holds as the hosting entity. This request may come directly from the customer, for the benefit of the customer, or it may come from third parties who wish to have access to evidence against the customer.

You should anticipate your customers’ request for assistance in implementing a litigation hold or responding to a request for documents. You should be ready to respond to inquiries from your prospects or potential customers about how you will work and cooperate with them to address compliance with the requirements of the E-Discovery provisions of the Federal Rules of Civil Procedure and the State equivalents to these laws. You should plan and agree ahead of time on each other’s roles and responsibilities with respect to litigation holds, discovery searches, the provision of witnesses to testify on the authenticity of the data, and the provision of primary information, metadata, log files and related information.

Anticipate Requests for Due Diligence and Monitoring

Whether it is required to do so by law, by contract, or otherwise, your customer or prospect will also want to conduct due diligence before entering into the contract, and will also want to be able to periodically monitor the performance and security of your applications. Consider, for example the monitoring and testing requirements under the Security Safeguards under HIPAA or GLBA, or those in the orders issued by the Federal Trade Commission or the State Attorneys General.

Be prepared to respond to these requests for due diligence, monitoring, or inspection and provide for the cloud customer’s ability to conduct its investigation in a manner that satisfies the customer’s needs while not disrupting your operations. For example, develop a security program that is consistent with industry standards, provide for easy to access logs for access to data, and put in place controls that prevent the modification of data.
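The "easy to access logs" mentioned above can be as simple as an append-only record of who touched which data, and when. The Python sketch below is an illustrative assumption about what such a log might look like; a production system would use durable, tamper-evident storage rather than an in-memory list:

```python
import datetime

class AccessLog:
    """Illustrative append-only log of data-access events, suitable
    for customer due-diligence review."""

    def __init__(self):
        self._entries = []

    def record(self, user, action, record_id):
        """Append one immutable access event."""
        self._entries.append({
            "timestamp": datetime.datetime.utcnow().isoformat(),
            "user": user,
            "action": action,
            "record_id": record_id,
        })

    def entries_for(self, record_id):
        """Return all events touching a given record, for review."""
        return [e for e in self._entries if e["record_id"] == record_id]

log = AccessLog()
log.record("alice", "read", "patient-42")
log.record("bob", "update", "patient-7")
print(len(log.entries_for("patient-42")))  # 1
```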

Ensure a Smooth Termination

No one wants to lose a good customer. Be realistic, however, and accept that termination might occur. Do not be an obstacle to the termination of a contract, or your reputation will suffer. Show your prospective clients that they can trust you, and that they will not be held hostage if they want to move on.

Accept that, upon termination of the contract, cloud customers must be able to retrieve their data, or to have data that are no longer needed destroyed. Make it easy for them to do so; show respect for, and awareness of, your customers’ own constraints. Be prepared to respond to a customer’s request for the return, transfer, or destruction of the data, assess in advance the associated costs, and have in place the technology, processes, and procedures needed to address the special needs resulting from termination.

Planning for termination will reduce disputes and the resulting disruptions. If termination is not planned properly, problems might occur. The data might have been commingled with other customers’ data to save space or for technical reasons. This entanglement might make it difficult, time consuming, expensive, or perhaps impossible to disentangle the data.

Conclusion

If you want your cloud offering to be successful, put yourself in your customers’ shoes. Anticipate their needs. Help them comply with their obligations. Design a cloud offering that will allow them to continue to comply with their own obligations in the same way as they did when their data, files, trade secrets, and other crown jewels were in their direct control.


Department of Commerce Publishes Green Paper on Privacy

Posted by fgilbert on January 6th, 2011

On December 16, 2010, the Department of Commerce released its Internet Policy Task Force Privacy Green Paper, which details recommendations on the protection of consumer privacy online.  Titled “Commercial Data Privacy and Innovation in the Internet Economy:  A Dynamic Policy Framework”, the Report provides a set of recommendations to strengthen data privacy while protecting innovation, job creation, and economic growth.

The Report recognizes that more than self-regulation is needed.  It acknowledges the economic and social importance of preserving consumer trust in the Internet, and the need to keep pace with changes in technology, online services and Internet usage.  To do so, consumers need more transparency and control over the use of their personal information.  The new framework must help increase protection of consumers’ commercial data while supporting innovation and evolving technology.

The Report makes recommendations in several key areas:

  • Establish Fair Information Practice Principles comparable to a “Privacy Bill of Rights” for Online Consumers

The Report recommends that the US Government articulate certain core privacy principles in order to assure baseline consumer protections.  These principles would define how online companies can collect and use personal information for commercial purposes. They would build on existing Fair Information Practice Principles (FIPPs), which include the principles of notice, choice, and security.  The additional principles would include limitation, purpose specification, and accountability.

  • Develop Enforceable Privacy Codes of Conduct in Specific Sectors with Stakeholders; Create a Privacy Policy Office in the Department of Commerce.

The Report recommends the creation of a Privacy Policy Office in the Department of Commerce.  This office would be tasked with examining the commercial uses of personal information and evaluating whether there are gaps or uncertainty in privacy protection. It would also convene stakeholder dialogues, and help develop enforceable privacy codes of conduct.

  • Encourage Global Interoperability to Spur Innovation and Trade.

The Report acknowledges that disparate privacy laws create regulatory barriers and have a negative impact on global competition. It recommends that the U.S. Government work together with its trading partners to find practical means of bridging differences in privacy frameworks and reduce the significant business compliance costs.

  • Harmonize Security Breach Notification Rules

The Report recommends looking at ways in which to harmonize the security breach disclosure laws, which require businesses to notify customers about security breaches that expose personal data. It envisions that a Federal Law would help to reconcile inconsistent state laws, streamline compliance, and allow businesses to develop a strong, nationwide data management strategy. This Federal law would authorize enforcement by the Federal Trade Commission, and preserve the existing enforcement power of state authorities.  The law would not preempt other federal security breach notification laws for specific sectors, such as healthcare.

  • Review the Electronic Communications Privacy Act for the Cloud Computing Environment

The Report recommends the revision of the Electronic Communications Privacy Act (ECPA) to address privacy protection in cloud computing and location-based services, so that the ECPA can continue to appropriately protect individuals’ privacy expectations and punish unlawful access and disclosure of consumer data, as technology and market conditions change.

The Department of Commerce is seeking public comments to its proposed framework.  Comments must be provided by January 28, 2011.

To download a copy of the Report, visit http://www.commerce.gov/node/12471.


Of Cookies and Spam

Posted by fgilbert on June 22nd, 2010

 

What’s Cookin’ in the European Union?

The European Union Member States will soon change the rules that apply to cookies and unsolicited messages. Recent amendments to the ePrivacy Directive require the Member States to implement new restrictions in their national laws by June 2011. These changes are likely to significantly affect the procedures and processes used for marketing in, or with, the European Union. The most important change creates new rules for the use of cookies.

1. Background and the 2009 Directive

The e-Privacy Directive is “the other directive” that applies to the protection of personal data in the European Union, in addition to the 1995 EU Data Protection Directive. Adopted in 2002, this directive identifies the restrictions that are intended to protect personal data in the context of wire or Internet communications.

The European Parliament and the Council of the European Union adopted Directive 2002/58/EC Concerning the Processing of Personal Data and the Protection of Privacy in the Electronic Communications Sector in order to address the uses and abuses of new, advanced digital technologies. The ePrivacy Directive supplements Directive 95/46/EC by addressing unsolicited commercial messages and the use of fax and similar technologies for telemarketing purposes, and by creating a framework for the use of cookies, traffic data, location data, and public directories.

The 2002 version of the ePrivacy Directive was amended through Directive 2009/136/EC, which became effective in December 2009, and requires Member States to modify their national laws accordingly by June 2011. This amendment was part of a larger series of amendments that updated the existing regulatory framework for electronic communications networks and services. Several provisions of the 2002 ePrivacy Directive are significantly altered through these amendments. This, in turn, will cause a ripple effect when these amendments are implemented in the national laws of each of the EU Member States. Unfortunately, portions of the 2009 Directive are confusing, which is likely to cause significant discrepancies in the way that the Member States interpret this new directive.

2. New Rules for Cookies and Spyware

The most important change brought by the 2009 Directive is a new requirement for the use of cookies. The 2009 Directive replaces Article 5(3) of the ePrivacy Directive in order to strengthen the rights of individuals.

Under the 2002 version of Article 5(3) of the ePrivacy Directive, Member States’ national laws must ensure that electronic communications networks are not used to store information or to gain access to information stored in the terminal equipment of a subscriber or user unless the subscriber or user concerned:

  • Has first received clear and comprehensive information in accordance with the 1995 Data Protection Directive about the purposes of the processing; and
  • Is offered the right to refuse such processing by the data controller.

According to the Preamble of the 2002 Directive, information and the right to refuse may be offered once for the use of various devices or pieces of code to be installed on the user’s terminal equipment during the same connection and may also cover any further use that may be made of those devices during subsequent connections. The methods for giving information, offering a right to refuse, or requesting consent should be made as user-friendly as possible. Access to specific website content may be made conditional on the well-informed acceptance of a cookie or similar device, if it is used for a legitimate purpose.

a. Notice requirement

The 2009 version of Article 5(3), which supersedes the prior version, retains the notice requirement of the 2002 draft. It states that the subscriber must be provided with “clear and comprehensive information about the purposes of the processing” in accordance with the 1995 Data Protection Directive.

b. Consent or Right to Refuse?

The new Article 5(3) of the ePrivacy Directive requires the user’s consent. Member States’ national laws must provide that electronic communications networks may store information, or gain access to information already stored, in a user’s or subscriber’s equipment only “if the subscriber or user concerned has given his or her consent.”

There is no definition of “consent” either in the 2002 version of the ePrivacy Directive or in the 2009 Directive. Further, while the 2002 Directive distinguishes “consent” from “explicit consent,” the 2009 Directive does not.

The “right to refuse” in the 2002 version had been understood as an “opt-out.” The user should have the ability to “refuse” cookies by setting his or her browser accordingly. The processing occurs unless the user stops it and indicates his or her opposition through the relevant browser setting. The user is free to change the browser settings at any time.

If the requirement for “consent” is to replace that of a “right to refuse,” what is the difference between the two options? Does it mean that each website should have a landing page in which it provides information about its cookies, so that a visitor can then agree to the policy before entering the site?

Unfortunately, Recital 66, in the preamble to the 2009 Directive, which should provide background and comments on these provisions, only adds confusion. There are discrepancies between the text of the amended Article 5(3), and that of Recital 66. While Article 5(3) requires the user’s “consent,” Recital 66 refers to both the “right to refuse” and the obligation to obtain “consent.” For example, one sentence states: “the methods of offering … the right to refuse should be as user-friendly as possible.” The next sentence, however, provides “where it is technically possible and effective, … the user’s consent to processing may be expressed….”

To add even more to the confusion, Recital 66 also indicates that the user can express his consent through his browser. The drafters comment that the user could express his “consent to the processing” by using appropriate settings on his browser or other application. If this is the case, then how does the new rule differ from the current state, and what did the amendment intend to accomplish?

These amendments to the 2002 ePrivacy Directive now have to be implemented and interpreted by the 27 Member States. The different possible interpretations of the new Article 5(3) and of Recital 66 of the Preamble of the 2009 Directive are likely to result in significant discrepancies in the laws of the different Member States, the opposite effect of what a directive should accomplish.

c. Exceptions

Like the 2002 version, the new Article 5(3) of the e-Privacy Directive provides exceptions to the consent requirement for certain types of cookies. The exceptions are the same in both the 2002 version of the Directive and the 2009 Directive.

One exception allows the use of cookies without consent when the technical storage of, or access to, information is for the sole purpose of carrying out the transmission of a communication over an electronic communication network. The other exception is when the use of a cookie is strictly necessary for the provider of an information society service explicitly required by the subscriber or user to provide the service.

According to the Preamble to the 2009 Directive, these exceptions should be limited to those situations where the technical storage or access is strictly necessary for the legitimate purpose of enabling the use of a specific service explicitly requested by the subscriber or user. Thus, presumably, session cookies that can take a user from one page to another (e.g. from a page where an order is placed to a checkout page where payment is made) would be allowed. Persistent cookies and web beacons, which may be used for web analytics, but could also be used for behavioral targeting purposes, would require consent.
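
The practical difference can be sketched with a short, purely illustrative example. The Directive prescribes no particular cookie syntax; the names and values below are hypothetical, and the legal characterization ultimately depends on the cookie's purpose, not its technical form:

```python
# Hypothetical sketch: the technical distinction between a session cookie
# and a persistent cookie, as a server would emit them in HTTP headers.

def session_cookie(name: str, value: str) -> str:
    # No Expires/Max-Age attribute: the browser discards the cookie when it
    # closes. If strictly necessary for a service the user requested
    # (e.g. keeping a shopping cart through checkout), it may fall under
    # the exceptions to the consent requirement.
    return f"Set-Cookie: {name}={value}; Path=/"

def persistent_cookie(name: str, value: str, max_age_seconds: int) -> str:
    # Max-Age (or Expires) makes the cookie survive browser restarts --
    # the kind typically used for analytics or behavioral targeting,
    # which would require consent under the new Article 5(3).
    return f"Set-Cookie: {name}={value}; Path=/; Max-Age={max_age_seconds}"

print(session_cookie("cart", "abc123"))
print(persistent_cookie("tracker", "xyz789", 60 * 60 * 24 * 365))
```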

3. Unsolicited Commercial Messages

In addition to the provisions pertaining to the use of cookies, the 2009 Directive slightly modifies the regime that applies to unsolicited commercial messages. It extends the scope of the ePrivacy Directive. Under Article 13 of the e-Privacy Directive, the national laws of the EU Member States must provide that unsolicited commercial messages may not be sent to an individual unless the individual has previously opted in to receive them. If the merchant has a relationship with a preexisting customer, an opt-out is acceptable, but only when the communication is made for marketing similar products or services to that customer, and subject to several prerequisites. In this case, the merchant may use the contact information that it collected from the customer in the course of the sale of a product or service.

Several conditions must be met:

  • The contact information must have been obtained in the context of the prior sale of a product or a service to the individual by the same company.
  • The contact information must have been obtained in compliance with the applicable Member State national law that implements the 1995 Data Protection Directive. Notice must be provided and consent obtained. The notice to the individual must describe which data are being collected, the identity of the entity collecting the information, for which purpose(s) the data will be used, the recipients or categories of recipients of the data, and the existence of the right of access to and the right to rectify the data concerning the customer.
  • The company must inform the customer in a clear and distinct manner that the data might be used again for direct marketing.
  • The customer must be given clearly and distinctly the opportunity to object, free of charge and in an easy manner, to such use.
  • The ability to opt-out of the further use of the contact information must be provided both when the information is first collected, and on each use of the information.
  • The use of contact information previously obtained fairly and lawfully is limited to sending the customer information about products or services that are similar to those previously provided to that customer.

These provisions apply only to protect natural persons. However, the directive urges Member States to consider similar provisions to protect legal persons, as well, from unsolicited communications.

The 2009 Directive expands the scope of these provisions:

a. Opt-In for Robocalls, Fax, Email, and Text Messages

The e-Privacy Directive requires that Member States’ national laws implement an opt-in regime for automatic calling machines, facsimile machines, and emails and text messages used for direct marketing.

The 2009 Directive extends the restriction to MMS and similar applications.

b. Person-to-Person Voice Telephony

For communications other than those through automated calling machines, facsimile machines, email, text, SMS, or MMS messages, or the use of email addresses obtained in a prior relationship, such as person-to-person voice telephone calls, the e-Privacy Directive allows Member States to choose between an opt-in and an opt-out regime.

In the 2002 version of the ePrivacy Directive, this privilege was granted only to subscribers; the 2009 Directive extends it to users as well.

Comments Off on Of Cookies and Spam

Information Privacy And Security Current And Emerging Issues In The United States

Posted by fgilbert on June 4th, 2010

Not so long ago, the Internet was a separate world.  We distinguished e-commerce and other activities in “cyberspace” from those conducted in the brick and mortar world.  Today, most companies are exploiting, at the same time and to the fullest extent possible, all of the vast resources that are available through the Internet, the World Wide Web, and otherwise.

Concurrent with the convergence of cyberspace with the brick and mortar world, telephone and information technologies are converging.  From one single device, we can make calls, send emails, browse the web, review our documents, and even pay for our lattes.  With this convergence, and the ubiquitous need for access to personal information databases, data protection issues have gained greater importance.  Without customer information, companies cannot create products adapted to client needs or target the right client for a sale.

However, holding personal information without adequate safeguards may lead to disaster.  Companies have lost goodwill, to the point of bankruptcy, for having failed to address privacy and information security issues.

This article will look at selected current issues and trends in information privacy and security.

Current Issues

  • Accountability for Proper Security

While information privacy and security concepts were first developed in the early 1970s, it is only with the enactment of the modern data protection laws, such as GLBA and HIPAA, that certain markets became aware of, and were required to implement, security safeguards to protect the confidentiality, integrity, and authenticity of personal information.  Today, this requirement has been extended to all companies that hold sensitive personal information.  The Federal Trade Commission has made it an “unfair practice” under Section 5 of the FTC Act to hold personal data without providing adequate security.  California law requires companies that hold social security numbers or bank account numbers in combination with the first and last name of individuals to implement “reasonable security measures.”  It also requires these companies to impose the same obligation in their contracts with their service providers.

The liability thresholds have also been raised by a recent Minnesota law, which became effective in the summer of 2007.  Under this new law, companies that retain credit card data after receiving the authorization of the transaction will be held strictly liable for any damages caused by a breach of security.  If data have been exposed, liability will follow without a plaintiff having to prove that the business was negligent.  Damages will include the cost of “reasonable actions undertaken” by financial institutions to respond to the breach, such as the costs to cancel or reissue any access device affected by the breach; close accounts affected by the breach and take any action to stop payments or block transactions with respect to the accounts; open or reopen accounts affected by the breach; make any refund or credit to a cardholder to cover the cost of unauthorized transactions related to the breach; and notify the cardholders affected by the breach. The financial institution will also be entitled to recover the costs of damages that it paid to cardholders injured by the breach.  Businesses will also be responsible for violations by their service providers.

Security to protect personal information has also been required under the laws that have implemented the 1995 European Union Data Protection Directive.  US companies that wish to self-certify under the Safe Harbor, or that are contemplating the use of the Model Contracts, must ensure that they have security measures in place, and that their service providers do the same.

Failure to have adequate security measures is likely to lead to security breaches, which US companies are required to report to the affected parties, clients or employees, under the security breach notification laws enacted in over 40 states.  Japanese companies have the same obligation.  The European Union is also said to be contemplating revisions to its laws to implement a similar requirement.

  • E-Discovery, Records Retention and Destruction Issues

The need for adequate security measures and document control is also created by the new e-discovery rules that result from a recent amendment of the US Federal Rules of Civil Procedure, which was adopted after several well-reported cases took unexpected turns when the parties battled each other over the production of evidence.  The courts questioned the quality and completeness of the files produced, and the all-too-convenient loss, misplacement, or destruction of electronic evidence that was key to the case.

In the employment discrimination case Zubulake v. UBS Warburg, 220 F.R.D. 212 (SDNY 2004), which spanned several years because of evidentiary issues, for example, the court ruled that the employer had willfully deleted relevant emails despite contrary court orders.  The court granted the plaintiff’s motion for sanctions and ordered the employer to pay costs because it had failed to locate relevant information, to preserve that information, and to timely produce that information.

The amendments to the Federal Rules of Civil Procedure, recently adopted, create a new regime for litigation in an era where emails and other electronic documents constitute a crucial component of the litigants’ case.  Organizations have to take affirmative steps to prevent spoliation of electronic evidence, whether negligent or intentional.  They must ensure that identified relevant documents are preserved by placing a “litigation hold” on the documents, communicate the need to preserve them, and arrange for safeguarding of relevant archival media.

U.S. courts will not hesitate to impose sanctions for spoliation of electronic documents, even if it results from document mismanagement.  In this new era, companies have to address document retention and preservation issues.

Companies must take affirmative steps to implement appropriate enterprise security programs that ensure that the location of all documents is known, and that these documents are protected and destroyed only according to appropriate policies.  When a suit is filed, they must ensure that all sources of discoverable information are retained and produced.

  • Proper Treatment of Customer Databases in Corporate and Commercial Transactions

Due diligence and other checklists for corporate or commercial transactions have also evolved with current data protection trends.  A company can no longer simply transfer or license its database of customer information.  Both parties to the transaction must first ensure that the transfer is not prohibited, and each must review the other’s privacy policies.

In a recent case where a database of personal information was used in connection with a services agreement, the client was found to have an obligation to verify that its service provider had the right to use the personal information it was using to provide the service. Relying only on a mere representation or warranty in a contract was deemed insufficient. (http://files.ali-aba.org/thumbs/datastorage/lacidoirep/articles/PL_ACFF154_thumb.pdf)

In that case, the company was in the business of sending emails to consumers.  In order to promote the products and services of its advertising clients, it obtained the email addresses from list providers, which had gathered these lists through a variety of means.

The New York Attorney General’s investigation of the provenance of these marketing lists revealed that some of the company’s list providers, on their own websites, had promised consumers they would NOT sell, rent, or share their information to or with third parties.  On the other hand, the company represented on its website that recipients of its email campaigns “have all requested to receive information about products and services”.

In its March 2006 settlement, the company agreed to pay $1.1 million as penalties, disgorgement, and costs. Reliance on the list provider’s representations or warranties that the use of the contact information was permissible was found insufficient, on its own, to fulfill the obligation of an independent review.  The settlement agreement stated that the party that is acquiring personal information must first independently confirm that such acquisition is permissible under relevant seller privacy policies.  It must independently review all applicable privacy policies that were in effect when the information was collected, and independently confirm that such policies clearly disclosed that the information collected would or might be shared.  In the absence of such explicit terms, it must confirm, through first-hand investigation, that consumers affirmatively opted-in to permit such sharing.

It is therefore recommended that, in the event of a corporate or commercial transaction that involves personal information, the recipient of this information (a) conduct due diligence; (b) conduct a thorough review and analysis of the co-contractor’s or target’s information privacy and security policies and practices; and (c) not rely solely on written representations and warranties.

  • Outsourcing, outsourcing, outsourcing

Many US companies continue to feel that  “outsourcing, outsourcing, outsourcing” is the key to success.  “Outsourcing,” here, encompasses IT outsourcing, Business Process Outsourcing, Legal Process Outsourcing, Offshoring, and similar agreements.  Indeed, outsourcing might provide savings, efficiencies associated with standardization, and attractive balance sheets; but it presents great risks for client and employee personal information.

Poor privacy and information security safeguards have caused great losses, embarrassment, and loss of goodwill when outsourcers or service providers failed to use adequate security.  For example, MasterCard, Visa, Discover, American Express, and other large financial institutions were forced to reissue cards, pay for credit record monitoring services, and rebuild customer trust when a hacking incident at their service provider CardSystems caused the compromise of 40 million credit card numbers. (http://money.cnn.com/2005/06/17/news/master_card/index.htm)

When outsourcing contracts involve providing or giving access to personal information, thorough due diligence is essential to investigate the privacy awareness and security practices of the potential service providers.  Comprehensive and detailed contracts must define safeguards and other mechanisms to ensure adequate security to protect personal information, and compliance with privacy laws.  During the performance phase, companies must keep monitoring the performance of their vendor.  Failure to take privacy and security concerns seriously during these three phases would create exposure to great liability.  Several US laws and current jurisprudence require companies to ensure the protection of certain personal information in their custody, and this obligation extends to subcontractors and service providers of these entities.

Emerging Issues

As we are moving into the Web 2.0 era, and we are seeing the emergence of new uses of technology that seem to be stepping out of science fiction books, numerous legal issues are being raised.  Information privacy and security are likely to continue to be a top concern and priority.  Consider, for example, the following trends:

  • New Advertising models.  The customers’ footsteps are tracked to serve “better content,” more adapted to the customer’s needs.
  • Digital rights management.  These systems track customer uses.  What song or movie is accessed, when, how, where from which machine?
  • Social networking.  MySpace and Facebook are providing forums for disclosing the undisclosable.
  • RFID, GPS, and location-based services allow tracking of individuals, and cause serious privacy and security concerns (Nowhere to Hide, by Francoise Gilbert, http://itlawgroup.com/privacy_publications.html)
  • Mobile web.  Advertisements sent to cellphones.  Electronic payments made easy.  Customers tracked everywhere.  Privacy might be achieved only by turning off the device.
  • Second Life.  Do avatars have feelings, and … a right of privacy?

While most of the emerging trends above are exciting, creative business activities, certain practices might have dramatic consequences for personal privacy.  In addition, current practices might also take a sour turn.  For example, as the cost of living increases in India or Eastern Europe, where many companies have outsourced their call centers, so does the cost of the personnel entrusted with the delicate missions outsourced to them.  If the outsourcer cannot increase the fees paid by its American client, it may attempt to unload the engagement elsewhere, transferring its work to others with lower wages, and possibly lesser privacy or security practices and awareness.

Conclusion

The information and communications technologies created at the end of the twentieth century are becoming very powerful and creating new opportunities.  Physical and geographical boundaries are crumbling, allowing for greater exchange.  Individuals seem to become more empowered.  The blogger becomes a journalist, the YouTube user a movie star.  The Second Life avatar can be a superhero.  However, in this emerging world where individuals seem more valued and powerful, privacy might be under attack and security might be endangered.  Legal issues will abound.

Comments Off on Information Privacy And Security Current And Emerging Issues In The United States