On March 22, 2016, the Equal Employment Opportunity Commission (“EEOC”) filed suit in the United States District Court for the Western District of Missouri against Grisham Farm Products, Inc., alleging that its employment application violated the Americans with Disabilities Act (“ADA”) and the Genetic Information Nondiscrimination Act (“GINA”). Equal Employment Opportunity Commission v. Grisham Farm Products, Inc., No. 16-cv-03105.  According to the EEOC’s Complaint, Grisham

violated the ADA and GINA by requiring job applicants . . . to fill out a three-page ‘Health History’ before they would be considered for a job.

Plaintiff applied for a warehouse position at Grisham. The application contained 43 “yes or no” health-related questions. The questions were ones that might be seen when visiting a physician for the first time. For example, the EEOC’s Complaint alleges that the application inquired whether in the past 10 years, the applicant had (alphabetically) allergies, arthritis, bladder infections, eating disorders, gallstones, sexually transmitted diseases, etc. The form also inquired about prior hospitalizations, HIV infection, treatment for alcoholism, and whether the applicant “consulted a doctor, chiropractor, therapist, or other health care provider in the past 24 months.”

The application’s Health History section stated in large letters that:

All questions must be answered before we can process your application.

According to the EEOC, after answering the first question, the plaintiff, Sullivan, stopped. He had medical conditions and disabilities he would have revealed had he fully and completely answered each question, and he believed he did not have to reveal his medical history to any potential employer. He telephoned Grisham Farm, and the company representative with whom he spoke said that if the health history was not fully completed, the company would not accept his application. According to the Complaint, Sullivan refused to complete the health history.

In addition to requesting a permanent injunction barring Grisham Farm from making any pre-employment medical inquiries, the EEOC suit seeks monetary and punitive damages for the plaintiff.

In a statement issued in conjunction with the filing of the Complaint, the EEOC referred to the health form as being “among the most egregious we have seen.” This case should serve as a reminder to employers that pre-employment health inquiries can be made only after a conditional offer has been made, if the inquiries are made to all applicants for that job category, and provided the inquiries are job-related and consistent with business necessity.

On March 24, 2016, Tennessee’s breach notification statute was amended when Governor Bill Haslam signed into law S.B. 2005.

Under the amendment, notification of a data breach must now be provided to any affected Tennessee resident within 45 days after discovery of the breach (absent a delay request from law enforcement).  Previously, like the vast majority of states, Tennessee’s statute required disclosure of a breach to be made in the most expedient time possible and without unreasonable delay.  Florida previously amended its breach notification statute to require notification within a set time period as well.

Perhaps even more important than the specific timing requirement for notice, S.B. 2005 also amends Tennessee’s statute to remove the provision requiring notice only in the event of a breach of unencrypted personal information.  Accordingly, by removing this limitation, Tennessee appears to be the first state in the country to require breach notification regardless of whether the information subject to the breach was encrypted.

Lastly, the bill also amends the statute to specify that an “unauthorized person” includes an employee of the information holder who is discovered to have obtained personal information and intentionally used it for an unlawful purpose.  This amendment is likely aimed at entities that failed to provide notification of data incidents resulting from improper access by employees.

The law takes effect July 1, 2016.

One year ago, in March 2015, the Federal Communications Commission (“FCC”) reclassified broadband Internet access service as a common carrier Telecommunications Service subject to regulation under Title II of the Communications Act.  At that time, however, the FCC recognized that the then-current rules were not well suited to broadband privacy.  On March 10, 2016, the FCC’s Chairman Tom Wheeler circulated for consideration by the full Commission a Notice of Proposed Rulemaking (“NPRM”) that effectively represents the start of the process of adopting rules suitable to broadband service.

The proposed rules would be built on three core principles: choice, transparency, and data security.

Choice – Internet Service Providers (“ISPs”) would be required to provide customers with varying degrees of choice (i.e., no consent required, opt-out or opt-in), depending on how the customer’s personal information is used.

Transparency — ISPs would be required to disclose in “an easily understandable and accessible manner” the types of information they collect, how they use that information, and the circumstances in which they will share customer information with third parties.

Security — The proposal would require broadband providers to take reasonable steps to safeguard customer information from unauthorized use or disclosure. And, at a minimum, the proposal would require broadband providers to adopt risk management practices; institute personnel training practices; adopt strong customer authentication requirements; identify a senior manager responsible for data security; and take responsibility for use and protection of customer information when shared with third parties.

In order to encourage ISPs to protect the confidentiality of customer data, and to give consumers and law enforcement notice of failures to protect such information, the Chairman’s proposal includes common-sense data breach notification requirements. Specifically, in the event of a breach, providers would be required to notify:

  • Affected customers of breaches of their data no later than 10 days after discovery.
  • The Commission of any breach of customer data no later than 7 days after discovery.
  • The Federal Bureau of Investigation and the U.S. Secret Service of breaches affecting more than 5,000 customers no later than 7 days after discovery of the breach.

The proposed rule would apply exclusively to providers of broadband Internet access service and not to providers such as Amazon and Facebook or other operators of social media websites.

The proposal will be voted on by the full Commission on March 31, and, if adopted, would be followed by a period of public comment.

Yesterday, the federal Office for Civil Rights (OCR) announced Phase 2 of its HIPAA Audit Program (Program). In its announcement, the OCR reports that the Program is underway and provides some helpful FAQs for covered entities and business associates about the Program. Preparation is critical and there are some key points covered entities and business associates should focus on.

Every covered entity and business associate is eligible for an audit. So, don’t think that because you are a small health care provider or sponsor a group health plan for employees you will be out of the Program’s reach. Auditee selection will be based on a number of criteria, including the size of the entity, affiliation with other healthcare organizations, the type of entity and its relationship to individuals, whether an organization is public or private, geographic factors, and present enforcement activity with OCR. The OCR appears to be looking to examine a healthy cross-section of covered entities and business associates. On the bright side, OCR stated it will not commence an audit under the Program where there is an open complaint investigation or a current compliance review.

Potential auditees will be screened. OCR may send a questionnaire to covered entities asking them to identify their business associates and provide their contact information. OCR warns that if it does not receive responses to these requests it will use publicly available information to create its audit pool, and nonresponsive entities still may be selected for an audit or subject to a compliance review. In fact, OCR informs covered entities and business associates that it expects them to check their junk or spam email folders for OCR communications about the Program.

…we expect you to check your junk or spam email folder for emails from OCR

The Program will include Desk Audits, followed by On-site Audits. The first stage of the Program will involve desk audits for covered entities, followed by desk audits for business associates, all of which will be completed by year end. After that, audits will be onsite and will examine a broader scope of requirements from the HIPAA Rules than desk audits. Some desk auditees may be subject to a subsequent onsite audit. The audits will examine compliance with specific requirements of the HIPAA Privacy, Security, or Breach Notification Rules. So, for example, OCR might want to look at your documented risk assessment, or your breach notification response plan. Auditees will be notified of the subject(s) of their audit in a document request letter, but OCR confirmed the audits will not cover compliance with state privacy laws.

Consider the audit process and timeline. Covered entities and business associates selected for a desk audit should expect to receive an email informing them of the selection and requesting documents and other data. Auditees will be able to submit documents on-line via a secure audit portal on OCR’s website. OCR expects that the documents and data will be provided within 10 business days of the request.

After submitting the documents and data, auditees will receive draft findings from OCR. Auditees will then have 10 business days to review and return written comments to the auditor. Auditees should expect to receive a final audit report within 30 business days.

Onsite audits will follow a similar process. The auditors will schedule an entrance conference to discuss the audit, which can be expected to take place over three to five days onsite, depending on the size of the entity. These will be more comprehensive and cover a wider range of requirements from the HIPAA Rules. Like the desk audit, entities will have 10 business days to review the draft findings and provide written comments, and they will be provided a final audit report.

Don’t want to respond? Entities that do not respond to OCR communications still may be selected for audit or be subject to a compliance review. As noted, the agency will use public means to find you.

We’ve been audited, now what? OCR states that the Program is primarily a “compliance improvement” activity, through which it can better understand compliance efforts, and determine what types of technical assistance should be developed and what types of corrective action would be most helpful. Of course, if OCR finds a serious compliance issue, it may initiate further investigation.

There may be publicity surrounding audits. OCR states that it will not post a list of audited entities or the findings of an individual audit identifying the audited entity. However, OCR reports that it will comply with Freedom of Information Act (FOIA) requests which could make the results of your audit public.

For now, covered entities and business associates should be on the look-out for communications from OCR and be prepared to respond. It goes without saying that they also should use this as an opportunity to assess their compliance and take steps now to address any gaps.

Yes! It is the law in more places and circumstances than you suspect.

Late last year, The Wall Street Journal reported on a survey by the Association of Corporate Counsel (“ACC”) that found “employee error” is the most common reason for a data breach. CSOOnline reported on Experian’s 2015 Second Annual Data Breach Industry Forecast, stating:

“Employees and negligence are the leading cause of security incidents but remain the least reported issue.”

According to Kroll, in 31% of the data breach cases it reviewed in 2014, the cause of the breach was a simple, non-malicious mistake. These incidents were not limited to electronic data – about one in four involved paper or other non-electronic data.

No business wants to send letters to individuals – employees or customers – informing them about a data breach. Businesses also do not want to have their proprietary and confidential business information, or that of their clients or customers, compromised. Unfortunately, no “silver bullet” exists to prevent important data from being accessed, used, disclosed or otherwise handled inappropriately – not even encryption. Companies must simply manage this risk through reasonable and appropriate safeguards. Because employees are a significant source of risk, steps must be taken to manage that risk, and one of those steps is training.

It is a mistake to believe that only businesses in certain industries like healthcare, financial services, retail, education and other heavily regulated sectors have obligations to train employees about data security. A growing body of law coupled with the vast amounts of data most businesses maintain should prompt all businesses to assess their data privacy and security risks, and implement appropriate awareness and training programs.

Data privacy and security training can take many forms. Here are some questions to ask when setting up your own program, which are briefly discussed in the report at the link above:

  • Who should design and implement the program?
  • Who should be trained?
  • Who should conduct the training?
  • What should the training cover?
  • How often should training be provided?
  • How should training be delivered?
  • Do we need to document the training?

No system is perfect, however, and even a good training program will not prevent data incidents from occurring. But the question the business will have to answer is not why the company lacked a system that prevented all inappropriate uses or disclosures. Instead, the question will be whether the business had safeguards that were compliant and reasonable under the circumstances.

In the face of seemingly daily news reports of company data breaches, and mounting legislative efforts at both the state and federal level to enact laws safeguarding personal information maintained by companies, employers should be asking whether to implement privacy policies addressing the protection of the personal information they maintain on their employees.

To date, there is no all-encompassing federal privacy law. Rather, there are several federal laws which touch upon an aspect of protecting personal or private information collected from individuals, such as the Children’s Online Privacy Protection Act (giving parents control over the information collected from their children online); the Federal Trade Commission Act (pursuant to which the FTC has sought enforcement against companies that failed to follow their own privacy policies relating to consumers); the Gramm-Leach-Bliley Act (requiring financial institutions, such as banks, to protect consumer financial information); the Health Insurance Portability and Accountability Act of 1996 (requiring covered entities to protect individually identifiable health information); and the Americans with Disabilities Act and Family and Medical Leave Act (requiring confidentiality of employee medical information obtained by the employer).

State legislatures have likewise taken a piecemeal approach to the problem, with some states mandating the protection of social security numbers, protecting credit card information, protecting consumer financial information, and securing personally identifiable information (usually aimed at preventing identity theft). Additionally, forty-seven (47) states now have laws addressing notification and other requirements when a data breach occurs. While only a handful of states explicitly require a written privacy policy (such as Connecticut, when collecting social security numbers, and Massachusetts, in connection with a written information security program), the overwhelming majority of states implicitly require privacy policies by requiring security of personal information (such as California, which now requires encryption) and notification when a breach of personal information has occurred. As such, where companies are required to notify affected individuals of a breach, they are implicitly required to protect the information to prevent such a breach. The first step in assembling that protective armor is to institute a privacy policy.

Employers maintain various types of personally identifiable information on their employees, including, but not limited to: names, dates of birth, social security numbers, addresses, telephone numbers, financial information (such as bank account numbers and credit/debit card numbers), email addresses and passwords, driver’s license, state-issued identification and passport numbers, health insurance numbers, biometric data, personally identifiable information on an employee’s spouse and/or children (most commonly contained in benefit enrollment forms), and any other information maintained about an individual that could be used to identify him/her or obtain access to an online account.

Employer privacy policies should at a minimum address: (1) the types of personal information (such as that listed above), whether in electronic or paper format, obtained and maintained regarding employees and their family members; (2) where the information is maintained/stored; (3) how the information is protected both while being maintained and also when being transferred from the employee to the employer, between the employer’s systems/departments, and outside of the employer’s organization (such as to a third-party vendor); (4) who has access to the information, including any outside vendors who perform personnel-related services for the employer; (5) the effective date of the policy; and (6) the individual within the organization responsible for compliance with the policy.

Additionally, employers should consider training their employees on the policy. Employees who handle private information in the course of their employment should be trained on the contents of the policy; the importance of maintaining the privacy of the information; the methods to be used to protect such information; limiting use and disclosure of the information to what their duties require; and what to do when a suspected breach of the information has occurred. The general employee population should also be trained on the contents of the policy; the importance of maintaining the privacy of the information; and what to do if the employee suspects or has knowledge that the information has been breached.

Recognizing the growing number of connected and interconnected devices, a bipartisan group of Senators recently introduced a bill which would convene a working group of Federal stakeholders to provide recommendations to Congress on how to appropriately plan for and encourage the proliferation of the Internet of Things (IoT).

The Developing Innovation and Growing the Internet of Things Act (DIGIT Act) would require the working group to examine the IoT’s current and future spectrum needs, the regulatory environment (including identification of sector-specific regulations, Federal grant practices, and budgetary or jurisdictional challenges), consumer protection, privacy and security, and the current use of the technology by Federal agencies and their preparedness to adopt it in the future.

While the working group would seek representatives from the Department of Transportation (DOT), the Federal Communications Commission (FCC), the Federal Trade Commission (FTC), the National Science Foundation, the Department of Commerce, and the Office of Science and Technology Policy, the working group would also be required to consult with non-Governmental stakeholders, including: (i) subject matter experts; (ii) information and communications technology manufacturers, suppliers, and vendors; (iii) small, medium, and large businesses; and (iv) consumer groups.  The findings and recommendations of the working group would be submitted to the appropriate committees of Congress within one year of the bill’s enactment.

Additionally, the DIGIT Act would direct the FCC, in consultation with the National Telecommunications and Information Administration, to conduct its own study to assess the current and future spectrum needs of the IoT. The FCC would similarly have one year after the enactment of the Act to submit a report including recommendations as to whether there is adequate licensed and unlicensed spectrum availability to support the growing IoT, what regulatory barriers may exist, and what the role of licensed and unlicensed spectrum is in the growth of the IoT.

According to the bill, estimates indicate more than 50,000,000,000 devices will be connected by the year 2020, with the IoT having the potential to generate trillions of dollars in economic opportunity.  The IoT will also allow businesses across the country to simplify logistics, cut costs, and pass savings on to consumers.  Believing that the United States leads the world in the development of technologies that support the IoT, and that this technology may be implemented by the U.S. Government to better deliver services to the public, the bill’s sponsors introduced the DIGIT Act following a previous Senate Resolution (Senate Resolution 110, 114th Congress) calling for a national strategy for the development of the IoT.

The Consumer Financial Protection Bureau (“CFPB”) gave the fintech online payment sector a “wake up call” with an enforcement action against a Des Moines-based digital payment start-up, Dwolla, Inc. (“Dwolla”).

The CFPB alleged that Dwolla misrepresented how it was protecting consumers’ data. Dwolla entered into a Consent Order to settle the CFPB’s charges, agreeing to pay a $100,000 penalty and to change and improve its security practices.  The CFPB never alleged that Dwolla had suffered a breach of any consumer data.  According to the CFPB, Dwolla “failed to employ reasonable and appropriate measures to protect data obtained from consumers from unauthorized access,” while telling consumers that the information was “securely encrypted and stored.”  Dwolla had over 650,000 customer accounts and was transferring as much as $5 million a day in 2015.

In a nutshell, the CFPB alleged that Dwolla’s representations regarding “securely encrypted and stored” data were inaccurate for a number of specific reasons, including:

  • Failing to implement appropriate data security policies and procedures until at least September 2012,
  • Failing to implement a written data security plan until at least October 2013,
  • Failing to conduct adequate risk assessments,
  • Failing to use encryption technology to properly safeguard consumer information,
  • Failing to provide adequate or mandatory employee training on data security, and
  • Failing to practice secure software development for consumer-facing applications.

In addition to the fine, Dwolla agreed to take preventative steps to address security concerns including:

  • Implementing a comprehensive data security plan,
  • Conducting data security risk assessments twice annually,
  • Designating a qualified individual to be accountable for data security issues,
  • Implementing appropriate data security policies and procedures,
  • Implementing an appropriate and precise method of customer identity authentication before any funds transfer,
  • Adopting specific procedures for the selection and retention of service providers capable of maintaining security practices,
  • Conducting regular and mandatory security data training, and
  • Obtaining an annual data security audit from an independent, third party acceptable to CFPB’s enforcement director.

The Consent Order will remain in effect for five (5) years.

This is the CFPB’s first enforcement action directly related to data security and appears to expand the CFPB’s jurisdiction into this arena. In the CFPB’s press release, Director Richard Cordray stated, “With data breaches becoming commonplace and more consumers using these online payment systems, the risk to consumers is growing.  It is crucial that companies put systems in place to protect this information and accurately inform consumers about their data security practices.”

This inaugural enforcement action by the CFPB appears to be a direct response to the growing concern about the lack of regulation of fintech digital payment firms. The enforcement action is also a welcome signal to traditional banks, which have argued that the fintech sector has not received nearly the level of oversight or enforcement that they have.  It appears regulators are attempting to find the right balance: avoiding a “heavy handed” approach that would squelch the technical advances that have made finance more convenient for consumers, while still ensuring an adequate level of consumer protection.

Whether Google Docs, Dropbox, or some other file sharing system, employees, especially millennials and other digital natives, are increasingly likely to set up personal cloud-based document sharing and storage accounts for work purposes, usually with well-meaning intentions, such as convenience and flexibility. Sometimes this is done with explicit company approval, sometimes it is done with tacit awareness by middle management, and often the employer is unaware of this activity.

When an employee quits or is terminated, however, that account, and the business documents it contains, may be locked away in an inaccessible bubble. Worse, the employee could access trade secrets and other information stored in the cloud to unfairly compete. For example, in 2012, the computer gaming company Zynga sued a former employee for uploading trade secrets onto the employee’s personal Dropbox account before leaving to work for a competitor. At a minimum, it may take time to recover the information or obtain the user name and password from the former employee.  Storage of proprietary information, especially personally identifiable information (PII) on personal cloud accounts also increases the risk of a company data breach if the information is hacked.  Finally, allowing business documents to be stored outside of the system can also create headaches when enacting a litigation hold or responding to electronic discovery requests in litigation. What should employers be doing now, to address this trend?

Interested in reading more? Please see the full post on the Non-Compete and Trade Secrets Report.

Earlier today, the European Commission (the Commission) issued a draft “adequacy decision” as well as the texts that will constitute the EU-U.S. Privacy Shield (the Privacy Shield). This includes the Privacy Shield Principles companies have to abide by, as well as written commitments by the U.S. Government on the enforcement of the arrangement, including assurance on the safeguards and limitations concerning access to data by public authorities.

An “adequacy decision” is a decision, adopted by the Commission, which establishes that a non-EU country ensures an adequate level of protection of personal data by reason of its domestic law and international commitments.  The practical effect of such a decision is that personal data can flow from the 28 EU Member States (and the three European Economic Area member countries: Norway, Liechtenstein and Iceland) to that third country, without any further restrictions.  Once adopted, the Commission’s adequacy finding establishes that the safeguards provided when data are transferred under the new Privacy Shield are equivalent to data protection standards in the EU.

In the Commission’s press release, Commissioner Jourová said:

Protecting personal data is my priority both inside the EU and internationally. The EU-U.S. Privacy Shield is a strong new framework, based on robust enforcement and monitoring, easier redress for individuals and, for the first time, written assurance from our U.S. partners on the limitations and safeguards regarding access to data by public authorities on national security grounds. Also, now that President Obama has signed the Judicial Redress Act granting EU citizens the right to enforce data protection rights in U.S. courts, we will shortly propose the signature of the EU-U.S. Umbrella Agreement ensuring safeguards for the transfer of data for law enforcement purposes. These strong safeguards enable Europe and America to restore trust in transatlantic data flows.

As we previously discussed, the Commission and the U.S. Department of Commerce reached agreement on February 2, 2016 for a new framework for transatlantic exchanges of personal data for commercial purposes, known as the Privacy Shield.  The Privacy Shield reflects the requirements set out by the European Court of Justice in its October 2015 ruling in Schrems which declared the old Safe Harbor framework invalid.

What are the main differences between the “Safe Harbor” arrangement and the EU-U.S. Privacy Shield?

According to the Commission, the Privacy Shield provides stronger obligations on companies in the U.S. to protect the personal data of Europeans. It requires stronger monitoring and enforcement by the U.S. Department of Commerce (DoC) and Federal Trade Commission (FTC), including through increased cooperation with European Data Protection Authorities (DPAs).

The Privacy Shield will include:

  • Strong obligations on companies and robust enforcement: the new arrangement will be transparent and contain effective supervision mechanisms to ensure that companies respect their obligations, including sanctions or exclusion if they do not comply. The new rules also include tightened conditions for onward transfers to other partners by the companies participating.
  • Clear safeguards and transparency obligations on U.S. government access: the U.S. government has given the EU written assurance that any access by public authorities for national security purposes will be subject to clear limitations, safeguards and oversight mechanisms, preventing generalized access.  The U.S. will also establish a redress possibility in the area of national intelligence for Europeans through an Ombudsperson mechanism within the Department of State, who will be independent from national security services. The Ombudsperson will follow up on complaints and enquiries by individuals and inform them whether the relevant laws have been complied with. These written commitments will be published in the U.S. Federal Register.
  • Effective protection of EU citizens’ rights with several redress possibilities: Complaints have to be resolved by companies within 45 days. A free of charge Alternative Dispute Resolution solution will be available. EU citizens can also go to their national Data Protection Authorities, who will work with the DoC and FTC to ensure that unresolved complaints by EU citizens are investigated and resolved. If a case is not resolved by any of the other means, as a last resort there will be an enforceable arbitration mechanism. Moreover, companies can commit to comply with advice from European DPAs. This is obligatory for companies handling human resource data.
  • Annual joint review mechanism: a joint annual review will monitor the functioning of the Privacy Shield, including the commitments and assurance as regards access to data for law enforcement and national security purposes.

How will the Privacy Shield work?

U.S. companies will register to be on the Privacy Shield List and self-certify that they meet the requirements.  This procedure must be repeated each year. The U.S. Department of Commerce will monitor and actively verify that companies’ privacy policies are in line with the relevant Privacy Shield principles and are readily available. The U.S. will maintain an updated list of current Privacy Shield members and remove companies that have left the arrangement. The DoC will ensure that companies that are no longer members of the Privacy Shield continue to apply its principles to personal data received while they were members, for as long as they retain such data.

What’s Next?

A committee composed of representatives of the Member States will be consulted and the EU Data Protection Authorities (Article 29 Working Party) will give their opinion, before a final decision is issued. In the meantime, the U.S. side will make the necessary preparations to put in place the new framework, monitoring mechanisms, and the new Ombudsperson mechanism.

The Commission has encouraged companies to begin their preparations so as to be in a position to join the Privacy Shield as soon as possible after it is in place following the adoption of the Commission decision.

The Privacy Shield requires action from many actors:

  • U.S. companies must fulfill their obligations under the framework in the full knowledge that it will be strictly enforced and they will be sanctioned if they are non-compliant.  Specifically, the Privacy Shield requires commitment to the following privacy principles: 1) Notice, 2) Choice, 3) Security, 4) Data Integrity and Purpose Limitation, 5) Access, 6) Accountability for Onward Transfer, and 7) Recourse, Enforcement and Liability. The Commission also encouraged companies to opt for EU DPAs as their chosen avenue to resolve complaints under the Privacy Shield and to publish transparency reports on national security and law enforcement access requests concerning EU data they receive.
  • U.S. authorities are entrusted with overseeing and enforcing the framework, respecting the limitations and safeguards as far as access to data for law enforcement and national security purposes is concerned, and responding in a timely and meaningful manner to complaints by EU individuals about the possible misuse of their personal data.
  • EU DPAs play an important role in ensuring that individuals can effectively exercise their rights under the Privacy Shield, including by channeling their complaints to the appropriate U.S. authorities, triggering the Ombudsperson mechanism, assisting complainants in bringing their case to the Privacy Shield Panel, and exercising oversight over human resources data transfers.
  • The Commission is responsible for making a finding of adequacy and reviewing it on a regular basis.

For additional information, please visit the Commission’s page dedicated to the Privacy Shield.

We will continue to update the status of the Privacy Shield as we await the final decision.