The California Privacy Protection Agency (CPPA) issued its first enforcement advisory concerning the California Consumer Privacy Act (CCPA). In Enforcement Advisory No. 2024-01, the CPPA tackles a foundational principle – data minimization. Much of the attention surrounding the CCPA seems to focus on website privacy policies, notices at collection, and consumer rights requests. With its inaugural advisory directed at data minimization, the CPPA may be reminding covered businesses, service providers, and others that CCPA compliance requires a deeper review of an organization’s practices concerning the collection, use, retention, and sharing of personal information.

First, a word on CPPA “Enforcement Advisories.” As this is the first advisory of its kind under the CCPA, we thought it would make sense to convey what the agency noted about these advisories:

Enforcement Advisories address select provisions of the California Consumer Privacy Act and its implementing regulations. Advisories do not cover all potentially applicable laws or enforcement circumstances; the Enforcement Division will make case-by-case enforcement determinations. Advisories do not implement, interpret, or make specific the law enforced or administered by the California Privacy Protection Agency, establish substantive policy or rights, constitute legal advice, or reflect the views of the Agency’s Board.

Based on this language, while it appears that an enforcement advisory will not provide a compliance safe harbor, there are valuable insights to be gained concerning the potential application of the CCPA.

For any organization concerned about data risk, data minimization is certainly one way to mitigate that risk. Most organizations work diligently to design and build information systems that prevent unauthorized access to those systems. But when unauthorized access does happen – and it does – the data in the system is compromised. If there is less of that data in the compromised system, the risk has been mitigated, even if not eliminated.

The concept of data minimization did not originate with the CCPA. For example, under HIPAA, covered entities and business associates must comply with the minimum necessary rule. According to the CPPA:

Data minimization serves important functions. For example, data minimization reduces the risk that unintended persons or entities will access personal information, such as through data breaches. Data minimization likewise supports good data governance, including through potentially faster responses to consumers’ requests to exercise their CCPA rights. Businesses reduce their exposure to these risks and improve their data governance by periodically assessing their collection, use, retention, and sharing of personal information from the perspective of data minimization.  

The process of achieving data minimization can be challenging as it does not lend itself to a one-size-fits-all approach. Under the CCPA, businesses must apply the data minimization principle “to each purpose for which they collect, use, retain, and share consumers’ personal information—including information that businesses collect when processing consumers’ CCPA requests.” As noted in the Enforcement Advisory, there are many obligations under the CCPA for which data minimization must be considered and applied, such as requests to opt-out of the sale or sharing of personal information, or requests to limit the use and disclosure of sensitive personal information. Of course, even the collection of personal information by a business must be “reasonably necessary and proportionate to achieve the purposes for which the personal information was collected or processed.”

Applying this foundational principle, according to the Enforcement Advisory, essentially amounts to asking questions about the particular collection, use, retention, and sharing of personal information. In one example, the Advisory discusses how to apply data minimization to the process of verifying a consumer’s identity to process a request to delete personal information. It offers the following questions as examples of what a business might ask itself:

  • What is the minimum personal information that is necessary to achieve this purpose (i.e., identity verification)?
  • We already have certain personal information from this consumer. Do we need to ask for more personal information than we already have?
  • What are the possible negative impacts posed if we collect or use the personal information in this manner?
  • Are there additional safeguards we could put in place to address the possible negative impacts?
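As a purely illustrative sketch (not legal guidance), the first question above – limiting collection to the minimum fields necessary for a given purpose – can be operationalized in code with a per-purpose allow-list. The purposes and field names below are hypothetical examples, not anything prescribed by the CCPA or the Advisory:

```python
# Illustrative only: a per-purpose allow-list that drops any fields
# not needed for the stated purpose before they are stored or used.
# Purposes and field names here are hypothetical.

MINIMUM_FIELDS = {
    "identity_verification": {"name", "email"},
    "deletion_request": {"name", "email", "request_id"},
}

def minimize(purpose: str, submitted: dict) -> dict:
    """Keep only the fields reasonably necessary for the stated purpose."""
    allowed = MINIMUM_FIELDS.get(purpose, set())
    return {k: v for k, v in submitted.items() if k in allowed}

# A consumer submits more than is needed; the extra field is discarded.
record = {"name": "Pat", "email": "pat@example.com", "ssn": "000-00-0000"}
print(minimize("identity_verification", record))
```

The point of the sketch is the Advisory's first question made concrete: any data element not on the allow-list for a purpose simply is never retained for that purpose.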

Considering the CCPA’s rules for verification and the needs of the business for that personal information, the business should make decisions for the verification process with minimization in mind. Further, minimization is something that should be periodically assessed.

The need to apply the principle of data minimization makes clear that CCPA compliance is more than posting a privacy policy on the business’s website. It requires, among other things, that businesses think carefully about what categories of personal information they are collecting, the sensitivity of those categories of personal information, the purpose(s) of that collection, and whether the information collected is minimized while still serving the applicable purposes.

On January 16, 2024, New Jersey’s Governor signed Senate Bill (SB) 332, which establishes a consumer data privacy law for the state. New Jersey becomes the 13th state to pass a comprehensive consumer data privacy law. The law takes effect one year after its enactment, on January 15, 2025.

To whom does the law apply?

The law applies to controllers (defined as an individual or legal entity that, alone or jointly with others, determines the purpose and means of processing personal data) that do business in New Jersey or produce products or services targeted at New Jersey residents and that, during a calendar year, either:

  • Control or process the personal data of at least 100,000 consumers, excluding personal data processed solely to complete a payment transaction; or
  • Control or process the personal data of at least 25,000 consumers and the controller derives revenue, or receives a discount on the price of any goods or services, from the sale of personal data.

Who is protected by the law?

Under the law, a covered consumer is defined as a person who is a resident of New Jersey acting only in an individual or household context. As in several other states (but unlike California), a consumer does not include a person acting in a commercial or employment context.

What data is protected by the law?

The law will protect data that qualifies as “personal data,” which is information that is linked or reasonably linkable to an identified or identifiable person. It does not include de-identified data or publicly available information.

What are the rights of consumers?

Under the law, a consumer has the following rights:

  • To confirm whether a controller processes the consumer’s personal data and access such personal data.
  • To correct inaccuracies in the consumer’s personal data.
  • To delete personal data concerning the consumer.
  • To obtain a copy of the consumer’s data.
  • To opt out of the processing of personal data for the purposes of targeted advertising, the sale of personal data, or profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.

What obligations do businesses have?

A controller shall provide a consumer with a reasonably accessible, clear, and meaningful privacy notice that includes, but need not be limited to:

  • The categories of the personal data that the controller processes.
  • The purpose of processing personal data.
  • The categories of all third parties to which the controller may disclose a consumer’s personal data.
  • The categories of personal data that the controller shares with third parties, if any.
  • How consumers may exercise their consumer rights.
  • The process by which the controller notifies consumers of material changes to the notification.
  • An active e-mail address or other online mechanism that consumers may use to contact the controller.

If the controller sells personal data to third parties or processes personal data for purposes of targeted advertising or profiling of a consumer, the controller shall clearly and conspicuously disclose such sale or processing, as well as the manner in which a consumer may opt out of the sale or processing.

A controller must respond to a verified consumer rights request from a consumer within 45 days of the controller’s receipt of the request. The controller may extend the response period by 45 additional days when reasonably necessary considering the complexity and number of the consumer’s requests.

How is the law enforced?

The New Jersey Attorney General has sole and exclusive authority to enforce violations of the statute.

If you have questions about New Jersey’s privacy law or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

August 24, 2022, marked a milestone for the California Consumer Privacy Act (CCPA): the California Attorney General announced the first enforcement action and settlement under the law, against beauty retailer Sephora.

Since July 2022, the California Attorney General’s (AG) office has conducted an investigative sweep of online retailers to check compliance with the CCPA and has sent out over 100 notices of alleged CCPA violations. The notices provided a 30-day period for businesses to correct alleged violations before an enforcement measure is taken. Attorney General Rob Bonta stated that after the notices, the “vast majority” of businesses changed their practices to comply with the CCPA.

The State alleged that Sephora violated the CCPA by failing to disclose to consumers that it was selling their personal information, by failing to process user requests to opt out of sale via user-enabled global privacy controls, and by failing to cure these violations within the 30-day notice period. Specifically, the State alleged that Sephora failed to notify consumers that it had arrangements with third parties (such as market research firms) under which Sephora allowed them to install tracking software on its website and app so that the third parties could monitor consumers as they shopped. Under the terms of the settlement, “sale” included “sale using online tracking technology,” which was broadly defined to cover a business disclosing or making available consumers’ personal information to third parties through online tracking technologies such as pixels, web beacons, software development kits, third-party libraries, and cookies in exchange for monetary or other valuable consideration, including personal information or other information such as analytics or free or discounted services. In other words, “sale” is broader than simply selling information to a third party in exchange for money.

The State considered Sephora’s arrangements with these third parties a “sale” of consumer information under the CCPA. In short, the State alleged that: “Sephora did not tell consumers that it sold their personal information; instead, Sephora did the opposite, telling California consumers on its website that ‘we do not sell personal information.’”

The State and Sephora reached a settlement that includes $1.2 million in penalties as well as injunctive terms requiring Sephora to:

  • Allow consumers to opt out of the sale of personal information, including via the Global Privacy Control
  • Clarify its online disclosures and privacy policy
  • Conform its service provider agreements to the CCPA
  • Provide reports to the Attorney General relating to its sale of personal information
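The Global Privacy Control (GPC) referenced in the settlement terms is a browser-based signal transmitted to websites as the `Sec-GPC: 1` HTTP request header. A minimal, framework-agnostic sketch of detecting the signal server-side – assuming request headers are available as a simple dictionary, which is a simplification of any real web framework – might look like:

```python
# Minimal sketch: treat an incoming "Sec-GPC: 1" request header as the
# consumer's opt-out-of-sale signal. The headers dict and the opt-out
# handling are simplified assumptions, not a production implementation.

def gpc_opt_out_requested(headers: dict) -> bool:
    """Return True when the browser sent the Global Privacy Control signal."""
    # HTTP header names are case-insensitive; normalize before checking.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"

if gpc_opt_out_requested({"Sec-GPC": "1", "User-Agent": "ExampleBrowser"}):
    # In a real system, the business would record the opt-out and suppress
    # third-party tracking technologies (pixels, cookies, SDKs) in response.
    print("opt-out honored")
```

The Sephora matter turned in part on the allegation that this signal was received but not processed; the sketch shows only the detection step, not the downstream suppression of tracking technologies.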

On January 1, 2023, the California Privacy Rights Act (CPRA) takes effect and amends the CCPA to eliminate the cure period and instead only allow the California Privacy Protection Agency (CPPA) discretion to provide time to cure.

In light of the State’s push toward enforcement and the rapidly approaching effective date of the CPRA, businesses must review their compliance efforts with the CCPA and CPRA. If you need assistance with compliance contact a Jackson Lewis attorney or the CCPA Team.

On June 8, 2022, the California Privacy Protection Agency (CPPA) Board will meet to discuss and take potential action regarding a draft of its proposed regulations. The June 8th public meeting includes an agenda item where the CPPA Board will consider “possible action regarding proposed regulations … including possible notice of proposed action.”

In advance of the meeting, the CPPA posted on its website, for discussion purposes, draft redline regulations revising the current regulations released by the California Attorney General (recently renumbered by the CPPA). The quietly released 66-page draft regulations are intended to implement and interpret the California Consumer Privacy Act (CCPA) as amended by the California Privacy Rights Act (CPRA). While the draft redline regulations address topics such as requiring “easy to understand” language for consumer CCPA requests, the draft does not address all 22 of the regulatory topics required under the CPRA. For example, the draft does not cover the opt-in/opt-out of automated decision-making technology.

Here are some of the highlights of the proposed draft regulations:

  • Adds a definition of “disproportionate effort” within the context of responding to consumer requests. For example, disproportionate effort might be involved when the personal information that is the subject of the request is not in a searchable or readily accessible format, is maintained only for legal or compliance purposes, is not sold or used for any commercial purpose, and would not impact the consumer in any material manner;
  • Adds a new section on restrictions on the collection and use of personal information, with illustrative examples. One example is a business that offers a mobile flashlight app. That business would need the consumer’s explicit consent to collect the consumer’s geolocation information because that collection is incompatible with the context in which the personal information is collected in connection with the app;
  • Adds requirements for disclosures and communications to consumers. This includes making sure communications are reasonably accessible to consumers with disabilities whether online or offline;
  • Adds requirements for methods for submitting CCPA requests and obtaining consumer consent. A key principle here is that the process for consumers to select a more privacy-protective option should not be more difficult or time-consuming than the process to select a less protective option. Symmetry is the goal; and
  • Makes substantial revisions to the requirements for the privacy policy that a business is required to provide to consumers detailing the business’s online and offline practices regarding collection, use, sale, sharing, and retention of personal information. This includes new provisions concerning the right to limit the use and disclosure of sensitive personal information and the right to correct personal information.

To date, the Agency has not issued a Notice of Proposed Rulemaking to start the formal rulemaking process, and the timeframe associated with the draft regulations is still unclear – especially given that the CPRA requires the CPPA to finalize regulations by July 1, 2022. It is expected that the June 8th meeting will provide details on the process.

Jackson Lewis will continue to track information related to privacy regulations and related issues. For additional information on the CPRA, please reach out to a member of our Privacy, Data, and Cybersecurity practice group.

The U.S. Food and Drug Administration (FDA) named University of Michigan Associate Professor Kevin Fu Acting Director of Medical Device Security in its Center for Devices and Radiological Health. This is a newly created 12-month post in which Fu will “work to bridge the gap between medicine and computer science and help manufacturers protect medical devices from digital security threats.” Fu stated that his primary activities will include:

  • Envisioning a strategic roadmap for the future state of medical device cybersecurity.
  • Assessing opportunities to fully integrate cybersecurity principles through the lens of the center’s total product life cycle model.
  • Training and mentoring CDRH staff for premarket and postmarket technical review of medical device cybersecurity.
  • Engaging multiple stakeholders across the medical device and cybersecurity ecosystems.
  • Fostering medtech cybersecurity collaborations across the federal government, including the National Institute of Standards and Technology, National Science Foundation, National Security Agency, Department of Health and Human Services, National Telecommunications and Information Administration, Cybersecurity and Infrastructure Security Agency, Department of Veterans Affairs, Department of Defense, Federal Trade Commission and others.

Fu also noted that “the FDA is working closely with federal partners — HHS and CISA — on sector incident and emergency response. The FDA’s 2021 efforts for the cybersecurity focal point program will further increase the review consistency of premarket submissions.”

The creation of this new post is the latest in the FDA’s ongoing efforts to promote cybersecurity in medical devices. As we previously reported, the FDA has published draft guidance for medical device manufacturers outlining steps that can be taken in the premarket process to better protect medical devices from cybersecurity threats. We expect this focus to continue especially as we see a rise in ransomware attacks and other hacking activity.

The FDA’s increasing focus on cybersecurity is yet another reason relevant employers and medical device manufacturers should continue to assess and address potential data security risks.

On July 21, 2020, the New York Department of Financial Services (“DFS”) filed its first enforcement action under New York’s Cybersecurity Requirements for Financial Services Companies, 23 N.Y.C.R.R. Part 500 (“Reg 500”).  Reg 500, which took effect in March 2017, imposes wide-ranging and rigorous requirements on subject organizations and their service providers, which are summarized here.

According to the Statement of Charges, First American Title Insurance Co. (“First American”) failed to remediate a vulnerability on its public-facing website, thereby exposing millions of documents containing sensitive consumer information – including bank account numbers, mortgage and tax records, social security numbers, wire transaction receipts, and drivers’ license images – to unauthorized access.  More specifically, DFS claims that First American failed to:

  • Conduct a security review and risk assessment of the vulnerability – steps that were mandated by the Company’s own cybersecurity policies;
  • Properly classify the level of risk associated with the website vulnerability;
  • Adequately investigate that vulnerability (the Company reviewed only a tiny fraction of the impacted documents and, as a result, severely underestimated the seriousness of the vulnerability); and
  • Heed the advice of the Company’s internal cybersecurity team, which advised that further investigatory actions were needed.

The foregoing failures, DFS contends, violated six provisions of Reg 500.  Specifically:

  1. 23 NYCRR 500.02: The requirement to maintain a cybersecurity program that is designed to protect the confidentiality, integrity and availability of the covered entity’s information systems, and which is based on the covered entity’s risk assessment.
  2. 23 NYCRR 500.03: The requirement to maintain a written policy or policies, approved by senior management, setting forth the covered entity’s policies and procedures for the protection of its information systems and the nonpublic personal information (“NPI”) stored on those systems.
  3. 23 NYCRR 500.07: The requirement to limit user access privileges to information systems that provide access to NPI and periodically review such access privileges.
  4. 23 NYCRR 500.09: The requirement to conduct a periodic risk assessment of the covered entity’s information systems to inform the design of its cybersecurity program.
  5. 23 NYCRR 500.14(b): The requirement to provide regular cybersecurity awareness training for all personnel as part of the covered entity’s cybersecurity program, and to update such training to reflect risks identified by the covered entity in its risk assessment.
  6. 23 NYCRR 500.15: The requirement to implement controls, including encryption, to protect NPI held or transmitted by the covered entity both in transit over external networks and at rest.

The case against First American is scheduled to proceed to an administrative hearing on October 26, 2020.  DFS is seeking civil penalties, along with an order requiring the Company to remedy its violations of Reg 500.  Each violation of Reg 500 carries a potential penalty of up to $1,000 and DFS is taking the position that each instance where NPI was subject to unauthorized access constituted a separate violation.  DFS alleges that hundreds of millions of documents were exposed to potential unauthorized access as a result of First American’s alleged violations and that, according to the Company’s own analysis, more than 350,000 documents were accessed without authorization as a result of the Company’s website vulnerability.  If DFS’s position on what constitutes a single violation prevails, First American could be exposed to hundreds of millions of dollars in civil penalties.

The case against First American may signal that DFS, after giving covered organizations several years to get their compliance programs in order, now intends to aggressively enforce Reg 500’s requirements.  To prepare for this eventuality, subject organizations need to closely scrutinize their compliance programs – including their policies and procedures for conducting security reviews and risk assessments, and for investigating and responding to security incidents – and take proactive steps to plug any gaps in those programs.  We have prepared several articles, blog posts, and webinars to help organizations determine what Reg 500 requires and to assess their compliance with those requirements.

With first responders on the front lines of helping to fight the coronavirus, sharing information about potential exposure to COVID-19 is critical to protecting them and preventing further spread. In these situations, the information shared is most often “protected health information” (PHI) under the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule. To help clarify when PHI can be shared in these circumstances, the Office for Civil Rights (OCR) at the U.S. Department of Health and Human Services (HHS) issued guidance relating to sharing PHI about individuals who have been infected with or exposed to COVID-19 with law enforcement, paramedics, other first responders, and public health authorities.

The idea is to make clear when PHI can be given to first responders and others so they can take extra precautions or use personal protective equipment (PPE), and to remind covered entities to follow the “minimum necessary” rule in the process.

According to the guidance, the HIPAA Privacy Rule permits a covered entity to disclose PHI of an individual who has been infected with, or exposed to, COVID-19, with law enforcement, paramedics, other first responders, and public health authorities without the individual’s HIPAA authorization, in certain circumstances, including the following:

  • To provide treatment. For example, a nurse in a skilled nursing facility can alert emergency medical transport personnel that the individual they are transporting to a hospital’s emergency department has COVID-19.
  • When required by law. An example is a hospital making a disclosure of positive COVID status pursuant to a state law requiring the reporting of confirmed or suspected cases of infectious disease to public health officials.
  • When first responders may be at risk for an infection. Covered entities authorized by law to notify persons as necessary in the conduct of a public health intervention or investigation may inform first responders who may be at risk of infection. For example, HIPAA permits a covered county health department, in accordance with a state law, to disclose PHI to a police officer or other person who may come into contact with a person who tested positive for COVID-19, for purposes of preventing or controlling the spread of COVID-19. Similarly, a covered entity, such as a hospital, may provide a list of the names and addresses of all individuals it knows to have tested positive, or received treatment, for COVID-19 to an EMS dispatch for use on a per-call basis. The EMS dispatch would be allowed to use information on the list to inform EMS personnel who are responding to any particular emergency call so that they can take extra precautions or use PPE.
  • When the disclosure of PHI to first responders is necessary to prevent or lessen a serious and imminent threat to the health and safety of a person or the public. For example, a covered entity may, consistent with applicable law and standards of ethical conduct, disclose PHI about individuals who have tested positive for COVID-19 to fire department personnel, child welfare workers, mental health crisis services personnel, or others charged with protecting the health or safety of the public if the covered entity believes in good faith that the disclosure of the information is necessary to prevent or minimize the threat of imminent exposure to such personnel in the discharge of their duties.

These are just some of the examples in which PHI about an individual’s COVID-19 infection can be shared with first responders. The primary authorities for these exceptions to the general rule of nondisclosure without an authorization are the provisions addressing treatment disclosures (45 CFR 164.502(a)(1)(ii)), legal requirements (45 CFR 164.502(a)(2)), and other purposes (45 CFR 164.512). Note, however, that unless the disclosure is required by law, for treatment purposes, or for certain other purposes, the covered entity must make reasonable efforts to limit the information used or disclosed to that which is the “minimum necessary” to accomplish the purpose for the disclosure.

Remember also that state laws may be more stringent than HIPAA concerning uses and disclosures of PHI. Thus, covered entities should consult other applicable laws (e.g., state and local statutes and regulations) in their jurisdiction prior to using or making disclosures of individuals’ PHI, as such laws may place further restrictions on disclosures that would otherwise be permitted by HIPAA.

Tax season will soon be upon us, and many not-so-eager taxpayers will share sensitive personal information about themselves, their dependents, their employees, and others with their trusted professional tax preparers for processing. What many of these preparers might not realize is that federal law and a growing number of state laws obligate them to have safeguards in place to protect sensitive taxpayer data. This can be overwhelming, especially considering tax preparers are already tasked with absorbing annual federal, state, and local tax law changes, in addition to running their businesses. We hope this post provides a helpful summary of best practices and resources.

Legal Mandates.

  • Federal. The Financial Services Modernization Act of 1999 (a.k.a. Gramm-Leach-Bliley Act) authorized the Federal Trade Commission to set information safeguard requirements for various entities, including professional tax return preparers. The FTC’s Safeguards Rule requires tax return preparers to implement security plans, which should include:
    • designating one or more employees to coordinate an information security program;
    • identifying and assessing risks to client data, along with the effectiveness of current safeguards for controlling these risks;
    • maintaining a written information security program, which is regularly monitored and tested;
    • using vendors that also have appropriate safeguards, and contractually requiring them to maintain those safeguards; and
    • keeping the program up to date to reflect changes in business or operations, or the results of security testing and monitoring.
  • States. A growing number of states have enacted laws and/or issued regulations mandating businesses adopt reasonable safeguards to protect personal information. Small and mid-sized businesses typically are not excluded from these mandates. Some of these states include: California, Colorado, Florida, Illinois, Massachusetts, New York, and Oregon.

Practical next steps.

The good news is that businesses generally are permitted to shape their programs according to their size and complexity, the nature and scope of their activities, and the sensitivity of the customer information they handle. However, a small five-person tax preparation firm should not read this to mean it would be sufficient to obtain a template privacy policy from the Internet, put it on a shelf, and call it a day. Others have tried this.

The Internal Revenue Service (IRS) has issued guidance to help preparers get up to speed. The IRS’ “Taxes-Security-Together” Checklist lists six basic protections that everyone, especially tax professionals handling sensitive data, should deploy. These include:

  1. Anti-virus software
  2. Firewalls
  3. Two-factor authentication
  4. Backup software/services
  5. Drive encryption
  6. Virtual Private Network (VPN)

These six protections alone likely are not enough; other controls include:

  • Train yourself and staff to spot and avoid phishing attacks.
  • Maintain strong passwords (NOT “password” or “123456”!) – generally eight or more characters mixing letters, numbers, and special characters; consider using passphrases.
  • Encrypt all sensitive files/emails.
  • Back up sensitive data to a secure external source that is NOT connected full-time to a network (if you have been hit with a ransomware attack, you will understand why this is important).
  • Double check return information, especially direct deposit information, prior to e-filing.
  • Only collect, use, retain, and disclose the minimum necessary information needed for the task.
  • Because no set of safeguards is perfect, have an incident response plan and practice it.
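By way of illustration only, the password guidance above (length plus a mix of character classes, and avoiding known-bad values) can be sketched as a simple check. This is a rough sketch of the bullet, not a complete password policy; real deployments should favor passphrases and a password manager rather than composition rules alone:

```python
import string

# Tiny illustrative deny-list; real systems check against much larger sets.
COMMON_PASSWORDS = {"password", "123456", "qwerty"}

def meets_guidance(pw: str) -> bool:
    """Rough check mirroring the bullet above: not a known-bad value,
    eight or more characters, with letters, digits, and a special character."""
    if pw.lower() in COMMON_PASSWORDS or len(pw) < 8:
        return False
    has_alpha = any(c.isalpha() for c in pw)
    has_digit = any(c.isdigit() for c in pw)
    has_special = any(c in string.punctuation for c in pw)
    return has_alpha and has_digit and has_special

print(meets_guidance("password"))        # a known-bad choice
print(meets_guidance("Tax-Season2024"))  # length + mixed character classes
```

A passphrase of several unrelated words would also pass a length-focused check, which is why the bullet suggests phrases as an alternative to complex short strings.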

Check out IRS Publication 4557 Safeguarding Taxpayer Data for more information on these and other controls, and a helpful checklist from the FTC.

Yes, professional tax preparers that fail to take these steps can expose themselves to an FTC investigation, and a violation of their obligations as Authorized IRS e-file Providers under IRS Revenue Procedure 2007-40. But the impact on your business from a breach of client data can be far worse. The key is to get started and do something.

Co-Author: Thomas Buchan

As reported in our blog post from November 6, 2017, the New York State Attorney General announced the release of the proposed SHIELD Act in early November 2017. This new legislation (we have some links for you below) would make significant changes to New York’s cybersecurity provisions (primarily under General Business Law §899-aa and its sequential provisions), including the following:

  • Expanding the coverage of New York’s data security protections to include any business that holds sensitive data of New York residents.
  • Imposing obligations on all such businesses to have “reasonable” safeguards in place to protect that sensitive data (though small businesses would have more flexible standards).
  • Changing the notification obligations under the law so that they would apply not only to the acquisition of sensitive information, but also to access to that sensitive information.
  • Increasing civil penalties in actions brought by the Attorney General’s office.

This much-heralded proposed legislation was a response to several large data breaches and ransomware attacks impacting New York residents, and Attorney General Schneiderman often referenced it as a critical measure to increase their data security.

So, what’s the status of the SHIELD Act? First, we note that New York has been working on changing GBL §899-aa and its sequential provisions for a while. Legislation amending the law (but with different provisions) was proposed by the New York State Department of Law in the 2015 legislative session, but not passed (its last status was in Assembly and Senate committees). The SHIELD Act legislation was proposed by the Attorney General in late October 2017, with the Assembly version sponsored by then-Assemblyman Kavanagh. Subsequently, he became Senator Kavanagh, and so the Assembly version of the legislation needed a new sponsor; the bill was picked up by Assemblyman Titone (with nearly identical provisions, save for an amendment providing a “rolling” effective date based on when the legislation was passed). The (slightly) amended Assembly bill remains in the Assembly Consumer Protection Committee. The Senate version of the bill, sponsored by Senator Carlucci, was introduced to the Senate Consumer Protection Committee and subsequently sent to the Senate Finance Committee. As of this writing, the Assembly and Senate SHIELD Act bills have yet to move out of committee to the floor for a vote, and, therefore, the SHIELD Act is not yet a law. Jackson Lewis’ Government Relations team continues to monitor this legislation.

New York continues to focus on cyber security, however. Some examples of other laws and regulations in process are:

  • The Department of Financial Services proposed regulations impacting credit reporting agencies: These proposed regulations would impose registration requirements and detail prohibited practices for credit reporting agencies – and would require credit reporting agencies to comply with DFS’ (first-of-their-kind) cybersecurity regulations for financial institutions.
  • The New York Department of State emergency regulations on identity theft prevention and mitigation: These regulations were also implemented on an emergency basis, and would place requirements on consumer credit reporting agencies with respect to marketing identity theft prevention products. They would also empower the Division of Consumer Protection to obtain information from consumer credit reporting agencies, and inform and educate consumers with respect to protecting personal information, preventing identity theft and addressing identity theft when it does occur. These emergency regulations are still active, and expire on May 5, 2018.
  • Proposed legislation relating to the New York State Cyber Security Advisory Board, a New York State Cyber Security Action Plan and Periodic Cyber Security Reports: The first bill would establish a cyber security advisory board to be operated within the New York State Department of Homeland Security and Emergency Services (DHSES), to advise the Governor and Legislature on cyber security development and recommend protective measures. The second bill would have several agencies working together to develop a cyber security action plan for New York. The final bill would have DHSES work with the Office of Information Technology Services, the New York State Police and the President of the Center for Internet Security (a private, not-for-profit organization) to produce a comprehensive report on all cyber security services in New York State every five years. All three bills remain in committee.

In case you would like some more information, below are links to some of our previous blog posts dealing with cyber regulation in New York, and a link to our archived webinar on DFS regulation compliance (helpful to keep up with the continuing obligations under the regulations):

Our thanks to our Government Relations Practice Group colleagues for their assistance in preparing this blog post, and for keeping us up-to-date on these legislative and regulatory initiatives.

If you need help meeting privacy requirements, are looking for assistance with compliance, policies and procedures or training, or if you have any questions, please let the Jackson Lewis Privacy, e-Communications and Data Security Practice Group know.


The European Commission recently issued an overall positive review in its first annual report on the EU-U.S. Privacy Shield (“Privacy Shield”), after evaluating the framework in its joint review with the U.S. last month.

The Privacy Shield took effect in August 2016, replacing the EU-U.S. Safe Harbor framework that was invalidated by the Court of Justice of the European Union. Over 2,500 U.S. companies, and tens of thousands of EU companies, rely on the Privacy Shield to transfer data between the EU and the U.S.

First Joint Review

In September, EU Justice Commissioner Věra Jourová and U.S. Secretary of Commerce Wilbur Ross launched the first annual joint review of the Privacy Shield, a built-in requirement of the agreement. Commissioner Jourová anticipated “some proposals for improvement” but did not expect that the review “will reopen negotiations again.” On the U.S. end, the White House firmly believed that the review would “demonstrate the strength of the American promise to protect the personal data of citizens on both sides of the Atlantic,” stated White House press secretary Sarah Sanders.

The review examined all aspects of the Privacy Shield administration and enforcement, including commercial and national security related matters, broader US legal developments and communication between E.U. and U.S. authorities.

EU Commission Report

After evaluating the results of the joint review, the EU Commission published its first annual report on the functioning of the EU-U.S. Privacy Shield (“the Report”), confirming that the Privacy Shield framework provides an adequate level of protection for personal information transferred from the EU to the U.S. The Report gives a green light to companies that rely on the Privacy Shield for their transatlantic data flows. Nonetheless, the Report expressed concern over U.S. surveillance practices and Privacy Shield oversight.

The EU Commission provided ten recommendations in the Report to help improve implementation of the Privacy Shield framework. Key recommendations include:

  • Greater cooperation between all enforcement entities – U.S. Department of Commerce, Federal Trade Commission and EU Data Protection Authorities.
  • More proactive and regular monitoring of corporate compliance by the U.S. Department of Commerce (“DoC”), including requiring self-certified companies to respond to compliance review questionnaires or file annual compliance reports with the DoC.
  • Reform/Review of US surveillance practices – in particular those associated with the Foreign Intelligence Surveillance Act and privacy protections for non-US citizens.
  • Immediate appointment of the Privacy Shield Ombudsperson, and filling key positions in the Privacy and Civil Liberties Oversight Board.

Commenting on the Review, EU Justice Commissioner Jourová stated that, “Transatlantic data transfers are essential for our economy, but the fundamental right to data protection must be ensured also when personal data leaves the EU. Our first review shows that the Privacy Shield works well, but there is some room for improving its implementation. The Privacy Shield is not a document lying in a drawer. It’s a living arrangement that both the EU and U.S. must actively monitor to ensure we keep guard over our high data protection standards.”

For more information on Privacy Shield compliance requirements, and for help assessing whether the Privacy Shield is the proper mechanism for your company to use when transferring data from the EU to the U.S., we have prepared a comprehensive EU-U.S. Privacy Shield Q&A.