The CCPA’s “B2B” Exemption Is Also Extended by Governor Newsom

By signing AB 1281 into law on September 29, 2020, California Governor Gavin Newsom amended the California Consumer Privacy Act (“CCPA”) to extend until January 1, 2022, not only the current exemption of employee personal information from most of the CCPA’s protections, but also the so-called “B2B” exemption. Welcomed by many “B2B” (business-to-business) organizations, this exemption, originally enacted under AB 1355, removed significant amounts of personal information from the CCPA’s reach. Note, however, that this exemption could be further extended until January 1, 2023, if the California Privacy Rights Act (CPRA) is approved by voters on November 3, 2020.

The “B2B” exemption applies to the following:

Personal information reflecting a written or verbal communication or a transaction between the business and the consumer, where the consumer is a natural person who is acting as an employee, owner, director, officer, or contractor of a company, partnership, sole proprietorship, nonprofit, or government agency and whose communications or transaction with the business occur solely within the context of the business conducting due diligence regarding, or providing or receiving a product or service to or from such company, partnership, sole proprietorship, nonprofit or government agency

In other words, personal information obtained by a business from a consumer is generally exempt under this provision when that consumer is acting as a representative of another organization and engages with the business in communications or transactions that relate solely to due diligence or to providing or receiving products or services. However, similar to the employee personal information exemption, certain personal information in this context remains subject to the CCPA’s private right of action if it is involved in a data breach and reasonable safeguards were not in place.

CCPA-covered businesses have a temporary reprieve on employment and “B2B” personal information, and will have to wait until election day to see whether they will get another year.

California Governor Newsom Signs into Law Extension to CCPA Employee Personal Information Exemption, Vetoes Another Privacy Bill

On September 29, California Governor Gavin Newsom signed into law AB 1281, an amendment to the California Consumer Privacy Act (“CCPA”) that extends the current exemption of employee personal information from most of the CCPA’s protections until January 1, 2022. The exemption was slated to sunset on December 31, 2020. It is important to highlight that, under the current exemption, while employees are temporarily excluded from most of the CCPA’s protections, two areas of compliance remain: (i) providing a notice at collection, and (ii) maintaining reasonable safeguards for a subset of personal information, given the private right of action now available to individuals affected by a data breach caused by a business’s failure to do so.

Notably, the operation of the extension is contingent upon voters not approving ballot Proposition 24, the California Privacy Rights Act (“CPRA”), in November. The CPRA would amend the CCPA to include more expansive and stringent compliance obligations and, inter alia, would extend the employee personal information exemption until January 1, 2023.

As a reminder, during this challenging time it is important for employers, regardless of jurisdiction, to remain vigilant about the types of personal information collected from employees and how that information is used. Before COVID-19, employers were not, for example, thinking of performing temperature checks on employees or collecting other personal information in connection with COVID-19 screenings; as a result, they may need to update their privacy notices to capture this category of information and the purposes for which it is used.

A full discussion on AB 1281 is available here.

During the same session, Governor Newsom vetoed an additional privacy bill, AB 1138, which would have required parental or guardian consent for the creation of a social media or application account for children under 13. Under the federal Children’s Online Privacy Protection Act (COPPA), operators of Internet websites or online services must obtain parental or guardian consent before collecting personal information from a child known to be under 13, and states have the authority to enforce COPPA. In his veto statement, Governor Newsom highlighted that “Given its overlap with federal law, this bill would not meaningfully expand protections for children, and it may result in unnecessary confusion.” He added, however, that his Administration is “open to exploring ways to build upon current law to expand safeguards for children online.”

California continues to be a leader in privacy and cybersecurity legislation. We will continue to update on CCPA and other related developments as they unfold.

House Passes Internet of Things Cybersecurity Improvement Act

The House of Representatives recently passed the Internet of Things (IoT) Cybersecurity Improvement Act of 2020 (the Act), which now moves to the Senate for consideration. The legislation sets minimum security standards for all IoT devices purchased by government agencies.

IoT refers to the myriad of physical devices that are connected to the internet, collecting and sharing data.  They are used by both consumers and corporations.

Common examples range from consumer products such as fitness trackers and home thermostats to devices used by business and government to measure air quality and monitor the operation of military components.

Despite the tasks that can be accomplished by IoT devices, they remain vulnerable to cyberattack. Currently, there is no national standard addressing cybersecurity for IoT devices, although there have been several attempts in recent years to develop a national IoT strategy. For example, in late 2017, a coalition of tech industry leaders released a report calling for the creation and implementation of a national strategy to invest in, innovate, and accelerate the development and deployment of IoT, and stressing the need to enact legislation that would, inter alia, require IoT security measures in a “comprehensive manner.” Further, as far back as 2015, the FTC issued “concrete steps” businesses can take to enhance the privacy and security of IoT for consumers.

According to a statement issued by Rep. Robin Kelly (D-IL), sponsor of the Act in the House, “Securing the Internet of Things is a key vulnerability Congress must address. While IoT devices improve and enhance nearly every aspect of our society, economy and everyday lives, these devices must be secure in order to protect Americans’ personal data.” Senator Mark Warner (D-VA), who introduced the Senate version of the legislation in 2017 and again in 2019, stated that “manufacturers today just don’t have the appropriate market incentives to properly secure the devices they make and sell – that’s why this legislation is so important.” Rep. Kelly’s statement noted that many IoT devices are shipped with factory-set passwords that frequently cannot be updated or patched. IoT devices also can represent a weak point in a network’s security, leaving the rest of the network vulnerable to attack.

The Act requires the National Institute of Standards and Technology (NIST) to publish standards and guidelines on federal government agencies’ use of IoT devices.  The Act states that the Office of Management and Budget is to review government policies to ensure they are in line with NIST guidelines. Federal agencies would be prohibited from procuring IoT devices or renewing contracts for such devices if it is determined that they do not comply with the security requirements.

New technologies and devices continuously emerge, promising a myriad of societal, lifestyle and workforce advancements and benefits including increased productivity, talent recruiting and management enhancements, enhanced monitoring and tracking of human and other assets, and improved wellness tools. While these advancements are undoubtedly valuable, the privacy and security risks should be considered and addressed prior to implementation or use, even without national IoT security legislation in place.

Will the Passing of Justice Ginsburg Impact the Future of the TCPA?

The passing of U.S. Supreme Court Justice Ruth Bader Ginsburg will likely bring with it many shifts in the Court on key issues, among them matters regarding the Telephone Consumer Protection Act (TCPA), most imminently, what qualifies as an autodialer. The TCPA has been ever-evolving in recent years as courts and legislatures attempt to keep pace with changes in technology.

When the TCPA was enacted in 1991, most American consumers were using landline phones, and Congress could not begin to contemplate the evolution of the mobile phone. The TCPA defines “Automatic Telephone Dialing System” (ATDS) as “equipment which has the capacity—(A) to store or produce telephone numbers to be called, using a random or sequential number generator; and (B) to dial such numbers.” 47 U.S.C § 227(a)(1). In 2015, the Federal Communications Commission (FCC) issued its 2015 Declaratory Ruling & Order (2015 Order), concerning clarifications on the TCPA for the mobile era, including the definition of ATDS and what devices qualify. The 2015 Order only complicated matters further, providing an expansive interpretation for what constitutes an ATDS, and sparking a surge of TCPA lawsuits in recent years.

This past July, the Supreme Court granted a petition to review a Ninth Circuit ruling on the issue of whether the definition of “ATDS” in the TCPA encompasses any device that can “store” and “automatically dial” telephone numbers, even if the device does not “us[e] a random or sequential number generator.” The Supreme Court’s decision should help resolve the circuit split and provide greater clarity and certainty for parties facing TCPA class action litigation.

President Trump’s recent nomination of Seventh Circuit Judge Amy Coney Barrett could be particularly impactful on the issue of defining an ATDS under the TCPA. In February of this year, Judge Barrett authored an opinion in which the Seventh Circuit narrowly held that the TCPA’s definition of an ATDS includes only equipment capable of storing or producing numbers using a “random or sequential” number generator, excluding most “smartphone age” dialers. The Seventh Circuit expressly rejected the Ninth Circuit’s more expansive 2018 interpretation (currently under review by the Supreme Court), under which the TCPA covers any dialer that “automatically” calls numbers from a stored list. These rulings are significant because most technologies in use today dial numbers only from predetermined lists.

In the Seventh Circuit case, the plaintiffs alleged that they had received over a dozen unsolicited calls from the defendant over a one-year period. While the defendants acknowledged that they had indeed placed the calls, they argued that this was not a TCPA violation because their calling system required too much “human intervention” to qualify as an ATDS. Judge Barrett highlighted in the Seventh Circuit ruling that accepting the plaintiffs’ arguments against the defendant’s dialing system would have “far-reaching consequences…it would create liability for every text message sent from an iPhone. That is a sweeping restriction on private consumer conduct that is inconsistent with the statute’s narrower focus.”

Given Justice Ginsburg’s history as a proponent of protecting a consumer’s right to bring a class action both within the TCPA context and beyond, she very well may have supported a broader reading of the definition of ATDS. Whether Judge Barrett ultimately becomes Justice Ginsburg’s replacement remains to be seen, but anyone interested in the Supreme Court’s review of the ATDS definition under the TCPA should be following this development.

DHS IG Report Raises Questions About the Department’s and Its Subcontractors’ Ability to Protect Biometric Information Following Breach


Earlier this month, our Immigration Group colleagues reported that the Department of Homeland Security (DHS) would release a new regulation to expand the collection of biometric data in the enforcement and administration of immigration laws. However, as reported by Roll Call, a DHS Inspector General report raised significant concerns about whether the Department is able to adequately protect sensitive biometric information, particularly with regard to its use of subcontractors. The expanded use of biometrics outlined in the Department’s proposed regulation, like the increased use of fingerprint and facial recognition data by private organizations, heightens the risk to such data.

The amount of biometric information maintained by DHS is already massive. The DHS Office of Biometric Identity Management maintains the Automated Biometric Identification System, a biometric data repository containing records on more than 250 million people and capable of processing more than 300,000 biometric transactions per day. U.S. Customs and Border Protection (CBP) is mandated to deploy a biometric entry/exit system to record arrivals and departures to and from the United States, with the long-term goal of biometrically verifying the identity of all travelers exiting the United States and ensuring that each traveler has physically departed the country at air, land, and sea departure locations.

In 2018, CBP began a pilot effort known as the Vehicle Face System (VFS) in part to test the ability to capture volunteer passenger facial images as they drove by at speeds under 20 mph and the ability to biometrically match captured images against a gallery of recent travelers. DHS hired a subcontractor to assist with the development of the technology.

According to the Inspector General’s report, DHS has a range of policies and procedures to protect biometric information, which it considers sensitive personally identifiable information (SPII). Among those policies, DHS’s Handbook for Safeguarding Sensitive PII (Privacy Policy Directive 047-01-007, Revision 3, December 2017) requires contractors and consultants to protect SPII to prevent identity theft or other adverse consequences, such as privacy incidents, compromise, or misuse of data.

Despite these policies, the DHS subcontractor engaged to support the pilot directly violated DHS security and privacy protocols when it downloaded SPII, including traveler images, from an unencrypted device and stored it on its own network. The subcontractor obtained access to this data between August 2018 and January 2019 without CBP’s authorization or knowledge. Later in 2019, the subcontractor’s network was subjected to a malicious ransomware attack, resulting in the compromise of 184,000 facial images of cross-border travelers collected through the pilot program, at least 19 of which were posted on the dark web.

As one of our 10 Steps for Tackling Data Privacy and Security Laws, “Vendors – trust but verify” is critical. For DHS, its failure to do so may damage the public’s trust resulting in travelers’ reluctance to permit DHS to capture and use their biometrics at U.S. ports of entry. Non-governmental organizations that experience a similar situation with one of their vendors face an analogous loss of trust, as well as adverse impacts on business, along with compliance enforcement and litigation risks.

Among the recommendations CBP made following the breach were to ensure implementation of USB device restrictions and to apply enhanced encryption methods. CBP also sent a memo requiring all IT contractors to sign statements guaranteeing compliance with contract terms related to IT and data security. Like DHS, more organizations are developing written policies and procedures following risk assessments and other best practices. However, it is not enough to prepare and adopt policies; implementation is key.

A growing body of law in the United States requires not only the safeguarding of personal information, including biometric information, by organizations that own it, but also by the third-party service providers that process it on behalf of the owners. Carefully and consistently managing vendors and their access, use, disclosure, and safeguarding of personal information is a critical part of any written information security program.

Indiana AG Proposes Regulations Creating Corrective Action Plan Requirement and Cybersecurity Safe Harbor

A proposal by Indiana Attorney General Curtis Hill would add a significant step to the process for responding to breaches of security affecting Indiana residents. During a U.S. Chamber of Commerce virtual event on Wednesday, he announced a proposed rule designed to better protect Hoosiers from cyberattacks. The proposed rule is expected to take effect by the end of the year.

In short, there are two components to the proposed regulations:

  • A requirement for data base owners to create, implement, and report a corrective action plan (CAP) to the Attorney General within thirty days of the date the owner reports a breach to the Attorney General under the state’s existing breach notification law.
  • A “safe harbor” for what constitutes “reasonable measures” to safeguard personal information in Indiana.

If the regulations are adopted, covered entities will need to revisit their incident response plans to ensure they have steps in place to timely submit a CAP to the Attorney General’s office. They might also consider modifying their data security plans to take advantage of the safe harbor.

Currently, Indiana law imposes general requirements on data base owners to “implement and maintain reasonable procedures, including taking any appropriate corrective action, to protect and safeguard from unlawful use or disclosure any personal information of Indiana residents collected or maintained by the data base owner.” Data base owners include persons that own or license computerized data containing personal information. As in several other states, these general obligations have not been well defined. AG Hill’s proposed rule, if adopted, would provide some clarity by creating several duties for data base owners.

First, the general requirement to take “any appropriate corrective action” would, in the context of a data breach, mean the following:

  • Continuously monitoring and remediating potential vulnerabilities in a timely fashion.
  • Taking reasonable steps to mitigate and prevent the continued unlawful use and disclosure of personal information following any breach of security of data.
  • Preparing a written CAP following any breach of security of data which does the following:
    • Outlines the nature and all known or potential causes of the breach with reasonable specificity and citations to applicable technical data.
    • Identifies the precise date and time of the initial breach, and any subsequent breaches, if feasible.
    • Confirms that corrective measures were implemented at the earliest reasonable opportunity.
    • Identifies the specific categories of personal information subject to unlawful use or disclosure, including the approximate number of individuals affected.
    • Identifies what steps have already been taken to mitigate and prevent the continued unlawful use and disclosure of personal information.
    • Identifies a specific corrective plan to mitigate and prevent the continued unlawful use and disclosure of personal information.
  • Certifying the development and implementation of the CAP to the Attorney General, under penalty of perjury, within thirty (30) days of providing notice of the breach to the Attorney General under existing law. The Attorney General also would be authorized to conduct random and unannounced audits.

In short, simply complying with the disclosure and notification requirements under Indiana’s existing breach notification law (IC 24-4.9-3) would not, by itself, constitute appropriate corrective action following a breach.

“We need a way to separate the businesses that are taking important steps to secure data from those who are not,” Attorney General Hill said. “This rule would provide businesses a playbook on how to protect data, and would protect the businesses that follow the playbook. It’s a win for both consumers and businesses.”

Second, the proposed rule outlines a “safe harbor” for what constitutes “reasonable measures” to protect personal information. More specifically, the rule identifies certain data security frameworks that, if adopted, would be presumed reasonable. These include:

  • a cybersecurity program that complies with the National Institute of Standards and Technology (NIST) cybersecurity framework and follows the most recent version of specified standards, such as NIST Special Publication 800-171,
  • for certain regulated covered entities, compliance with the following:
    • The federal USA Patriot Act.
    • Executive Order 13224.
    • The federal Driver’s Privacy Protection Act.
    • The federal Fair Credit Reporting Act.
    • The federal Health Insurance Portability and Accountability Act.
  • compliance with the payment card industry data security standard (PCI) in place at the time of the breach of security of data.

Because data security is not a one-time process, maintaining the safe harbor under the NIST framework requires the covered entity to implement any new version of the applicable standard. A data security plan also would need to provide for monitoring of vulnerabilities tracked in the NIST National Vulnerability Database and, for each critical vulnerability, commencing remediation planning within twenty-four (24) hours after the vulnerability has been rated as such and applying the remediation within one (1) week thereafter. Additionally, covered entities must conduct risk assessments annually and revise their data security plans accordingly.
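For covered entities tracking those timelines operationally, the deadline math is simple. Below is a minimal Python sketch, not part of the proposed rule: the data structure, field names, and the reading of “one (1) week thereafter” as one week after the 24-hour planning window are our own assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Windows described in the proposed rule for critical vulnerabilities:
# remediation planning begins within 24 hours of the critical rating,
# and the remediation is applied within one week thereafter.
PLANNING_WINDOW = timedelta(hours=24)
REMEDIATION_WINDOW = timedelta(weeks=1)

@dataclass
class Vulnerability:
    cve_id: str                             # identifier as tracked in the NIST National Vulnerability Database
    severity: str                           # CVSS qualitative rating, e.g. "CRITICAL" or "HIGH"
    rated_critical_at: Optional[datetime]   # when the vulnerability was rated critical, if it was

def remediation_deadlines(vuln: Vulnerability) -> Optional[dict]:
    """Compute planning and remediation deadlines for a critical vulnerability."""
    if vuln.severity != "CRITICAL" or vuln.rated_critical_at is None:
        return None  # the 24-hour / one-week clock applies only to critical ratings
    planning_due = vuln.rated_critical_at + PLANNING_WINDOW
    remediation_due = planning_due + REMEDIATION_WINDOW  # assumed reading of "thereafter"
    return {
        "cve_id": vuln.cve_id,
        "begin_planning_by": planning_due,
        "apply_remediation_by": remediation_due,
    }

# Example: a hypothetical critical vulnerability rated at noon on October 1, 2020.
vuln = Vulnerability("CVE-2020-00000", "CRITICAL", datetime(2020, 10, 1, 12, 0))
print(remediation_deadlines(vuln))
```

The point of the sketch is only that these deadlines are short and mechanical; an actual data security plan would tie this tracking to the entity’s vulnerability management and patching workflow.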

The safe harbor provides further that data base owners able to bear the burden of demonstrating that their data security plan is reasonably designed will not be subject to a civil action by the Office of the Attorney General arising from the breach of security of data.

It is worth noting that the frameworks listed might not apply to all of the data maintained by a covered entity. For example, the privacy and security regulations under HIPAA would not apply to employee data or other activities of the covered entity that do not involve “protected health information” but do involve personal information of Indiana residents. The regulations are unclear on this point, and covered entities must still consider reasonable measures for that data for the safe harbor to apply.

OCR Releases New Guidance on HIPAA for Mobile Health Technology

Over the past few years, and particularly during the COVID-19 pandemic, the Department of Health and Human Services Office for Civil Rights (OCR) has made countless efforts to enhance its Health Insurance Portability and Accountability Act (HIPAA) guidance and other related resources on its website. Last week, the OCR launched a new feature on its website, HHS.gov, entitled Health Apps, which updates and renames the OCR’s previous Health App Developer Portal and is available here.

The new site features the OCR’s helpful guidance on “when and how” HIPAA regulations may be applicable to mobile health applications, acutely relevant during the COVID-19 pandemic as many aspects of the healthcare industry shift to telehealth.

Here are the key features of the OCR’s new Health Apps page:

  • Mobile Health Apps Interactive Tool
    • The Federal Trade Commission (FTC), in conjunction with OCR, the HHS Office of the National Coordinator for Health Information Technology (ONC), and the Food and Drug Administration (FDA), created a web-based tool to help developers of health-related mobile apps understand which federal laws and regulations might apply to them.
  • Health App Use Scenarios & HIPAA
    • Provides various use scenarios for mHealth applications, and explains when an app developer may be acting as a business associate under the HIPAA Rules.
  • FAQs on the HIPAA Right of Access, Apps & APIs
    • Provides helpful insight on how the HIPAA Rules apply to covered entities and their business associates with respect to the right of access, apps, and application programming interfaces (APIs).
  • FAQs on HIPAA & Health Information Technology
    • Provides helpful insight on the relationship between HIPAA and Health IT.
  • Guidance on HIPAA & Cloud Computing
    • Assistance for HIPAA covered entities and business associates, including cloud service providers, in how to effectively utilize cloud computing while maintaining HIPAA compliance.

As telehealth has increasingly become the norm, and the US continues to implement and consider various forms of contact tracing apps, patient privacy and compliance with HIPAA privacy and security obligations have never been more important. The increased use of mobile health applications and other related tools to assist healthcare providers with facilitating telehealth also comes with an increased risk of data breaches and improper disclosures of protected health information (PHI) to unauthorized individuals. The features of OCR’s new Health Apps page are a great starting point for HIPAA covered entities and business associates that utilize mobile health apps and want to ensure compliance with their HIPAA obligations.

Massachusetts Attorney General Creates Data Privacy and Security Division

The Massachusetts Office of the Attorney General has created a new Data Privacy and Security Division, charged with protecting consumers from threats to the privacy and security of their data. Attorney General Maura Healey announced, “The Data Privacy and Security Division will build on our office’s commitment to empowering Massachusetts consumers in the digital economy, ensuring that companies are protecting personal data, and promoting equal and open access to the internet.”

Attorney General Healey announced that the Data Privacy and Security Division will “investigate and enforce the Massachusetts Consumer Protection Act and Data Breach Law to protect the security and privacy of consumers’ data.” This new Data Privacy and Security Division is the latest development in increasing efforts by Massachusetts officials to address cybersecurity concerns. In the Fall of 2019, Massachusetts Governor Charlie Baker introduced an expansive cybersecurity program, including statewide workshops for municipalities to work together to enhance their cybersecurity capabilities.

Notably, last Spring, Massachusetts updated its data breach notification law with changes that are likely to create opportunities for enforcement by the division. In particular, the updated law expanded the content requirements for notifications to the Attorney General and the Office of Consumer Affairs and Business Regulation (OCABR) to include, among other things, whether the business that experienced the breach maintains a written information security program (WISP) and whether it has updated the WISP. Employers maintaining personal information of Massachusetts residents should revisit their incident response plan (or develop one).

Employers operating in Massachusetts or holding data on Massachusetts residents should be aware of the focus that Governor Baker and Attorney General Healey have placed on cybersecurity. These Massachusetts programs highlight the importance of conducting risk assessments to identify and address potential vulnerabilities to hackers as well as security risks created by employees and contractors.

OCR is Serious About Patients’ Rights to Access Records, Announcing Enforcement Actions Against 5 Providers

When providers, health plans, business associates, and even patients and plan participants think of the HIPAA privacy and security rules (“HIPAA Rules”), they tend to focus on the privacy and security aspects: for example, safeguarding an individual’s protected health information (PHI) to avoid data breaches, or avoiding improper disclosures to persons without authority to receive that information. An equally important aspect of the HIPAA Rules, however, is ensuring patient access to health records, as shown by recent enforcement activity announced yesterday by the Office for Civil Rights (OCR) at the U.S. Department of Health and Human Services (HHS).

Last year, OCR commenced its Right of Access Initiative, an enforcement priority in 2019 to support individuals’ right to timely access to their health records at a reasonable cost. At least one study found providers are struggling to fully comply with the right of access requirement under HIPAA, a right that also exists under state law. A study published on medRxiv and reported in HIPAAJournal highlights this issue. In the study, 51 providers were sent medical record access requests, and the results showed:

More than half (51%) of the providers assessed were either not fully compliant with the HIPAA right of access or it too[k] several attempts and referrals to supervisors before requests were satisfied in a fully compliant manner…

The researchers also conducted a telephone survey of 3,003 healthcare providers, asking about policies and procedures for releasing patient medical records. The researchers suggest that as many as 56% of healthcare providers may not be fully compliant with the HIPAA right of access, and 24% did not appear to be fully aware of the fee limitations for providing copies of medical records.

What is the right to access under HIPAA?

The HIPAA Privacy Rule generally requires HIPAA covered entities (health plans and most health care providers) to provide individuals, upon request, with access to PHI about them in one or more “designated record sets” maintained by or for the covered entity. This includes the right to inspect or obtain a copy, or both, as well as to direct the covered entity to transmit a copy to a designated person or entity of the individual’s choice. This right applies for as long as the covered entity (or its business associate) maintains the information, regardless of the date the information was created, and whether the information is maintained in paper or electronic systems onsite, remotely, or is archived.

When implementing this rule, covered entities and their business associates have several issues to consider, such as:

  • What information is subject to the right and what information is not, such as psychotherapy notes.
  • Confirming the authority of a “personal representative” to act on behalf of an individual.
  • Procedures for receiving and responding to requests – such as written request requirements, verifying the authority of requesting parties, timeliness of response, whether and on what grounds requests may be denied, and fees that can be charged for approved requests.

To assist covered entities (and business associates), the OCR provides a summary of right of access issues, as well as a set of frequently asked questions.

Enforcement of the Right to Access

The five enforcement actions announced yesterday are not the first taken by OCR. In September 2019, the OCR settled a complaint with a provider for $85,000 after alleging the provider failed to respond to a patient’s request for access. In December 2019, the OCR settled a second complaint, again for $85,000, to address similar allegations: failure to respond timely, as well as failing to forward the medical records in the requested format and charging more than the reasonable, cost-based fees allowed under HIPAA.

The five more recent cases involve very similar allegations, mostly against small health care providers and in at least one case a not-for-profit: namely, the failure to provide patients with the right to access their protected health information under the HIPAA Rules. The total amount of the settlements with these five entities is $136,500.

“Patients can’t take charge of their health care decisions, without timely access to their own medical information,” said OCR Director Roger Severino. “Today’s announcement is about empowering patients and holding health care providers accountable for failing to take their HIPAA obligations seriously enough,” Severino added.

Getting Compliant

Providers receive all kinds of requests for medical and other records in the course of running their businesses. Reviewing and responding to these requests no doubt creates administrative burdens. However, buying forms online might not give the practice all it needs, and could put the practice at additional risk if those forms are followed without considering state law or are not implemented properly.

Putting in place relatively simple policies, carefully developing template forms, assigning responsibility, training, and documenting responses can go a long way toward minimizing the risk of an OCR enforcement action and its severity. Providers also should consider sanctions under state law that might flow from failing to provide patients access to their records. It is worth noting that in some cases state law may be more stringent than HIPAA concerning the right of access, requiring modifications to the processes practices follow for providing access.

Michigan Considers Enhanced Data Breach Notification Law

Privacy and security continue to be at the forefront for legislatures across the nation, despite (or perhaps because of) the COVID-19 pandemic. In late May, with back-to-back amendments, Washington, D.C. and Vermont significantly overhauled their data breach notification laws, including expanding the definition of personal information and heightening notice requirements. Now, Michigan may follow suit.

Earlier this month, the Michigan House of Representatives voted to advance House Bills 4186-87, sponsored by state Rep. Diana Farrington of Utica, which would create the Data Breach Notification Act and exempt entities subject to the new act from similar provisions of Michigan’s existing Identity Theft Protection Act. Unlike other states that have expanded on existing data breach notification laws, this bill would effectively replace Michigan’s prior law in its entirety.

“This proposal puts Michigan consumers first when there are instances of compromised data,” said Farrington, who chairs the House Financial Services Committee. “Consumer protections are always important – and now many people across Michigan and in Macomb County have been put in dire financial straits through no fault of their own due to COVID-19. They don’t need the additional stress that is brought on when your personal information is potentially in someone else’s hands.”

Below are highlights of Michigan’s new data breach notification bill:

  • Expansion of the definition of “sensitive personally identifying information” (PII). Following many other states, the new bill expands the definition of PII to include a state resident’s first name or first initial and last name in combination with one or more of the following data elements that relate to the resident:
    • A nontruncated Social Security number, driver license number, state personal identification card number, passport number, military identification number, or other unique identification number issued on a government document.
    • A financial account number.
    • A medical or mental history, treatment, or diagnosis issued by a health care professional.
    • A health insurance policy number or subscriber identification number and any unique identifier used by a health insurer.
    • A username or email address, in combination with a password or a security question and answer, that would allow access to an online account that is likely to have or is used to obtain sensitive personally identifying information.
  • Notification requirements to affected state residents. A covered entity would be required to provide notice to state residents whose PII was acquired in the breach as expeditiously as possible and without unreasonable delay, taking into account the time necessary to conduct an investigation and determine the scope of the breach, but not more than 45 days after its determination that a breach has occurred (unless law enforcement determines that such notification could interfere with a criminal investigation or national security). A sketch of these decision points follows this list. Written notice must at least include the following:
    • The date, estimated date, or estimated date range of the breach.
    • A description of the PII acquired as part of the breach.
    • A general description of the actions taken to restore the security and confidentiality of the PII involved in the breach.
    • A general description of steps a state resident can take to protect against identity theft, if the breach creates a risk of identity theft.
    • Contact information that the state resident can use to ask about the breach.
  • Notification requirements to state agency. If the number of state residents to be notified exceeds 750, the entity would have to provide written notice to Michigan’s Department of Technology, Management & Budget within the same time frame as notification to affected residents. Written notice must at least include a synopsis of events surrounding the breach, approximate number of state residents notified, any related services the covered entity is offering to state residents, and how the state resident can obtain additional information.
  • Substitute Notice. Under the bill, a covered entity required to provide notice could instead provide substitute notice, if direct notice is not feasible due to excessive cost or lack of sufficient contact information. For example, the cost of direct notification would be considered excessive if it exceeded $250,000.
  • Reasonable Security Measures. Michigan would join many other states that mandate businesses implement and maintain reasonable security measures designed to protect PII against a breach. When developing security measures, entities may consider the size of their entity, the amount of PII owned or licensed and its surrounding activity, and the cost to maintain such measures relative to the entity’s resources.
  • Data Disposal. Covered entities and third-party agents would be required to take reasonable measures to dispose of or arrange to dispose of PII when retention is no longer required by law. Disposal requires shredding, erasing or otherwise modifying PII to make it unreadable or undecipherable.
  • Penalties. The new law in its current form would not create a private right of action. However, a person that knowingly violates a notification requirement could be ordered to pay a fine of up to $2,000 for each violation or not more than $5,000 per day for each consecutive day the covered entity fails to take reasonable action to comply with the requirements, up to $250,000. The attorney general would have exclusive enforcement authority.
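To make the bill’s thresholds concrete, here is a minimal Python sketch of the notification decision points summarized above (the 45-day notice deadline, the 750-resident trigger for state agency notice, and the $250,000 excessive-cost threshold for substitute notice). The function and field names are hypothetical, and this is an illustration, not a compliance tool.

```python
from datetime import date, timedelta
from typing import NamedTuple

# Thresholds as summarized from House Bills 4186-87; constant names are illustrative.
NOTICE_DEADLINE_DAYS = 45             # notice to residents within 45 days of the breach determination
STATE_AGENCY_THRESHOLD = 750          # written notice to the state agency if more than 750 residents are notified
SUBSTITUTE_NOTICE_COST_CAP = 250_000  # direct-notice cost above this amount is considered "excessive"

class BreachFacts(NamedTuple):
    determination_date: date          # date the covered entity determined a breach occurred
    residents_notified: int           # number of Michigan residents to be notified
    direct_notice_cost: float         # estimated cost of direct notification
    has_sufficient_contact_info: bool

def notification_plan(facts: BreachFacts) -> dict:
    """Sketch of the notification decision points under the proposed Michigan act."""
    return {
        "resident_notice_deadline": facts.determination_date + timedelta(days=NOTICE_DEADLINE_DAYS),
        "notify_state_agency": facts.residents_notified > STATE_AGENCY_THRESHOLD,
        "substitute_notice_permitted": (
            facts.direct_notice_cost > SUBSTITUTE_NOTICE_COST_CAP
            or not facts.has_sufficient_contact_info
        ),
    }

# Example: a breach determined on October 1, 2020 affecting 1,200 Michigan residents.
print(notification_plan(BreachFacts(date(2020, 10, 1), 1_200, 40_000.0, True)))
```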

The bill now moves on to the Michigan Senate for further consideration. If enacted, it would keep Michigan in line with other states across the nation that are enhancing their data breach notification laws in light of the significant uptick in the number and scale of data breaches and heightened public awareness. Organizations across the United States should be evaluating and enhancing their data breach prevention and response capabilities.
