As we recently noted, Washington state amended its data breach notification law on May 7 to expand the definition of “personal information” and shorten the notification deadline (among other changes). Not to be outdone by its sister state to the north, Oregon followed suit shortly thereafter—Senate Bill 684 passed unanimously in both legislative bodies on May 20, and was signed into law by Governor Kate Brown on May 24. The amendments will become effective January 1, 2020.

Among the changes effected by SB 684 is a trimming of the Act’s short title—now styled the “Oregon Consumer Information Protection Act” or “OCIPA” (formerly the “Oregon Consumer Identity Theft Protection Act” or “OCITPA”). Apart from establishing a much more palatable acronym, the amended short title mirrors the national (and international) trend of expanding laws beyond mere “identity theft protection” to focus on larger scale consumer privacy and data rights.

Key substantive changes to the data breach notification law include:

  • Expanding the definition of “breach of security” to cover personal information that a person “maintains or possesses” (where previously only information a person “maintains” was covered);
  • Adding an individual’s account username and password (or other means of account identification and authentication) to the definition of “personal information” sufficient to trigger breach notification obligations—whether or not combined with the individual’s real name;
  • Defining the terms “covered entity” and “vendor” to replace the cumbersome language in the current statute (e.g., “A person that owns or licenses personal information that the person uses in the course of the person’s business, vocation, occupation or volunteer activities and that was subject to a breach shall give notice . . . .” becomes “A covered entity that was subject to a breach shall give notice . . . .”);
  • Creating new obligations for “vendors,” including a requirement to notify the applicable covered entity within 10 days of discovery of a breach, and a requirement that the vendor notify the state Attorney General if the breach affects more than 250 consumers or an undetermined number of consumers (notification to the covered entity was previously only required “as soon as is practicable” after discovery, and vendors had no obligation to notify the Attorney General); and
  • Specifying that covered entities or vendors in compliance with HIPAA or the GLBA (and subject thereto) are exempt from the state’s data breach notification requirements, and adding that compliance with the data security safeguards set forth in HIPAA or the GLBA may be raised as an affirmative defense in any action alleging that a covered entity or vendor has failed to comply with OCIPA’s own data security safeguarding requirements.

For organizations subject to the new law, including anyone that “owns, licenses, maintains, stores, manages, collects, processes, acquires or otherwise possesses personal information” in the course of business, the biggest change to note is that the disclosure of usernames and passwords alone is now sufficient to trigger breach notification obligations. Companies should also make an effort to determine whether they may be acting as a “vendor” under OCIPA’s new definition (“a person with which a covered entity contracts to maintain, store, manage, process or otherwise access personal information”), as vendor entities will have new obligations when the amendments go into effect on January 1, 2020.

Per our earlier blog post, Texas was ambitious this legislative session, proposing two consumer data privacy bills. Both made it through committee hearings, but only one reached the governor’s desk for signature: HB 4390. Even that bill arrived in a form very different from the one originally drafted.

HB 4390, dubbed the Texas Privacy Protection Act, started as a comprehensive consumer privacy bill, with parts similar to the European Union’s GDPR and the California Consumer Privacy Act. Through multiple rounds of amendment and dilution, however, what remained were essentially two things: general updates to the state’s existing breach notification statute, and the creation of an advisory council to study data privacy laws.

The general updates to the Texas Identity Theft Enforcement and Protection Act (TITEPA) include:

  • Requirement that affected individuals be notified within 60 days after the breach, replacing the statute’s current “as quickly as possible” standard; and
  • Requirement that the business experiencing the breach notify the state Attorney General if the breach affected at least 250 Texas residents.

These provisions will become effective January 1, 2020.

The newly created advisory council will study other state and global data privacy laws in advance of the next legislative session and make recommendations. It will present its findings on or before September 1, 2020, and those recommendations will likely form the basis for consumer privacy legislation when the Texas Legislature reconvenes in January 2021.

 

In a landmark ruling, the Vermont Supreme Court recently held that a patient had standing to sue both the hospital at which she was a patient and the employee who attended to her, for negligent disclosure of her personal health information to a third party. Neither the Health Insurance Portability and Accountability Act (HIPAA) nor Vermont law provides a private cause of action for damages arising from a medical provider’s disclosure of information obtained during treatment.

In this case, the plaintiff claims that the emergency room nurse who cared for her lacerated arm later informed a police officer that she was intoxicated, had driven to the hospital, and intended to drive home. Ultimately, the Court concluded that “no reasonable factfinder could determine the disclosure was for any purpose other than to mitigate the threat of imminent and serious harm to the plaintiff and the public.”

While this conclusion is not surprising, what is a bit surprising is the Court’s allowance of this private cause of action in the first place, given that neither HIPAA nor Vermont law provides for one. The Court reasoned that, in recognizing this private cause of action at common law, other courts have correctly relied on the theory of a breach of the duty of confidentiality, insofar as “health care providers enjoy a special fiduciary relationship with their patients” such that “recognition of the privilege is necessary to ensure that the bond remains.”

The Court highlighted further that as evidence of sound public policy underlying the recognition of liability for breach of the duty of confidentiality, courts have cited “(1) state physician licensing statutes, (2) evidentiary rules and privileged communication statutes which prohibit a physician from testifying in judicial proceedings; (3) common law principles of trust, and (4) the Hippocratic Oath and principles of medical ethics which proscribe the revelation of patient confidences.”

The Vermont court joins many other jurisdictions across the United States honoring a private right of action in the context of a breach of the duty of confidentiality, on the basis of public policy. This decision further signifies the heightened focus being placed on an individual’s right to privacy and security of their data. Employers across all industries, but particularly healthcare, are advised to revisit their approach to maintaining sensitive personal information confidentially and securely, as legislation and common law continues to strengthen in this area.

 

The California Senate Appropriations Committee recently blocked a bill that would expand a private right of action under the California Consumer Privacy Act (CCPA). As we reported, in late February, California Attorney General Xavier Becerra and Senator Hannah-Beth Jackson introduced Senate Bill 561, legislation intended to strengthen and clarify the CCPA. Then in April, the Senate Judiciary Committee referred the bill to the Senate Appropriations Committee by a vote of 6-2.

If SB 561 became law, it would make a number of significant changes to the current law. In particular, SB 561 would significantly expand the scope of the private right of action presently written into the CCPA. The CCPA provides consumers a private right of action if their nonencrypted or nonredacted personal information is subject to an unauthorized access, exfiltration, theft, or disclosure because the covered business did not meet its duty to implement and maintain reasonable safeguards to protect that information. SB 561 proposed to broaden this provision to grant consumers a private right of action if their rights under the CCPA are violated.

This week, however, the Senate Appropriations Committee blocked the bill, likely ending its legislative process, at least for this year: for a bill to advance in the legislature during 2019, it must pass at least one chamber by May 31.

The bill’s blockage is considered a win for businesses, as expansion of the private right of action would only increase what is already anticipated to be a flood of litigation once the CCPA takes effect.

No industry or sector is immune to privacy or security issues. This week, in Taha v. Bucks County, a jury in a district court in Pennsylvania awarded $1,000 to each of the 68,000 class members who claimed that Bucks County, a county just outside Philadelphia, and several other municipal entities violated state law by making their criminal records public. Bucks County potentially faces up to $68 million in damages.

This case arises from claims brought in 2012 by Daryoush Taha, who alleged that the county’s inmate search tool, made available to the public in 2008, included access to an online database with criminal history records for all current and former Bucks County Correctional Facility inmates dating back to 1938 (nearly 68,000 individuals), in violation of Pennsylvania’s Criminal History Records Information Act (“CHRIA”).

In 2016, the district court granted summary judgment in favor of Taha, holding that Bucks County violated the CHRIA by releasing criminal history records for incidents older than three years that did not result in a conviction. Further, the district court certified a class of individuals for claims against the County regarding similar CHRIA violations stemming from public access to the online database.

As evidence, the plaintiffs pointed to an email exchange between two Bucks County employees regarding the online database, in which the two concluded that only Social Security numbers required protection, without checking the requirements of the CHRIA. The plaintiffs argued that this failure to properly review state law indicated “reckless indifference” as to whether the online database violated the law. Under the CHRIA, punitive damages are awarded where there is a “willful” violation of the law. The court agreed with the plaintiffs that a “willful” violation in the context of the CHRIA should be read to include “reckless indifference,” and that the actions of the County employees indeed amounted to reckless indifference. Interestingly, the inmate search tool had undergone several audits by the Pennsylvania State Police and Pennsylvania’s Office of Attorney General, and neither found any CHRIA violation.

Although the jury awarded $1,000 each to nearly 68,000 individuals, totaling nearly $68 million in damages, the final amount will likely be somewhat lower because some of the individuals in the initial class certification are deceased. It is no small sum of money regardless.

With the EU’s GDPR one year in, California’s CCPA on the brink of taking effect, a myriad of other federal, state and local regulations taking effect or under consideration, and Pennsylvania’s own Supreme Court finding a common law obligation to safeguard personal information, the public’s sensitivity to privacy and security issues only continues to grow. Whether your organization is public or private, and whether it is part of an industry highly susceptible to data breaches, such as healthcare, or one believed to be less susceptible, like construction, it should be reevaluating its privacy and security programs and ensuring compliance with relevant legislation.

The GDPR is wrapping up its first year and moving full steam ahead. This principles-based regulation has had a global impact on organizations as well as individuals. While there continue to be many questions about its application and scope, anticipated European Data Protection Board guidance and Data Protection Authority enforcement activity should provide further clarity in the upcoming year. In the meantime, here are a few frequently asked questions – some reminders of key principles under the GDPR and others addressing challenges for implementation and what lies ahead.

Can US organizations be subject to the jurisdiction of the GDPR?

Whether a US organization is subject to the GDPR is a fact-based determination. Jurisdiction may apply where the US organization has human or technical resources located in the EU and processes EU personal data in the context of activities performed by those resources. In cases where the US organization does not have human or technical resources located in the EU, it may be subject to the GDPR’s jurisdiction in two instances: if the organization targets individuals in the EU (not businesses) by offering goods or services to them, regardless of whether payment is required, or if it monitors the behavior of individuals in the EU and uses that personal data for purposes such as profiling (e.g. website cookies, wearable devices). The GDPR may also apply indirectly to a US organization through a data processing agreement.

If we execute a data processing agreement, does that make our US organization subject to the GDPR?

When an organization subject to the GDPR engages a third party to process its EU data, the GDPR requires that the organization impose contractual obligations on the third party to implement certain GDPR-based safeguards. If you are not otherwise subject to the GDPR, executing a data processing agreement will not directly subject you to the GDPR. Instead, it will contractually obligate you to follow a limited, specific set of GDPR-based provisions. Your GDPR-based obligations will be indirect in that they are contractual in nature.

Does the GDPR apply only to the data of EU citizens?

No, the GDPR applies to the processing of the personal data of data subjects who are in the EU regardless of their nationality or residence.

Is our organization subject to the GDPR if EU individuals access our website and make purchases?

If your organization does not have human or technical resources in the EU, the mere accessibility of your website to EU visitors, alone, will not subject you to the GDPR. However, if your website is designed to target EU individuals (e.g. through features such as translation to local language, currency converters, local contact information, references to EU purchasers, or other accommodations for EU individuals) your activities may be viewed as targeting individuals in the EU and subject you to the GDPR.

Are we required to delete an individual’s personal data if they request it?

If your organization is subject to the GDPR, an individual may request that you delete their personal data. However, this is not an absolute right. Your organization is not required to delete the individual’s personal data if it is necessary

  • for compliance with a legal obligation or the establishment, exercise or defense of a legal claim;
  • for reasons of public interest (e.g. public health, scientific, statistical or historical research purposes);
  • to exercise the right of freedom of expression or information;
  • where there is a legal obligation to keep the data; or
  • where you have anonymized the data.

Additional consideration should be given to any response when the individual’s data is also contained in your back-ups.

GDPR principles have started to influence law in the U.S. In fact, many have been watching developments regarding the California Consumer Privacy Act (CCPA), which shares a right to delete as it pertains to the personal information of a California resident. As under the GDPR, this is not an absolute right, and in certain cases an exception may apply. For instance, both laws contain an exception to the right to have personal information deleted when the information is needed to comply with certain laws.

Does the GDPR apply to an EU citizen who works in the US?

If your organization is not subject to the GDPR and you hire an EU citizen to work in the US, the GDPR may not apply to the processing of their personal data in the US. However, depending on the circumstances, the answer may be different if the EU citizen is in the US on temporary assignment from an EU parent. In that scenario, their data may be subject to the GDPR if the US entity’s relationship with the parent creates an establishment in the EU and it processes this data in the context of the activities of that establishment. To the extent the EU parent transfers the EU employee’s personal data from the EU to the US entity, that transfer may require EU-US Privacy Shield certification, the execution of binding corporate rules, or standard contractual clauses. These measures are designed to ensure data is protected when it is transferred to a country, such as the US, that is not deemed to provide an adequate level of data protection.

Do we need to obtain an EU individual’s consent every time we collect their personal data?

If your organization is subject to the GDPR and processes an EU individual’s information, you must have a “legal basis” to do so. Consent is just one legal basis. In addition to consent, two of the most commonly used legal bases are the “legitimate interests” of your organization and the performance of a contract with the individual. A legitimate interest is a business or operational need that is not outweighed by the individual’s rights (e.g. processing personal data for website security, conducting background checks, or coordinating travel arrangements). Processing necessary to the performance of a contract is activity that enables you to perform a contract entered into with the individual (e.g. processing employee data for payroll pursuant to the employment contract or processing consumer data for shipping goods under a purchase order).

Should we obtain an employee’s consent to process their personal data?


As we noted last month, Washington’s efforts to follow California’s lead in passing its own GDPR-like law have stalled after the bill failed to make its way through the state’s House of Representatives—despite overwhelming approval in the Senate (where it passed 46-1).  That bill’s sponsor has promised to revisit the issue during the 2020 legislative session.

Despite this roadblock on the consumer privacy front, Washington governor Jay Inslee signed a bill on May 7 (HB 1071) significantly expanding the state’s data breach notification law, RCW 19.255.010, et seq. There was little doubt that Governor Inslee would sign the bill into law, as it passed unanimously in both state legislative bodies.

Below is a summary of major changes to the state’s data breach notification law, and key takeaways for employers subject to Washington law.  For a detailed explanation of the law’s new provisions—which will become effective March 1, 2020—please refer to this post.

Deadline to provide notice of breach shortened to thirty (30) days following discovery.

Under the current law (and until HB 1071’s amendments become effective on March 1, 2020), notice of a breach must be provided within 45 days of discovery. With the amendments, notice must be provided no more than thirty days after the organization discovers the breach. This applies to notices sent to affected consumers as well as to the state’s Attorney General. The threshold requirement for notice to the Attorney General remains the same—it is only required if 500 or more Washington residents were affected by the breach.

Thirty days may still sound like plenty of time, but it can often take several days, or even weeks, for an entity to determine the scope of a breach and compile a list of potentially affected consumers. And if the breach affected residents of more than one state, each state’s laws must be examined to ensure that the notices sent to each individual comport with the breach notification laws of that individual’s state of residence.

Definition of “personal information” significantly expanded.

The previous definition tracked the language used by the majority of states, and only covered breaches that included an individual’s first name (or initial) and last name, plus any one or more of the three “bare minimum” data elements: Social Security number, driver’s license or state ID number, and/or financial account or card number (with an access code or password that would permit access thereto).

With the amendment, Washington adds the following six additional data elements that will be considered “personal information” if combined with an individual’s first name or initial and last name:

  • Full date of birth;
  • Unique private key used to authenticate or sign an electronic record;
  • Passport, military, or student ID number;
  • Health insurance policy or identification number;
  • Information about a consumer’s medical history, physical or mental health condition, or diagnosis or treatment by a health care professional; and,
  • Biometric data (such as fingerprint or retina scans, voiceprints, or other unique biological patterns used to identify an individual).

Significantly, Washington law now considers an individual’s username (or email address) and password (or security questions sufficient to permit access to an account) to be “personal information” regardless of whether the individual’s name is included. Notice to affected consumers of a breach of this type may be provided electronically or by email (unless the affected account was the individual’s email account).

In addition, the new law provides that even without an individual’s first name or initial and last name, any one or more of the other data elements will be considered “personal information” if the element, or combination of elements, would permit a person to commit identity theft against the individual, and the data element(s) were not rendered unusable though encryption, redaction or other methods.

Finally, as discussed more thoroughly in this post, HB 1071 also added notice requirements for affected consumers and the Attorney General—though notice to the Attorney General is still not required unless 500 or more Washington residents were affected by the breach.

There are several takeaways for employers here:

  • First, employers must be aware of the types of data elements the organization maintains on its employees (or other individuals, such as customers or clients), how that data is maintained, and what happens to that data when it is no longer needed.
  • Employers should also examine the necessity of maintaining certain types of data, and consider narrowing the scope of data elements that the organization maintains by ceasing to collect and maintain unnecessary data—even if not currently listed in the state’s definition of “personal information.”
  • Until now, Washington employers may not have been overly concerned with securing certain types of data, such as an employee’s date of birth or health insurance policy number. But once HB 1071’s amendments take effect, that information could trigger breach notification duties if subject to unauthorized access or disclosure.
  • Finally, employers should ensure the organization has sound policies in place specifically to deal with sensitive data (i.e., “personal information”) deemed necessary to maintain.

Many health care providers, including small and medium-sized physician practices, rely on a number of third party service providers to serve their patients and run their businesses. Perhaps the most important of these is a practice’s electronic medical record (EMR) provider, which manages and stores patient protected health information. EMR providers generally are business associates under HIPAA, subjecting them to many of the same requirements under the HIPAA privacy and security rules applicable to covered healthcare providers. HIPAA-covered healthcare providers should not assume their EMR providers comply with HIPAA and HITECH.

According to a federal Office for Civil Rights (OCR) press release, Medical Informatics Engineering, Inc. (MIE) has paid $100,000 to OCR and has agreed to a detailed corrective action plan to settle potential violations of the HIPAA privacy and security rules. MIE provides software and EMR services to healthcare providers.

According to reporting by the Chicago Tribune,

about 82 percent of hospital information security leaders surveyed reported having a “significant security incident” in the last 12 months, according to the 2019 Healthcare Information and Management Systems Society Cybersecurity Survey.

Yet, according to the same report, information security accounts for only about 5% of healthcare providers’ IT budgets, well below the average across industries. Additionally, some have estimated that in 2018, 20% of the breaches suffered by healthcare providers were caused by their third-party service providers. An excellent article by HIPAAJournal outlines a number of statistics illustrating the growing data security risk in healthcare.

In 2015, MIE reported to OCR that it discovered a breach compromising user IDs and passwords, enabling access to the electronic protected health information (ePHI) of approximately 3.5 million people. According to OCR’s investigation, MIE did not conduct a comprehensive risk analysis prior to the breach. The HIPAA rules require entities to perform an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of the entity’s ePHI. This is a basic requirement of the HIPAA security rule that all covered entities and business associates must satisfy.

OCR Director Roger Severino noted,

The failure to identify potential risks and vulnerabilities to ePHI opens the door to breaches and violates HIPAA.

So, what is a healthcare provider to do?

A required element of HIPAA compliance is having business associate agreements in place with business associates, including EMR providers. Under these agreements, business associates agree that they have satisfied the risk assessment requirement under HIPAA. However, in addition to making sure that compliant agreements are in place, covered healthcare providers may want to go a step further and assess the actual compliance efforts of their vendors as represented in the business associate agreement, particularly for those vendors that process and maintain so much of their patients’ ePHI. Providers might, for example, require such vendors to complete a detailed questionnaire about their data security practices, visit the vendor’s facilities, and/or request to review a copy of the vendor’s risk assessment. Similar practices can be applied to all vendors, not just EMR providers or business associates, based on the risk they pose.

Of course, healthcare providers should make sure they themselves are in compliance with the HIPAA privacy and security rules. This includes, among other things, conducting and documenting their own risk assessment. Simply having a set of policies and procedures is not sufficient.

A district court in Tennessee recently concluded in Wachter Inc. v. Cabling Innovations LLC that two former employees who allegedly shared confidential company information found on the company’s computer system with a competitor did not violate the Computer Fraud and Abuse Act (CFAA). The CFAA expressly prohibits “intentionally accessing a computer without authorization or exceeding authorized access, and thereby obtaining… information from any protected computer”.

The two former employees in question worked for Wachter Inc., a Kansas-based communications equipment provider, during which time they allegedly sent confidential company information to their personal email accounts and to email accounts at Wachter’s competitor, Cabling Innovations. In addition, the former employees allegedly used Wachter’s resources and confidential information to obtain and perform work for Cabling Innovations.

In its reasoning, the Court emphasized that the CFAA does not define the term “without authorization” and some courts have found that “an employee may access an employer’s computer ‘without authorization’ where it utilizes the computer to access confidential or proprietary information that he has permission to access, but then uses that information in a manner that is inconsistent with the employer’s interest”. Moreover, the Court highlighted that “the CFAA was not meant to cover the disloyal employee who walks off with confidential information. Rather, the statutory purpose is to punish trespassers and hackers”.

The Court went on to state that the CFAA is primarily a criminal statute, and although it also permits “any person who suffers damage or loss by reason of a violation … [to] maintain a civil action against the violator to obtain compensatory damages and injunctive relief or other equitable relief,” the rule of “lenity” directs the Court to construe the CFAA coverage narrowly. The Court reasoned, “the rule of lenity limits the conduct that falls within the criminal prohibitions, it likewise limits the conduct that will support a civil claim”.

The CFAA has generated much debate among the courts regarding the scope of its application. Some forms of “unauthorized access” are obvious – e.g. a hacker breaking into a protected computer system resulting in data theft is clearly a CFAA violation and is the type of event the CFAA was originally designed to protect against. However, other circumstances, particularly in the employment context, can blur the lines of what is considered “unauthorized access” under the CFAA.

The court in Wachter sits within the Sixth Circuit, which has not addressed whether an employee who has permission to access company information violates the CFAA by misusing or misappropriating that information. That said, most district courts in the Sixth Circuit have concluded that there cannot be a CFAA violation where an employee had permissible access to the computer system. Similarly, the Fourth Circuit held in WEC Carolina Energy Solutions LLC v. Miller that an employee who allegedly downloaded proprietary information from an employer’s computer system for the benefit of his subsequent employer did not violate the CFAA.

Other circuits, however, have taken a much more expansive approach to what employee activity is considered “without authorization” under the CFAA. For example, in U.S. v. John, the Fifth Circuit held that an employee violated the CFAA when she retrieved confidential customer account information she was authorized to access and transferred it to her half-brother for the purpose of committing a fraud. The First, Seventh and Eleventh Circuits have all taken a similarly expansive view that an employee violates the CFAA when he/she accesses the computer system in violation of the employer’s data use policies.

The U.S. Supreme Court has so far avoided addressing the CFAA’s vagueness. Most recently, the Supreme Court denied certiorari in Nosal v. United States, No. 16-1344, declining to weigh in on the scope of unauthorized access under the CFAA. The Ninth Circuit had held in Nosal that David Nosal violated the CFAA by using his former assistant’s password to access his former employer’s computer system after his own access credentials were expressly revoked.

Given the conflicting jurisdictional interpretations of the CFAA, companies should review their policies and procedures to ensure access rights and limitations to their information and information systems are clearly defined and effectively communicated to their employees. Taking these steps will help protect company data and may be useful in preserving a potential CFAA claim.

 

On May 10, Governor Phil Murphy signed into law P.L.2019, c.95, an amendment enhancing New Jersey’s data breach notification law by expanding the definition of personal information and updating notification requirements. As we previously reported, the amendment was unanimously approved by the New Jersey General Assembly and Senate in late February.

New Jersey’s data breach notification law requires businesses to notify consumers of a breach of their personal information. Previously the law defined personal information as an individual’s first name or first initial and last name linked with any one or more of the following data elements:

  • Social Security number;
  • driver’s license number or State identification card number;
  • account number or credit or debit card number, in combination with any required security code, access code, or password that would permit access to an individual’s financial account.

The new law adds to the above list of data elements:

  • user name, email address, or any other account holder identifying information, in combination with any password or security question and answer that would permit access to an online account.

In addition, notification requirements differ for these added data elements. Under the amendment, a business or public entity experiencing a breach involving a user name or email address, in combination with any password or security question and answer that would permit access to an online account, and no other personal information, may notify affected customers electronically or in another form that directs the customer to promptly change any password and security question or answer, as applicable, or to take other appropriate steps to protect the online account with that business or public entity and all other online accounts for which the customer uses the same user name. Further, for breaches involving an email account, a business or public entity may not provide notice of the breach via the compromised email account. Instead, notice must be provided by one of the other methods described in the law, or by clear and conspicuous notice delivered to the customer online when the customer is connected to the online account from an IP address or online location from which the business or public entity knows the customer customarily accesses the account.

New Jersey has now become at least the 10th state to update its data breach notification law to specifically address online breaches. The new law will take effect September 1, 2019.