In June of 2018 we reported that the U.S. Supreme Court granted a petition for review of a data breach lawsuit addressing the issue of whether parties can pursue class arbitration when the language in the arbitration agreement does not explicitly allow for it, Lamps Plus, Inc. v. Varela, No. 17-988, certiorari granted April 30, 2018. By granting the petition for certiorari, the Court afforded itself the opportunity to clarify its 2010 decision in Stolt-Nielsen v. AnimalFeeds International Corp., 559 U.S. 662 (2010), in which the Court ruled that parties cannot be forced into class arbitration “unless there is a contractual basis for concluding [they] agreed to do so.” The Supreme Court has now issued its decision, ruling on April 24, 2019, that arbitration agreements must explicitly include a class arbitration clause for parties to arbitrate class action claims.

The Supreme Court, in a 5-4 ruling authored by Chief Justice Roberts, held that the 9th Circuit panel erred in ruling that Lamps Plus, a lighting retailer, must participate in a class arbitration of an employee’s claims when the employment agreement did not state that class arbitration was available. The employee’s claims arose from an incident of identity theft resulting from a phishing attack, in which a third party impersonating a Lamps Plus employee convinced another Lamps Plus employee to send copies of W-2 forms for multiple Lamps Plus employees.

The employment agreement between the named plaintiff, Frank Varela, and his employer, Lamps Plus, included an arbitration clause; however, it was silent on whether the clause also allowed for class arbitration. The 9th Circuit majority ruled that “perhaps the most reasonable” interpretation of that agreement allows for class arbitration. The circuit court reasoned that Varela’s waiver of his “right…to file a lawsuit or other civil action or proceeding” and of “any right…to resolve employment disputes through trial by judge or jury” also included a waiver of his right to bring class action lawsuits, even though the agreement did not explicitly say so.

The Supreme Court overturned the 9th Circuit and ruled that Stolt-Nielsen does not permit a lower court to make such an “inference” from an ambiguous arbitration agreement. “Under the Federal Arbitration Act, an ambiguous agreement cannot provide the necessary contractual basis for concluding that the parties agreed to submit to class arbitration,” the opinion stated. “Like silence, ambiguity does not provide a sufficient basis to conclude that parties to an arbitration agreement agreed to ‘sacrifice the principal advantage of arbitration.’”

In addition, the Court emphasized that the use of class arbitration “undermines the most important benefits” of the individual arbitration process: “lower costs, greater efficiency and speed, and the ability to choose expert adjudicators to resolve specialized disputes.”

The Supreme Court’s decision in Lamps Plus has significant implications for employers, well beyond the data breach context. This case is considered a “win” for employers, as lower courts will lack the ability to “infer” class arbitration clauses in arbitration agreements. Nonetheless, companies are advised to include unambiguous language in their employment agreements on whether class arbitration is available. For further insight on the Lamps Plus decision, check out our Class Action and Complex Litigation Practice Group’s in-depth commentary on the case, available here.


When the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 became law, it made significant changes to the civil monetary penalties for violations of HIPAA. In addition to increasing the amounts of the penalties, HITECH created a tiered approach to penalties, establishing four categories based on levels of culpability. Current HHS regulations, however, apply the same cumulative annual penalty limit across these four categories. Today, the Department of Health and Human Services (HHS) issued a notification of enforcement discretion changing its interpretation of HITECH and reducing the cumulative annual penalty limit for three of the four categories.

What Are The Four Categories Again?

Section 13410(d) of the HITECH Act established four categories for HIPAA violations:

  1. No knowledge. The person did not know (and, by exercising reasonable diligence, would not have known) that the person violated the provision;
  2. Reasonable Cause. The violation was due to reasonable cause, and not willful neglect;
  3. Willful Neglect – Corrected. The violation was due to willful neglect that is timely corrected (30 days); and
  4. Willful Neglect – Not Corrected. The violation was due to willful neglect that is not timely corrected.

What Was The Old Range of Penalties?

The range of penalties for the four categories above was as follows:

Category                          Minimum Penalty   Maximum Penalty   Annual Limit
No Knowledge                      $100              $50,000           $1,500,000
Reasonable Cause                  $1,000            $50,000           $1,500,000
Willful Neglect – Corrected       $10,000           $50,000           $1,500,000
Willful Neglect – Not Corrected   $50,000           $50,000           $1,500,000

What Is The New Range of Penalties?

Commenters noted to HHS that the above structure was not consistent with HITECH’s tiered approach to penalties, that is, establishing categories based on culpability, because the annual limits were the same for all levels of culpability. Upon further review by HHS’ Office of the General Counsel, HHS has determined that the better reading of HITECH is to apply the annual limits shown below.

Category                          Minimum Penalty   Maximum Penalty   Annual Limit
No Knowledge                      $100              $50,000           $25,000
Reasonable Cause                  $1,000            $50,000           $100,000
Willful Neglect – Corrected       $10,000           $50,000           $250,000
Willful Neglect – Not Corrected   $50,000           $50,000           $1,500,000

According to the guidance, while HHS expects to engage in future rulemaking to revise the penalty tiers in the current regulation to better reflect the text of HITECH, these changes are effective until further notice.
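To make the revised structure concrete, here is a minimal sketch of how the new annual caps would operate. The dollar amounts come from the table above; the dictionary keys, function name, and example scenario are illustrative only, not anything HHS has published.

```python
# Revised HITECH annual caps per HHS's notification of enforcement
# discretion. Keys and helper function are hypothetical, for illustration.
ANNUAL_LIMITS = {
    "no_knowledge": 25_000,
    "reasonable_cause": 100_000,
    "willful_neglect_corrected": 250_000,
    "willful_neglect_not_corrected": 1_500_000,
}

def capped_annual_total(category: str, penalties: list) -> int:
    """Sum the penalties assessed in a year, capped at the category limit."""
    return min(sum(penalties), ANNUAL_LIMITS[category])

# Ten maximum ($50,000) "no knowledge" violations would previously have
# been capped at $1,500,000; under the new reading the cap is $25,000.
print(capped_annual_total("no_knowledge", [50_000] * 10))  # 25000
```

Note that only the annual limits changed; the per-violation minimums and the $50,000 per-violation maximum are the same in both tables.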

The answer may be yes.

GPS trackers enable businesses to derive greater efficiencies and productivity from their employees and their vehicle fleets. But, when businesses deploy this technology, HR departments often raise valid concerns about employee privacy on and, in some cases, off the job. When employers install GPS trackers on company-owned vehicles, these privacy concerns typically are outweighed by productivity gains, improved safety, and better control over time at work. According to a recent report, however, employers also need to be concerned about the security of their GPS trackers.

“I can absolutely make a big traffic problem all over the world”

According to Motherboard, a hacker known as L&M claims he has hacked into thousands of iTrack and ProTrack accounts. The reports indicate this activity has been going on in several countries, including South Africa, Morocco, India and the Philippines. iTrack and ProTrack are apps employers use with the GPS trackers to manage their fleets. The most unsettling part of this story is that the hacker claims to be able to kill the engines of vehicles being driven by employees.

How can this happen?

Like many devices, these GPS trackers come with default passwords (e.g., 123456), which are among the most popular passwords and the least secure. According to the reports, the hacker acquires usernames and then uses the anticipated default passwords to gain access to the accounts.

So, what can employers do?

I came across this NIST blog post, which was a fun read and provided some excellent tips that basically boil down to the following:

  • Change default passwords! (And, not just for GPS trackers)
  • Develop passphrases – they generally are easier to remember and harder to crack.
  • Don’t store your passwords or passphrases on your devices.
  • Don’t use the same password or passphrase for all of your accounts, and certainly not your most important accounts.
  • Change your passwords and passphrases regularly. With billions of usernames and passwords being shared by hackers, it is possible that they have yours.
  • Don’t rely solely on passwords or passphrases. Adopt multifactor authentication.
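The passphrase tip, for example, can be sketched in a few lines. This is a minimal illustration: the word list below is a tiny stand-in, and a real generator would draw from a large dictionary such as the EFF Diceware list.

```python
import secrets  # cryptographically secure randomness from the stdlib

# Tiny stand-in word list for illustration; a real deployment would use
# a large dictionary (e.g., the EFF Diceware list of 7,776 words).
WORDS = ["copper", "harbor", "lantern", "maple", "orbit", "quartz",
         "river", "saddle", "timber", "violet", "walnut", "zephyr"]

def make_passphrase(n_words: int = 4, sep: str = "-") -> str:
    """Join n_words chosen uniformly at random with a CSPRNG."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(make_passphrase())  # e.g., "orbit-walnut-maple-river"
```

A four-word phrase drawn from a large dictionary is generally both easier to remember and harder to brute-force than a short "complex" password, which is why the NIST guidance favors length over character gymnastics.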

When organizations roll out new technology, they simply have to add security considerations to the list. This includes making sure default passwords are changed.

How will the California Consumer Privacy Act (CCPA) apply to us? This is a question organizations have asked since the CCPA was first proposed. A number of important questions about the scope of the Golden State’s sweeping privacy law still need to be answered.

One of those questions is whether the CCPA will reach employee data; that is, are an organization’s employees “consumers” under the law. Earlier this week, the California Assembly Privacy and Consumer Protection Committee started working through a number of bills addressing the CCPA. Included in those bills is AB 25, authored by the Committee Chairman, Ed Chau, which addresses this issue.

The Committee unanimously approved AB 25, which modifies the definition of “consumer” to exclude

a natural person whose personal information has been collected by a business in the course of a person acting as a job applicant to, an employee of, a contractor of, or an agent on behalf of, the business, to the extent the person’s personal information is collected and used solely within the context of the person’s role as a job applicant to, an employee of, a contractor of, or an agent on behalf of, the business. 

If signed into law, this change would be welcome news for organizations already struggling with other aspects of CCPA compliance. However, organizations still may have CCPA issues to consider with respect to their employees.

Individuals can be employees and consumers of the same organization. In that case, an organization might want to consider, for example, how a dispute over its handling of a request to delete consumer personal information belonging to an employee/consumer could spill over into the workplace. Likewise, employers may have engaged third-party vendors to provide certain products or services to employees. In certain circumstances, employers have a duty to obtain written assurances from those vendors that they will safeguard the employee personal information provided to them. Employers will have to consider whether those assurances should include CCPA compliance, or whether a “compliance with all laws” clause will be sufficient.

Additionally, with the significant media attention to the CCPA and other privacy developments, such as the European Union’s General Data Protection Regulation (GDPR), employees may get confused about whether their rights under the CCPA as consumers also extend to the workplace. Some organizations may already extend CCPA-like protections to employees, perhaps flowing from global privacy policies driven primarily by the GDPR. However, organizations that have not taken that approach need to be prepared to respond to demands from employees concerning their personal information. “The CCPA does not apply” may not be the right answer, given that many states provide certain rights to employees concerning their personnel files and personal information. In California, for example, current and former employees have the right to inspect and receive a copy of the personnel files and records that relate to the employee’s performance or to any grievance concerning the employee. Cal. Labor Code Section 1198.5.

Further, regardless of the ultimate amendments to CCPA, there continues to be a growing trend for states to propose and implement privacy protections related to the data organizations collect.

We will be following the fate of AB 25 and the other pending CCPA bills. If CCPA is amended by AB 25 as currently drafted, it will be a relief for CCPA-covered entities, but it will not entirely eliminate the potential implications CCPA (or other state laws) may have on the workplace.

The much-anticipated amendment to North Carolina’s data breach notification law that we reported on earlier this year (see here) has finally been introduced in the state’s General Assembly. The bill, entitled An Act Amending the Identity Theft Protection Act (House Bill DRH40393-LR10C), is primarily sponsored by State Representatives Jason Saine (R), Brenden H. Jones (R), and Robert T. Reives II, and was developed closely with Attorney General Josh Stein.

Some important changes were made to the proposed bill, following the version we reported on back in January. Below are the key differences between the two versions of the bill:

  • The definition of “security breach” was not expanded to include ransomware attacks. Originally, the anticipated bill was set to expand the definition of “security breach” to include ransomware attacks. Although this is not included in the current version of the bill, the definition of “security breach” was expanded to include an obligation that “any determination that illegal use has not occurred or is not reasonably likely to occur or that no material risk of harm is created shall be documented and maintained for at least three years.”
  • 30-day data breach notification period instead of 15. The bill as originally proposed included a 15-day period for notifying affected consumers and the Attorney General following a breach. The bill introduced to the General Assembly instead includes a 30-day notification period, which is still considered brief, tying Colorado and Florida for the shortest data breach notification period in the nation.
  • Free Credit Monitoring Services for 24 months. The original proposal included an obligation for consumer reporting agencies experiencing a breach to provide affected consumers with free credit monitoring services for five years. Instead, the bill introduced to the General Assembly includes an obligation for any entity covered by the bill that experiences a breach involving Social Security numbers to provide free credit monitoring services to affected consumers for 24 months. If passed, North Carolina would join California, Connecticut, Delaware, and Massachusetts as states that require free credit monitoring services for affected consumers after certain types of breaches.
  • Expansion of the definition of personal information to include certain types of medical information. The bill, if passed, would expand the definition of personal information to include “[h]ealth insurance policy number[s], subscriber identification number[s], or any other unique identifier[s] used by a health insurer or payer to identify [a] person,” and “any information regarding the individual’s medical history or condition, medical treatment or diagnosis, or genetic information, by a health care professional.” That said, the new bill also creates an exception for HIPAA-compliant entities, which limits the significance of the expanded definition of personal information, as many entities potentially facing breaches of medical information are subject to HIPAA.

This bill, if passed into law, would be a substantial overhaul of North Carolina’s data breach notification law. It would keep North Carolina in line with other states currently enhancing their data breach notification laws in light of the large-scale data breaches flooding the media of late. Organizations across the United States should be evaluating and enhancing their data breach prevention and response capabilities.


It was looking like Washington state would be the first state to follow the California Consumer Privacy Act (CCPA), with a GDPR-like law of its own. That effort has stalled, perhaps temporarily. However, both Washington’s House and Senate voted unanimously to send HB 1071 to Gov. Jay Inslee, which would substantially expand the state’s current data breach notification obligations.

Here are some of the highlights:

Definition of personal information. Following many other states, the new law would add to the data elements that, if breached, could trigger a notification obligation. Currently, personal information includes an individual’s first name or first initial and last name, together with one or more of the following: (i) Social Security number; (ii) driver’s license number or Washington identification card number; or (iii) account number or credit or debit card number, in combination with any required security code, access code, or password that would permit access to an individual’s financial account.

The following elements would be added to the list:

  • Full date of birth;
  • Private key unique to an individual and that is used to authenticate or sign an electronic record;
  • Student, military, or passport identification number;
  • Health insurance policy number or health insurance identification number;
  • Any information about a consumer’s medical history or mental or physical condition or about a health care professional’s medical diagnosis or treatment of the consumer;
  • Biometric data generated by automatic measurements of an individual’s biological characteristics, such as a fingerprint, voiceprint, eye retinas, irises, or other unique biological patterns or characteristics, that is used to identify a specific individual; or
  • Username or email address in combination with a password or security questions and answers that would permit access to an online account.

In addition, these elements (other than online account credentials) could be considered personal information even without the consumer’s first name or first initial and last name. That would be the case if encryption, redaction, or other methods have not been applied to render the element(s) unusable and the element(s) would enable a person to commit identity theft against a consumer.
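That no-name trigger reduces to a simple two-part test. Here is a minimal sketch, with the understanding that the function and flag names are illustrative and not statutory language:

```python
def element_alone_is_personal_info(rendered_unusable: bool,
                                   enables_identity_theft: bool) -> bool:
    """Sketch of the HB 1071 rule: a listed element (other than online
    account credentials), even without a name attached, counts as
    personal information only if it was NOT rendered unusable (e.g., by
    encryption or redaction) AND it would enable identity theft."""
    return (not rendered_unusable) and enables_identity_theft

# A plaintext Social Security number standing alone would qualify;
# the same number properly encrypted would not.
print(element_alone_is_personal_info(False, True))   # True
print(element_alone_is_personal_info(True, True))    # False
```

Both conditions must hold, so either strong encryption of the element or the absence of identity-theft risk keeps the element, standing alone, outside the definition.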

Special Rule for Online Accounts. To combat the practice of many who use the same username and password for different accounts (note to reader: if this is you, stop reading this post and go change your account credentials), the new law would require notifications to provide some direction on this point. Specifically, when a breach involves a username or password, notice may be provided electronically or by email, and must inform affected persons to promptly change their passwords and security questions or answers, as applicable. The notice should also inform affected persons to take other appropriate steps to protect the online account and all other online accounts for which they use the same username or email address and password or security question or answer.

The new law goes a step further when the person or business providing the notice also furnished the email account to the affected person. In that case, notification must be provided using a permissible method other than email to that account, and must also include the information noted above for changing passwords for at-risk accounts.

Notice Timing and Content. Like other state breach notification laws, Washington’s law requires notification be provided in the most expedient time possible and without unreasonable delay. Current law provides, however, that notice may not be provided later than forty-five calendar days following discovery. The new law reduces that period to thirty calendar days both for notice to individuals as well as to the Attorney General.

Importantly, the new law retains the exceptions to the notification period – notice may be delayed at the request of law enforcement or if due to measures necessary to determine the scope of the breach and restore the reasonable integrity of the data system. It is not clear if these exceptions also apply for notifying the Attorney General.

When notification is required, the new law adds to existing content requirements by mandating that notifications include, if known, the time frame of exposure – the date of the breach and the date of the discovery of the breach. Additional information also must be provided under the new law to the Attorney General, but under existing law that notice is required only if more than 500 persons are affected by the breach.

If enacted, the changes in HB 1071 are a good example of why organizations need to continue monitoring these developments and revisiting their incident response plans (IRPs). For example, some organizations may be caught off guard by the expanding definition of personal information under these laws; date of birth typically is not included as an element of personal information in most other states (North Dakota is one exception). Out-of-date template letters also can reduce the effectiveness of an organization’s IRP.

The New York Times’ newly established Privacy Project recently highlighted the extent to which our society has created a “facial recognition machine” – cameras are everywhere, even in doorbells. Segments of society have accepted widespread surveillance on public streets, in shopping malls, and in common areas of office buildings, apartment complexes, schools and similar places. But there are limits.

Early this month, 131 patients (and counting) of a women’s hospital in San Diego, California, filed a lawsuit against the hospital after discovering that there was secret video surveillance in three labor and delivery operating rooms, recording medical procedures without patients’ consent. Patients were recorded during Cesarean sections, birth complications, treatment after miscarriage, hysterectomies, and other medical procedures from July of 2012 to July of 2013. Approximately 1,800 patients were recorded during this period. The patients are suing the hospital for invasion of privacy, breach of fiduciary duty, negligence, negligent infliction of emotional distress, and unlawful recording of confidential information.

In addition to not informing the patients of the hidden cameras, the lawsuit alleges that the hospital was “grossly negligent” in its storage of the recordings. The lawsuit claims that recordings were stored on employee computers, often without password protection and that the hospital “destroyed at least half the recordings but cannot say when or how it deleted those files and cannot confirm that it took the appropriate steps to ensure the files were not otherwise recoverable.”

This is not the first lawsuit against the hospital regarding the hidden cameras. Since 2016, the hospital has faced several lawsuits alleging privacy violations and other claims stemming from the video records.

The hospital said in a statement on April 4th to the San Diego community that the cameras were installed as part of an investigation regarding drugs and other equipment missing from several anesthesia carts in hospital operating rooms, and it was not intended for patients to be visible on the recordings, although ultimately that was the case.

The issue of hidden cameras is particularly common in elder care facilities, where, for example, family members secretly install a “granny cam” to step in and help protect their elderly loved ones from abuse in the facility, including neglect, physical abuse, unexplained serious injuries and thefts. A study published in 2011 found that an estimated 260,000 (1 in 13) older adults in New York had been victims of one form of abuse or another during a 12-month period between 2008 and 2009, with “a dramatic gap” between elder abuse events reported and the number of cases referred to formal elder abuse services. Clearly, states are struggling to protect a vulnerable and growing group of residents from abuse. Technologies such as hidden cameras may help to address the problem, but their use raises privacy, security, compliance, and other concerns.

Whether installed by a concerned family member in a nursing home, or a medical professional or a hospital administrator, the use of video surveillance devices can pose a number of issues and potential risks, particularly when the devices are hidden and/or record audio as well as video. Here are just a few questions these devices raise:

  • Has the organization addressed federal and state laws establishing consent requirements when recording communications?
  • Are there state laws that specifically address hidden cameras or similar privacy rights? For “granny cams,” at least five states (Illinois, New Mexico, Oklahoma, Texas, and Washington) have laws specifically addressing the use of cameras in this context. While state “granny cam” laws are not applicable to the hospital case, in California, for example, the California Invasion of Privacy Act (CIPA) protects against the recording of an individual’s confidential information without prior consent.
  • In general, if the organization installs such a device, what rights and obligations does it have with respect to the scope, notice, content, access, security, storage, deletion and other aspects of the recording?
  • For surveillance likely to capture protected health information, have the HIPAA privacy and security regulations been addressed? This includes assessing the risk of making the recordings, controlling access to the recording, and securing that information.
  • What record retention, chain of custody, and record destruction requirements and best practices should be implemented?
  • How do the features of the device, such as camera placement and zoom capabilities, affect the analysis of the issues raised above?

Facilities considering this technology, even when well intentioned, must assess the privacy and security implications. Practices and procedures should be considered and implemented, as applicable, not only for what happens prior to device installation (i.e., notice, consent, device placement, scope, etc.), but also for what happens after recordings occur, including lawful and effective data storage and deletion policies.

Following recent examinations of SEC-registered investment advisers and broker-dealers, the Securities and Exchange Commission’s (SEC) Office of Compliance Inspections and Examinations (OCIE) published a privacy risk alert on April 16, 2019. The alert reminds advisers and broker-dealers of their obligations under Regulation S-P to provide compliant privacy and opt-out notices and to adopt and implement effective policies and procedures for safeguarding customer records and information.

Privacy Notices. During the examinations, OCIE observed that advisors and broker-dealers were not providing initial privacy notices, annual privacy notices, and opt-out notices to their customers. When these notices were provided, many did not accurately reflect the firms’ policies and procedures and/or did not notify customers of their right to opt out of having their nonpublic personal information shared with nonaffiliated third parties. OCIE’s risk alert thus reminds advisors and broker-dealers that Regulation S-P requires that they:

  • provide a clear and conspicuous notice to customers that accurately reflects privacy policies and practices generally no later than when a customer relationship is established,
  • provide a similar notice not less than annually during the continuation of the customer relationship, and
  • deliver a clear and conspicuous notice to its customers that accurately explains the right to opt out of some disclosures of non-public personal information about the customer to nonaffiliated third parties.

Written Policies and Procedures to Safeguard Customer Information. OCIE also observed during these examinations that some advisors and broker-dealers had not adopted written policies and procedures as required under the Safeguards Rule. According to the risk alert, some firms simply:

restated the Safeguards Rule but did not include policies and procedures related to administrative, technical, and physical safeguards.

And, other policies

contained numerous blank spaces designed to be filled in by registrants.

Given the OCIE’s observations, purchasing sample privacy and data security policies and procedures, perhaps online, without more, would likely be inconsistent with Regulation S-P. Data security compliance is more than simply having a policy document. OCIE explained that written policies and procedures under Regulation S-P must be “reasonably designed to ensure the security and confidentiality of customer records and information, protect against any anticipated threats or hazards to the security or integrity of customer records and information, and protect against unauthorized access to or use of customer records or information that could result in substantial harm or inconvenience to any customer.” Thus, the general approach for advisors and broker-dealers should be to assess the threats and vulnerabilities to customer records and information, and then craft administrative, physical, and technical policies and procedures to address those threats and vulnerabilities.

OCIE also detailed data security practices that it found troubling under Regulation S-P. Examples include:

  • Personal devices – employees storing and maintaining customer information on their personal laptops, without policies and procedures addressing how to protect the information on those devices.
  • Electronic communications – the absence of policies designed to prevent employees from regularly sending unencrypted emails to customers containing PII.
  • Training and monitoring – a lack of training for employees on encryption, password protection, and transmission of PII through company-approved methods.
  • Outside vendors – advisors and broker-dealers maintaining policies that required outside vendors to contractually agree to keep customers’ PII confidential, but not following their own policies.
  • PII inventory – not maintaining an inventory of all systems on which PII is stored, leaving advisors and broker-dealers unaware of the categories of customer PII they maintain and limiting their ability to adequately safeguard customer information.
  • Incident response plans – plans that failed to address role assignments for implementing the plan, actions required to address a cybersecurity incident, and assessments of system vulnerabilities.
  • Departed employees – former employees of advisors and broker-dealers retaining access rights to restricted customer information after termination of employment.

Many of the observations noted above are common gaps in data security policies and procedures, particularly for small and medium-sized enterprises in any industry. For advisors and broker-dealers, compliance lapses could result in data breaches, enhanced scrutiny by the SEC and OCIE, and reputational harm. Thus, as OCIE suggests following its recent examinations, advisors and broker-dealers should review and update, as needed, their written policies and procedures to mitigate the issues identified by OCIE staff.

According to a recent decision from a federal district court in Illinois, Bose Corp. may monitor and collect information about the music and audio files consumers choose to play through its wireless products and transmit that information to third parties without the consumers’ knowledge. Such action does not violate the federal Wiretap Act or the Illinois Eavesdropping Statute. As such, the Court granted Bose’s motion to dismiss the plaintiff’s class action claims.

Bose manufactures and sells high-end wireless headphones and speakers. Consumers use the wireless headphones or speakers with their smartphones to listen to music streamed to their phone from music-streaming services. Users of certain models of Bose wireless headphones and speakers can access additional features of those products by downloading the Bose Connect App. Once downloaded, the App enables users to connect their smartphones to their Bose Wireless Products via a Bluetooth connection so that the user can access and control the products’ settings and features through the App. The App also displays the track title, artist, and album playing.

According to the Plaintiff, Bose designed the App to “(i) collect and record titles of the music and audio files consumers choose to play through their Bose wireless products and (ii) transmit such data along with other personal identifiers to a third-party data miner without consumers’ knowledge or consent.” Plaintiff alleged that Bose was not a party to the communication of the music information, but rather “intercepted” the contents of the communication between the user and the streaming services. Plaintiff further alleged that Bose did not have consent from either party to intercept the data.

For a customer using the App, Bose could access the data referenced above, link the music information to the particular Bose product’s serial number, identify the name and email address of the particular user, and in the process build a detailed profile of the customer and his or her music listening habits.

The statutes at issue in the case prohibit intentionally intercepting or disclosing an electronic communication unless the interception is by a party to the communication or one of the parties has given prior consent. The Court ruled that the complaint failed to sufficiently allege that Bose is not a party to the communication. The Court supported its analysis by noting that the complaint itself states that the Bose App is a participant in the communication of the information when it sends a user’s request for a song to the streaming service and in turn displays the provider’s song information on the App. Indeed, the Court noted that the display of such information is one of the primary functions of the App. Thus, the Court concluded, Bose “is a part of the listener to streaming service communication.” Although the Court observed that Plaintiff’s real grievance may be that Bose collects the information it receives and discloses it to third parties, that conduct falls outside both the federal Wiretap Act and the Illinois statute. As such, the Court dismissed these claims.

With the ever-increasing reliance on wireless technology, and with businesses combing for as much data as possible to target consumers, companies collecting personal data need to be aware of how that data is collected and what steps are being taken to protect it. While Bose escaped liability under the federal Wiretap Act and the Illinois Eavesdropping Statute, the future likely involves more consumers monitoring how their data is gathered, as well as a corresponding increase in regulation of the collection and protection of personal data, such as the California Consumer Privacy Act, set to take effect in 2020, and other state consumer privacy initiatives popping up across the country.

Small and midsized enterprises (SMEs) continue to be targeted by ransomware, phishing, and other cyberattacks, the consequences of which can be devastating. Those consequences include putting SMEs out of business, as was unfortunately the case for one small medical practice in Battle Creek, Michigan, as reported by HIPAAJournal.

The reality is that the effects of these attacks could be significantly mitigated with a bit of planning. Just maintaining good backups can go a long way. Of course, there are a number of other steps that SMEs can take to more comprehensively defend against these attacks.

The reports about the Michigan practice explain that the malware encrypted the system that maintained patient records and that the owners refused the attacker’s demands for payment. Refusing to pay these demands is not uncommon. The Federal Bureau of Investigation, which provides guidance on preventing ransomware attacks, does not encourage paying the ransom. In some cases, ransomware attack victims have recovered their data after paying the ransom; however, there is no guarantee of that in any particular case. In fact, in some cases, after receiving the requested ransom payment, attackers have been known to demand more money to unlock the data. Note also that payments of ransom to persons or entities on a sanctions list maintained by the U.S. Department of the Treasury’s Office of Foreign Assets Control (“OFAC”) could subject the payer to prosecution.

When the Battle Creek physicians did not succumb to the demands for payment, the attackers deleted all of the encrypted files. Reports indicate that no patient data had been accessed or exfiltrated (removed) from the practice’s systems; however, some patients may have lost all or a portion of their medical records. The practice is scheduled to close at the end of this month.

SMEs certainly can improve their defenses to prevent and minimize the effects of an attack; however, they also need to be prepared to respond to an attack when it happens. Maintaining a written incident response plan is critical. This is particularly true for health care providers and other HIPAA covered entities and business associates. The federal Office for Civil Rights has provided guidance for dealing with ransomware attacks. Notably, the guidance provides that when PHI (protected health information) is encrypted in such an attack, the incident is presumed to be a breach and notification is required unless the entity determines there is a low probability that the PHI has been compromised. The guidance adds that:

Although entities are required to consider the four factors listed above in conducting their risk assessments to determine whether there is a low probability of compromise of the ePHI, entities are encouraged to consider additional factors, as needed, to appropriately evaluate the risk that the PHI has been compromised. If, for example, there is high risk of unavailability of the data, or high risk to the integrity of the data, such additional factors may indicate compromise. In those cases, entities must provide notification to individuals without unreasonable delay, particularly given that any delay may impact healthcare service and patient safety.

Taking steps to prevent an attack is important, but all SMEs, including those in the healthcare sector, also need to be prepared to respond to these and similar kinds of attacks. Failure to take these steps could have substantial effects on the business, including causing the business to close.