Connecticut Supreme Court: Health Care Providers Can Be Sued for Unauthorized Disclosures of Confidential Information

Physician practices and other health care providers respond to numerous requests for confidential patient information from patients and others. Mistakes made by employees fulfilling such requests for medical records or making similar disclosures can expose the practice to civil litigation. A recent decision by the Connecticut Supreme Court (Byrne v. Avery Center for Obstetrics and Gynecology) confirmed a patient’s common law right to sue in these situations, putting health care providers in Connecticut at greater risk of being sued if they are not careful in handling confidential patient information.

The Connecticut Supreme Court’s decision, released on January 16, 2018, held in short that the physician-patient relationship creates a common law duty of confidentiality, and that patients have a common law right to sue for breaches of that duty. So, while it is true that the privacy rules under the Health Insurance Portability and Accountability Act (HIPAA) do not provide patients a private right of action, health care providers in Connecticut and a significant number of other states can be sued for unauthorized disclosures of confidential patient information.

In 2014, we reported on an earlier appeal in this same case, referencing the challenges healthcare providers have with responding to attorney requests for information and subpoenas. The underlying facts are that the patient (plaintiff) advised the provider (defendant) not to disclose her protected health information to her significant other. However, when the provider received a subpoena in connection with a paternity suit that was sent on behalf of the significant other seeking the patient’s medical file, the provider “did not alert the plaintiff of the subpoena, file a motion to quash it or appear in court. Rather, the defendant mailed a copy of the plaintiff’s medical file to the court.” In the 2014 decision, the Court refused to rule on whether Connecticut’s common law recognizes a negligence cause of action arising from these facts. In its more recent decision, however, the Court ruled that such a cause of action is recognized under Connecticut law, observing from a decision in another state:

it is impossible to conceive of any countervailing benefits which would arise by according a physician the right to gossip about a patient’s health.

The Court also ruled that as it has become common practice for Connecticut health care providers to comply with HIPAA and its implementing regulations, the statute and those regulations may be used to “inform the standard of care applicable to such claims arising from allegations of negligence in the disclosure of patients’ medical records pursuant to a subpoena.”

As noted, this case should be a strong reminder to providers to be more careful when responding to requests for protected health information under HIPAA, at a minimum. Often documents seeking protected health information look official and threatening, but they may be nothing more than an attorney’s request for PHI, which without more generally will not justify disclosure under HIPAA. The fact that a private right of action does not exist under HIPAA is not the end of the inquiry. Providers have to consider the layers of other laws that potentially could provide a patient a remedy for a questionable disclosure of the patient’s medical records.

North Carolina AG Proposes Stronger Breach Notification and Personal Information Safeguard Requirements

Citing estimates that in 2017 “more than 5.3 million North Carolinians were … affected by a data breach,” Attorney General Josh Stein and Rep. Jason Saine announced on January 8 proposed legislation aimed at protecting state residents from becoming victims of identity theft. To do so, the “Act to Strengthen Identity Theft Protections” (see fact sheet on proposed law) would, among other things, build on the state’s existing data breach notification law and require businesses to adopt reasonable safeguards to protect the personal information of North Carolinians.

Specifically, the Act would:

  • Expand the definition of “breach.” The revised definition of “breach” would include situations involving the unauthorized access to or acquisition of an individual’s personal information. This change is intended in significant part to cover “ransomware” attacks and, notably, to remove from the breached organization the discretion to determine the risk of harm. A similar approach is taken in guidance from the federal Office for Civil Rights concerning ransomware and data breach response.
  • Shorten the notification period. Under the state’s current breach notification law, notice generally must be made without unreasonable delay, taking into account the legitimate needs of law enforcement, and consistent with any measures necessary to determine sufficient contact information and the scope of the breach and to restore the reasonable integrity, security, and confidentiality of the data system. The Act would require the breached entity to notify the affected consumer(s) and the Attorney General’s office within 15 days, making North Carolina’s notification deadline one of the shortest in the country. The purpose of this change is to give consumers more time to freeze their credit and take other preventative measures before identity theft occurs.
  • Impose “reasonable safeguard” requirements for a broader set of personal information. Businesses that own or license personal information would be required to implement and maintain reasonable security procedures and practices to protect the personal information from a security breach. This requirement follows other states such as California, Connecticut, Florida, and Massachusetts. Additionally, the Act would expand the definition of “protected information” to include medical information and insurance account numbers.
  • Require free credit monitoring. The Act would require five years of free credit monitoring to be provided to affected consumers for security breaches that occur at a consumer reporting agency. Thus, this requirement would not apply to all businesses subject to the law, just consumer reporting agencies that have a breach.
  • Strengthen penalty provisions. The Act would make clear that businesses that suffer a breach and are found to have failed to maintain reasonable security procedures will have committed a violation of the Unfair and Deceptive Trade Practices Act. In that case, when calculating penalties, each person affected by the breach would represent a separate and distinct violation of the law. If adopted, this provision should spur more organizations to take steps to maintain reasonable safeguards.

Individuals and commercial entities that conduct business in North Carolina and that own or license data in any form that includes personal information about North Carolinians should follow the progress of the Act, as well as developments in other relevant states concerning data protection requirements (See, e.g., update to Maryland’s breach notification law, effective January 1, 2018). However, even if the Act fails to become law, adopting and maintaining reasonable safeguards can help protect against a data breach which might be reportable in virtually all states, including North Carolina.

U.S. Employers with EU Employees Gearing Up for GDPR

With the continuing parade of high profile data security breaches, the concern U.S. organizations have about the security of their systems and data has been steadily growing. And rightly so. Almost every organization processes (collects, uses, stores, or transmits) individually identifiable data. Much of this data is personal data, including employee data, which brings heightened privacy and security responsibilities and obligations.

For certain entities, these responsibilities and obligations are about to increase significantly. On May 25, 2018, the EU General Data Protection Regulation (GDPR) goes into effect. This is a game changer for organizations subject to the GDPR’s jurisdiction, and not just because of its new data breach notification provision. The GDPR contains expanded provisions for data collection, retention, and access rights unlike those U.S. employers are used to, and these will create substantial challenges for U.S. employers processing EU employee data.

To effectively meet these challenges, U.S. employers need to take stock of the data they process concerning individuals relating to EU operations (and not just about employees, although that is our focus here). What categories of EU employee data are processed? Where does it come from? In what context and where is it processed and maintained? Who has access to it? Are the uses and disclosures being made of that information permitted? What rights do EU employees have with respect to that information? The answers to these questions are not always self-evident. Employee data may cover current, former, or prospective EU employees as well as interns and volunteers. It may come from assorted places and be processed in less traditional contexts. And, it may be processed in the cloud, the U.S., or elsewhere outside the EU.

Starting with the source of EU employee data, the U.S. employer should review its connections with the EU. Does it have an EU branch or office, a subsidiary or affiliate? An EU franchise, agent, or representative? Has it recently merged with or acquired an organization with EU locations or connections? Any one of these connections is a potential source of EU employee or comparable internal personal data, regardless of how small.

Next, how does the U.S. employer process its EU employee or internal personal data? This data can be processed in traditional contexts – HRIS, benefits, payroll, Active Directory or contact information, and recruitment or talent management. It can be processed in other contexts – Customer Relationship Management, software applications, IT maintenance and security review activity, surveillance images, remote log in, business-related travel and event attendance support, professional development, training and certification, and external facing websites posting annual reports or collecting job applications. Even if the U.S. employer outsources payroll, benefits administration, or HR, it may still process EU employee or internal personal data in other contexts.

For a specific example of employee data processing, consider the internal facing website or portal that facilitates business travel or conference registration. This service collects the EU employee’s personal data in the form of name, address, phone number, work title and work address. However, it may also collect the EU employee’s special hotel and dining accommodations needs. This additional information may reveal health, disability, or religious beliefs information about the EU employee, all of which are subject to heightened protections. In another example, the organization’s training portal may use video presentations featuring internal trainers. These videos contain employee personal data – the trainer’s photo and, perhaps, work contact information. Locating and identifying all forms of EU employee data processing is critical. However, knowing what actually constitutes EU employee personal data is key.

Identifying employee personal data in the context of the GDPR is challenging. The GDPR definition, especially when applied to an EU employee, can be expansive. And for U.S. employers, often surprising. EU employee personal data includes “any information relating to an identified or identifiable” EU employee. Identifiable simply means the employee can be “identified directly or indirectly… by reference to an identifier… or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural, or social identity of that natural person.” This may include name, address, driver’s license number, date of birth, passport number, vehicle registration plate number, phone number, photos, email address, id card, workplace or school, and financial account numbers. With respect to employees, it may also encompass – gender, personnel reports (including objective and subjective statements), recruitment data, job title and position, work address and phone number, salary information, health and sickness records, monitoring and appraisals, criminal records, rent, retirement or severance data, and online identifiers such as dynamic IP addresses, metadata, social media accounts and posts, cookie identifiers, radio frequency tags, location data, mobile device IDs, web traffic surveillance that identifies the machine and its user, and CCTV images.

‘Special categories’ of employee data – racial and ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, data concerning an employee’s health, sex life, or sexual orientation, and biometric and genetic data – require heightened levels of protection under the GDPR. Given the broad interpretation of personal data under the GDPR, a determination of what constitutes employee personal information is often based on relevant facts and circumstances.

May 2018 is approaching quickly. The GDPR may bring new and enhanced obligations for U.S. employers. Significant among these is employee consent to processing personal data. With this in mind, employers should begin evaluating their organizations through the lens of employee data collection and processing, keeping in mind applicable national laws.

Does the GDPR Apply to Your US-based Company?

If you’ve been following the headlines, you know that a day doesn’t pass without a reference to the “GDPR”. On May 25, 2018, the European Union (EU) General Data Protection Regulation (GDPR) will take effect, marking the most significant change to European data privacy and security in over 20 years. Most multinational companies, and of course EU-based companies, should be in the process of ensuring GDPR compliance by May 2018. But what if you are a US-based company with no direct operations in the EU? Do you think you are free of the GDPR’s reach? Think again!

In short, the GDPR aims to protect the “personal data” of EU citizens – including how the data is collected, stored, processed and destroyed. The meaning of “personal data” under the GDPR goes far beyond what you might expect considering how similar terms are defined in the U.S. Under the GDPR, “personal data” means information relating to an identified or identifiable natural person. A person can be identified from information such as name, ID number, location data, online identifier or other factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that person. This even includes IP addresses, cookie strings, social media posts, online contacts and mobile device IDs.

Territorial Scope

A major change made by the GDPR is the territorial scope of the new law. The GDPR replaces the 1995 EU Data Protection Directive which generally did not regulate businesses based outside the EU. However, now even if a US-based business has no employees or offices within the boundaries of the EU, the GDPR may still apply.

Under Article 3 of the GDPR, your company is subject to the new law if it processes the personal data of individuals who are in the EU, where the processing relates to the offering of goods or services to those individuals or the monitoring of behavior that takes place in the EU.

Thus, the GDPR can apply even if no financial transaction occurs. For example, if your organization is a US company with an Internet presence, selling or marketing products over the Web, or even merely offering a marketing survey globally, you may be subject to the GDPR. That said, general global marketing does not usually trigger the GDPR. If you use Google AdWords and a French resident stumbles upon your webpage, the GDPR likely would not apply to your company solely on that basis. If, however, your website pursues EU residents – accepting the currency of an EU country, using a domain suffix for an EU country, offering shipping services to an EU country, providing translation in the language of an EU country, or marketing in the language of an EU country – the GDPR will apply to your company. Likewise, if your company is engaged in monitoring the behavior of EU residents (e.g., tracking and collecting information about EU users to predict their online behavior), the GDPR likely will apply to your company.

US-based companies with no physical presence in the EU, but in industries such as e-commerce, logistics, software services, travel and hospitality with business in the EU should already be in the process of ensuring GDPR compliance. However, all US-based companies, especially those with a strong Internet presence, should assess whether their business activity falls within the territorial scope of the GDPR.

Consequences of Non-Compliance

The GDPR imposes significant fines for companies that fail to comply. Penalties and fines, calculated based on the company’s global annual turnover for the preceding financial year, can reach up to 4% of turnover or €20 million (whichever is greater) for the most serious infringements, and 2% or €10 million (whichever is greater) for less serious infringements. So, for example, if a company fails to report a breach to a data regulator within 72 hours, as required under Article 33 of the GDPR, it could pay a fine of the greater of 2% of its global turnover or €10 million.
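The two-tier fine structure described above reduces to a simple "greater of" calculation. A minimal sketch, for illustration only (the function name and figures merely reflect the tiers discussed here; actual fines are set case by case by the regulators and are capped, not fixed, at these amounts):

```python
# Illustrative sketch of the GDPR's two-tier maximum-fine ceiling:
# the ceiling is the GREATER of a fixed euro amount and a percentage
# of the company's global annual turnover for the preceding year.
def max_gdpr_fine(global_turnover_eur: float, severe: bool = True) -> float:
    """Return the theoretical maximum fine (in euros) for one infringement."""
    if severe:
        # Upper tier: up to 4% of turnover or €20 million, whichever is greater.
        pct, floor = 0.04, 20_000_000
    else:
        # Lower tier (e.g., an Article 33 notification failure):
        # up to 2% of turnover or €10 million, whichever is greater.
        pct, floor = 0.02, 10_000_000
    return max(pct * global_turnover_eur, floor)

# A company with €2 billion in global turnover that misses the 72-hour
# breach-notification window faces a ceiling of 2% of turnover (€40 million),
# since that exceeds the €10 million floor.
print(max_gdpr_fine(2_000_000_000, severe=False))  # 40000000.0
```

For small companies the fixed euro floor dominates; for large multinationals the percentage does, which is precisely why the turnover-based tier attracts so much attention.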

A report by Gartner predicted that more than 50% of companies within the scope of the GDPR will not be compliant by the end of 2018. Considering that one of the main objectives of the GDPR was to expand the territorial scope, companies based outside the EU should not be surprised to find that they are a particular target of data regulators.

Don’t let your company become next year’s headline! This article kicks off our GDPR series that will help your company navigate the key aspects of the regulation. Efforts toward compliance need to begin now.

Illinois Court of Appeals Holds BIPA Plaintiffs Must Allege Some Actual Harm

In a ruling that may have significant impact on the recent wave of biometric privacy suits, an Illinois state appeals court held that plaintiffs must claim actual harm to be considered an “aggrieved person” covered by Illinois’ Biometric Information Privacy Act (BIPA), in a dispute arising from the alleged unlawful collection of fingerprints from a Six Flags season pass holder. Rosenbach v. Six Flags Entertainment Corp., 2017 IL App (2d) 170317 (Ill. App. Ct. Dec. 21, 2017).

The plaintiff, whose son’s fingerprint was collected by Six Flags after he purchased a season pass for one of its Great America amusement parks, filed suit on behalf of her son and similarly situated class members against Great America LLC and Six Flags Entertainment Corp. for allegedly violating Illinois’ BIPA by failing to obtain proper written consent or to disclose a plan for the collection, use, storage, or destruction of her son’s biometric information. The plaintiff further claimed that had she known of Six Flags’ collection of fingerprints, she would not have allowed her son to purchase a season pass.

Six Flags argued in a motion to dismiss that the BIPA allows only “aggrieved” individuals to sue for alleged violations, and that the plaintiff’s son and other similar plaintiffs who had not suffered actual harm had not met the necessary threshold to bring a claim.

After a lower court denied Six Flags’ motion to dismiss, a three-judge panel of the Illinois appellate court held that in order to meet the definition of “aggrieved person” under the BIPA, a plaintiff must demonstrate actual harm.

“If the Illinois legislature intended to allow for a private cause of action for every technical violation of the act, it could have omitted the word ‘aggrieved’ and stated that every violation was actionable,” the panel ruled. “A determination that a technical violation of the statute is actionable would render the word ‘aggrieved’ superfluous. Therefore, a plaintiff who alleges only a technical violation of the statute without alleging some injury or adverse effect is not an aggrieved person under … the act.”

Employment BIPA Class Actions

With over 30 employment class actions filed against employers in Illinois state court since July 2017 claiming BIPA violations for implementation of biometric technology, the Six Flags ruling represents a significant victory. We recently reported on a putative class action filed by employees against their employer, Oak Park Rehabilitation & Nursing Center LLC, alleging that mandatory daily biometric fingerprint scans violate employees’ privacy rights under the BIPA. Similar to the suit against Oak Park, the recent flood of employee class actions allege employer misuse of timekeeping systems that collect fingerprint scans. They claim the employer failed to provide proper notification and obtain written consent or neglected to institute a valid use policy.

Although the Six Flags decision represents a win for employers, plaintiffs will likely continue to attempt alternative legal arguments to claim that an individual is an “aggrieved person” under the BIPA. Accordingly, companies that want to implement technology that uses employee or customer biometric information (for timekeeping, physical security, validating transactions, or other purposes) need to be prepared.

It’s Tax Time – Alert Your HR and Payroll Teams About W2 Phishing Scams

Last February, the IRS issued a warning to all employers regarding the resurgence of a W-2 based cyber scam. The scam, which targets businesses during tax season, was also “spreading to other sectors, including school districts, tribal organizations and nonprofits.” In August 2017, the IRS renewed its warning to tax professionals and businesses as part of its “Don’t Take the Bait” campaign. In October, the IRS reminded the public about its procedures for reporting successful or failed attempts. With tax season quickly approaching, it’s worth revisiting how an employer can fall prey to this scam, how the scam can be avoided, and why a response plan is needed in case the employer becomes a victim.

The cyber-scam consists of an e-mail sent to an HR or Accounting department employee, presumably from an executive or “higher-up” within the organization. Both the TO and FROM e-mail addresses are legitimate internal addresses, as are the “sender” and recipient names. The fake e-mail asks the employee to forward the company’s W-2 forms, or related tax data, to the “sender.” This request aligns with the job responsibilities of both the employee and the supposed internal “sender.”

Despite its appearance, the e-mail is a fake. The scammer is “spoofing” the company executive’s identity. In other words, the cyber-criminal is assuming the executive’s identity and e-mail address for the purpose of sending what appears to be a legitimate request for sensitive company information. The unsuspecting employee relies on the accuracy of the sender e-mail address, coupled with the sender’s job title and role, and forwards the confidential W-2 information. The information goes to a hidden e-mail address controlled by the cyber-criminal.
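One common, though far from conclusive, technical tell of a spoofed message like the one described above is a Reply-To or Return-Path address that differs from the visible From address, so that replies and attachments silently go to the attacker. A minimal illustrative sketch using Python's standard library (the sample message and all addresses are invented; real defenses layer in SPF, DKIM, and DMARC checks, which this sketch does not attempt):

```python
# Flag messages whose visible From address differs from the Reply-To or
# Return-Path header -- a frequent sign of a spoofed BEC/W-2 phishing e-mail.
from email import message_from_string
from email.utils import parseaddr

def looks_spoofed(raw_message: str) -> bool:
    """Return True if the reply path does not match the apparent sender."""
    msg = message_from_string(raw_message)
    _, from_addr = parseaddr(msg.get("From", ""))
    for header in ("Reply-To", "Return-Path"):
        value = msg.get(header)
        if value:
            _, addr = parseaddr(value)
            # A mismatch means replies go somewhere other than the
            # apparent sender -- worth a manual verification step.
            if addr and addr.lower() != from_addr.lower():
                return True
    return False

# Invented example: the display name and From address look internal,
# but replies would be routed to an external attacker-controlled mailbox.
sample = (
    "From: CEO Jane Doe <jane.doe@example.com>\n"
    "Reply-To: jane.doe@attacker-mail.example\n"
    "To: payroll@example.com\n"
    "Subject: Need all W-2s today\n"
    "\n"
    "Please send me the W-2s for all employees."
)
print(looks_spoofed(sample))  # True
```

A check like this catches only the crudest spoofs; a well-executed BEC attack can forge matching headers, which is why the employee-awareness and out-of-band verification procedures discussed below remain the primary defense.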

If successful, the cyber-criminal obtains a trove of sensitive employee data that can include names, dates of birth, addresses, salary information, and social security numbers, as well as employer information needed for tax filings. The information is used to file fake individual tax returns (Form 1040) that generate fraudulent tax refunds, or it is sold on the dark web to identity thieves.

This cyber-scam is a form of “spear phishing” known as a business email compromise (BEC) attack, or CEO spoofing. Spear phishing attacks target a specific victim by using personal or organizational information to earn the victim’s trust. The cyber-criminal uses information such as personal and work e-mail addresses, job titles and responsibilities, names of friends and colleagues, and personal interests to lure the victim into providing sensitive or confidential information. Quite often, the scammer culls this information from social media, LinkedIn, and corporate websites. The method is both convincing and highly successful.

While an organization can use firewalls, web filters, malware scans or other security software to hinder spear phishing, experts agree the best defense is employee awareness. This includes ongoing security awareness training (see our white paper with best practices for setting up a training program) for all levels of employees, simulated phishing exercises, internal procedures for verifying transfers of sensitive information, and reduced posting of personal information on-line.

A W-2 e-mail phishing scam can have a devastating impact on a business and its employees. With tax season around the corner, expect to see more creative attempts to bait your personnel.

In the event your business is a victim of such an attack, it needs to be prepared to respond. This may require steps such as (i) being prepared to investigate the nature and scope of the attack, (ii) ensuring that the attackers are not still present in its systems, (iii) determining whether notification is required under applicable state law to individuals and state agencies, (iv) reporting to the IRS at phishing@irs.gov and the Internet Crime Complaint Center of the FBI, and (v) helping employees who may have questions about rectifying their tax returns.

Before Forms W-2 have to be generated (generally on or before January 31, 2018), businesses should be creating awareness in their organizations to prevent these scams from succeeding, while also making sure they are prepared to respond in the event a scam is successful.

Federal Court Permits Former Employees’ Data Breach Claims to Move Forward

A data breach occurs in which an outside individual obtains your company’s employees’ W-2 forms, including social security numbers, addresses, and salary information. As a result, your company notifies all affected employees, explains what occurred, and offers a complimentary two-year membership to a service that helps detect misuse of personal information. Is your company liable for negligence and breach of contract?

The answer may be, “yes,” according to a federal district court in Kentucky. Savidge v. Pharm-Save, Inc. (W.D. Ky. Dec. 1, 2017).  In Savidge, the plaintiffs alleged various state law claims that their former employer was liable due to the theft of their personally identifiable information (“PII”).  With regard to one plaintiff, the data breach resulted in a false tax return being filed on her behalf.

The company moved to dismiss the claims. In denying dismissal of the negligence claim, the court concluded that because Plaintiffs’ information was released to unauthorized individuals, the company breached its duty to “safeguard that information.”  Further, the court found there were sufficient allegations of injury based on Plaintiffs’ alleged purchase of credit monitoring and identity theft protection services as well as expenses incurred in responding to the fraudulent tax return.  Finally, the court held that Plaintiffs sufficiently alleged causation simply by alleging a nexus between the data breach and fraudulent activity that took place.

In addition, the court declined to dismiss Plaintiffs’ implied breach of contract claim. The complaint alleged that Plaintiffs provided their W-2 information to the company so the company could verify their identities, provide them with compensation, and maintain complete records for tax purposes. According to Plaintiffs, the company implicitly promised it would take adequate measures to protect their personal information and breached that obligation through the release of their PII. According to the court, these allegations were sufficient to support an inference that the company impliedly promised to protect its employees’ PII. Therefore, this claim also was permitted to proceed.

With a patchwork of federal laws governing various aspects of data breach liability, it is important for all those possessing PII to understand the extent of exposure under state law as well. Failure to take reasonable steps to protect such information is likely to result in liability.  The trend toward greater protection of PII is only growing, and with tax season nearly upon us it is important for employers to be aware of the kinds of schemes that could result in these kinds of breaches.

Senate Bill Introduced to Protect Personally Identifiable Information

Primarily motivated by several recent massive data breaches, Senate Democrats recently introduced a bill geared toward protecting Americans’ personal information against cyber attacks and to ensure timely notification and protection when data is breached.

The Consumer Privacy Protection Act of 2017 provides that companies that collect and hold data on at least 10,000 Americans would be required to implement “a comprehensive consumer privacy and data security program that includes administrative, technical, and physical safeguards appropriate to the size and complexity, and the nature and scope, of the activities of the covered entity.”

The legislation protects broad categories of data, including: Social Security, drivers’ license, and passport numbers, financial account numbers or debit/credit card numbers in combination with a security code or PIN, online usernames and passwords, unique biometric data such as fingerprints and retina or iris scans, physical and mental health data, geolocation data, and private digital photographs and videos.

The bill would also allow the United States Attorney General, state attorneys general, and the Federal Trade Commission to enforce alleged violations of the breach notification or security rules, which could subject companies to civil penalties of at least $16,500, depending on the number of records that were breached. The bill does not provide for a private right of action.

The legislation would require notification to be made “as expediently as possible and without unreasonable delay following the discovery by the covered entity of a security breach.”

The law would also require companies to provide “five years of appropriate identity theft prevention and mitigation services” at no cost to any individual who asks for it, and prohibits automatic enrollment in the identity theft prevention and mitigation services without their consent.

The text of the bill can be found here.

It is worth noting that shortly following the introduction of the Consumer Privacy Protection Act, three Democratic senators introduced the Data Security and Breach Notification Act, which would require companies to report data breaches within 30 days of becoming aware of a breach. An individual who conceals a data breach could face a penalty of up to five years in prison. This bill comes on the heels of Uber’s recent data breach announcement that hackers stole 57 million records in 2016, and that Uber paid the hackers $100,000 to destroy the documents.

We will continue to report on the status of these bills and other legislative proposals for heightened data security at the federal level, in light of the massive data breaches of late, as developments unfold.

Supreme Court Will Not Hear Ninth Circuit Decision Regarding Willful Violations of FCRA’s Disclosure Provision

On November 13, 2017, the U.S. Supreme Court declined to hear the appeal of one of 2017’s more significant Fair Credit Reporting Act (FCRA) opinions, Syed v. M-I, LLC. (9th Cir. Jan. 20, 2017).  In Syed, the Ninth Circuit Court of Appeals held that a background check disclosure which included a liability waiver violated the FCRA. This case was significant because the Ninth Circuit is the first federal appeals court to definitively state that the FCRA “unambiguously bars the inclusion of a liability waiver.” The court also notably held that the employer willfully violated the FCRA by including the liability waiver in the disclosure, finding that no reasonable interpretation of the statute would allow any language besides a disclosure and authorization.

By way of background, the FCRA prohibits an employer from procuring a “consumer report” (e.g., a background report, credit report, etc.) on an employee or applicant without first providing a clear and conspicuous disclosure in a document consisting solely of the disclosure. FCRA litigation in recent years has primarily involved whether employers’ disclosure forms improperly included a release of liability or other “extraneous information” in violation of the FCRA’s disclosure requirements.

In Syed, the Ninth Circuit agreed with the employee that the employer’s inclusion of waiver of liability language in the disclosure document willfully violated the FCRA. Analyzing the language of the statute and Congressional intent, the Ninth Circuit found that the FCRA’s disclosure requirements are not met where a document contains any language other than the disclosure and an authorization. The court also reviewed the Supreme Court’s 2016 decision in Spokeo, Inc. v. Robins, but found that the employee in Syed had standing to bring the claim because he had alleged more than a “bare procedural violation” of the FCRA. For more information regarding Spokeo and other cases applying the Supreme Court’s decision, please refer to our prior blog posts on the topic.

In its petition for certiorari to the Supreme Court, M-I argued that the Ninth Circuit incorrectly applied the Court’s holding in Spokeo when it found that Syed had standing to bring his claim under the FCRA. On November 13, the Supreme Court denied M-I’s petition without providing any explanation for the denial. As a result, the Ninth Circuit’s decision in Syed remains good law on both the issue of willfulness and the disclosure requirements under the FCRA.

The Syed decision serves as a warning to employers of the strict approach many courts have taken regarding the FCRA’s disclosure requirements. The Ninth Circuit’s determination that the inclusion of a liability waiver was a willful violation of the FCRA is of particular concern. Under the FCRA, willful violations can result in either actual damages or statutory damages ranging from $100 to $1,000 per violation, which can add up to significant potential liability in class action litigation. Courts also have discretion to award punitive damages against employers for willful violations.

Employers who obtain background checks from consumer reporting agencies must ensure their forms comply with the FCRA, as well as various state and local laws. Relying on disclosure and authorization forms provided by third-party vendors, including consumer reporting agencies, is not recommended, as such forms may violate the FCRA’s many technical provisions. Thus, employers should review their hiring forms with legal counsel to ensure they comply with the FCRA and applicable state and local laws.

We will continue to monitor and report on any further developments in the Syed case as well as any other developments related to the issues decided therein.

Elder Abuse: Are Granny Cams a Solution, a Compliance Burden, or Both?
In Minnesota, 97% of the 25,226 allegations of elder abuse (neglect, physical abuse, unexplained serious injuries, and thefts) in state-licensed senior facilities in 2016 were never investigated. This prompted Minnesota Governor Mark Dayton to announce plans last week to form a task force to find out why. As one might expect, Minnesota is not alone. A study published in 2011 found that an estimated 260,000 (1 in 13) older adults in New York had been victims of one form of abuse or another during a 12-month period between 2008 and 2009, with “a dramatic gap” between elder abuse events reported and the number of cases referred to formal elder abuse services. Clearly, states are struggling to protect a vulnerable and growing group of residents from abuse. Technologies such as hidden cameras may help to address the problem, but their use raises privacy, security, compliance, and other concerns.

With governmental agencies apparently lacking the resources to identify, investigate, and respond to mounting cases of elder abuse in the long-term care services industry, and the number of persons in need of long-term care services on the rise, this problem is likely to get worse before it gets better. According to a 2016 CDC report concerning users of long-term care services, more than 9 million people in the United States receive regulated long-term care services. These numbers are only expected to increase. The Family Caregiver Alliance reports that

by 2050, the number of individuals using paid long-term care services in any setting (e.g., at home, residential care such as assisted living, or skilled nursing facilities) will likely double from the 13 million using services in 2000, to 27 million people.

However, technologies such as hidden cameras are making it easier for families and others to step in and help protect their loved ones. In fact, some states are implementing measures to leverage these technologies to help address the problem of elder abuse. For example, New Jersey’s Attorney General recently expanded the “Safe Care Cam” program which lends cameras and memory cards to Garden State residents who suspect their loved ones may be victims of abuse by an in-home caregiver.

Commonly known as “granny cams,” these easy-to-hide devices, which can record video and sometimes audio, are being strategically placed in nursing homes, long-term care, and residential care facilities. For example, the “Charge Cam” is designed to look like, and actually function as, a plug used to charge smartphones. Once plugged in, it can record eight hours of video and sound. For family members concerned about the treatment of a nursing home resident, a “Charge Cam” or similar device could be a very helpful way of getting answers to their suspicions of abuse. However, for the unsuspecting nursing home or other residential or long-term care facility, as well as for the well-meaning family members, the use of these devices can pose a number of issues and potential risks. Here are just some questions that should be considered:

  • Is there a state law that specifically addresses “granny cams”? Note that at least five states (Illinois, New Mexico, Oklahoma, Texas, and Washington) have laws specifically addressing the use of cameras in this context. In Illinois, for example, the resident and the resident’s roommate must consent to the camera, and notice must be posted outside the resident’s room to alert those entering the room about the recording.
  • Is consent required from all of the parties to conversations that are recorded by the device?
  • Do the HIPAA privacy and security regulations apply to the video and audio recordings that contain individually identifiable health information of the resident or other residents whose information is captured in the video or audio recorded?
  • How do the features of the device, such as camera placement and zoom capabilities, affect the analysis of the issues raised above?
  • How can the validity of a recording be confirmed?
  • What effects will there be on employee recruiting and employee retention?
  • If the organization permits the device to be installed, what rights and obligations does it have with respect to the scope, content, security, preservation, and other aspects of the recording?

Just as body cameras for police are viewed by some as a way to help address concerns over police brutality allegations, some believe granny cams can serve as a deterrent to abuse of residents at long-term care and similar facilities. However, families and facilities have to consider these technologies carefully.
