Small Healthcare Provider Pays $31,000 for Failing to Have a Business Associate Agreement With File Storage Vendor

Disclosing protected health information (PHI) to a business associate without a compliant business associate agreement (BAA) is an improper disclosure under the HIPAA privacy and security regulations. According to the HHS Office for Civil Rights (OCR), an error like that can cost a small healthcare provider $31,000.

OCR recently announced a resolution agreement (pdf) with the Center for Children’s Digestive Health, S.C. (CCDH), a “small, for-profit health care provider with a pediatric subspecialty practice that operates its practice in seven clinic locations in Illinois.” According to the resolution agreement, OCR apparently learned of the missing BAA while investigating CCDH’s file storage vendor, FileFax, Inc., which stored CCDH’s PHI. Responsible for enforcing the privacy and security rules under HIPAA, OCR then commenced a compliance review of CCDH. It reported finding that neither CCDH nor FileFax could produce a signed BAA covering the periods during which CCDH had shared PHI with FileFax. Without admitting liability, CCDH agreed to resolve the matter by paying $31,000 and complying with a comprehensive Corrective Action Plan (CAP).

The Health Information Technology for Economic and Clinical Health (HITECH) Act made a number of changes to HIPAA, including to the rules concerning “business associates.” Among those changes were updates to the BAAs that the HIPAA rules require covered entities to maintain with their business associates. A covered entity’s business associates include third-party service providers such as claims administrators, accounting firms, law firms, consultants, and cloud and other data storage providers.

The regulations make clear that even though business associates are directly subject to many of the HIPAA privacy and security requirements, BAAs remain necessary for compliance. A starting point for BAA compliance is the set of sample provisions posted by OCR. However, there are other issues that parties to a BAA will want to address, such as specificity concerning the safeguards that should be in place, data breach coordination and response, indemnity, cybersecurity insurance, and agency status. More information about business associates and BAAs can be accessed here.

Covered entities also should remember that the HIPAA regulations are not the only rules that require written assurances from third-party service providers concerning security of personal information. A number of state laws (e.g., California, Massachusetts, Maryland, New Mexico, New York, Oregon) require businesses to have contracts with third-party service providers to safeguard personal information. Of course, even in the absence of a federal or state law, taking steps to ensure vendors secure the confidential information they are provided, such as through a detailed data security agreement, is a prudent practice.

Six Tips to Consider in Hiring Privacy and Data Security Experts

As privacy and data security issues become increasingly pervasive, companies must consider what qualifications to look for when hiring experts in these areas, and the role these experts play within the company is becoming increasingly vital. Moreover, unlike hiring for other positions, it is common for a CEO to lack the knowledge and background to adequately assess whether such an individual has the right expertise and, later on, how he or she is performing in the position. While there is no “one size fits all” checklist, the following are some factors to consider:

  1. Certification: Various certifications are available to privacy and data security experts. In evaluating whether a privacy or data security expert candidate has the necessary and appropriate knowledge and skills for such a position, companies should consider whether the candidate has received any relevant certifications. For example, professionals in these areas may have one or more certifications through the International Association of Privacy Professionals and/or the Information Systems Security Certifications Consortium, Inc. While not necessarily dispositive as to whether a candidate is qualified for a position, a certification in the areas of privacy and/or data security may evidence a candidate’s interest in, experience with, and maintenance of current knowledge about issues in these areas.
  2. Technical Knowledge and Practical Experience: A candidate with strong technical knowledge may be better positioned to identify potential threats to privacy and data security and to determine how best to prevent and address any such threats. Perhaps even more compelling than a candidate’s technical knowledge is his or her demonstrated practical experience in the application of such knowledge.
  3. Legal and Regulatory Knowledge: Another factor to consider is a candidate’s familiarity with and understanding of laws and regulations applicable to privacy and data security issues. A candidate who is well-versed in these areas may be more qualified to ensure compliance with pertinent laws and regulations in both domestic and international contexts.
  4. Policy: In addition to understanding applicable laws and regulations, privacy and data security experts should be able to understand, interpret, and prepare policies to best ensure compliance with such laws and regulations. Among other things, a strong candidate should possess knowledge about whether the company is legally permitted to use employees’ or customers’ personal information; whether specific information is subject to more stringent rules based on the type of data involved; and whether personal information, if used, might lead to public relations issues or other business-related concerns.
  5. Networking: Expert candidates who engage in networking and attend conferences or similar events could be more up-to-date on relevant issues and laws in the areas of privacy and data security. Candidates who have presented at conferences or written articles about relevant issues may have a heightened commitment to their field, knowledge of pertinent subject matter, and understanding of the nuances of issues that can or may arise, as well as how to address any such issues if they do in fact occur.
  6. Independence and Analytical Skills: An expert who does not demonstrate independence and analytical skills may not be a good fit for an organization. Companies should look to an expert candidate’s ability to work independently and thoroughly analyze issues pertaining to overall privacy and data security issues and to particular incidents.

While these examples are not an exhaustive list of factors organizations should consider, they provide some important considerations for companies when interviewing and hiring privacy and data security experts.

New Mexico Enacts Data Breach Notification Act

On April 6, 2017, New Mexico Governor Susana Martinez signed HB 15, making New Mexico the 48th state to enact a data breach notification law.  The law has an effective date of June 16, 2017 and follows the same general structure of many of the breach notification laws in other states.

Importantly, the definition of personal identifying information (PII) under New Mexico’s Data Breach Notification Act includes biometric data (“a record generated by automatic measurements of an identified individual’s fingerprints, voice print, iris or retina patterns, facial characteristics or hand geometry that is used to uniquely and durably authenticate an individual’s identity when the individual accesses a physical location, device, system or account.”).  We have seen a number of states (e.g. Illinois) implement or amend their own data breach notification laws to include elements such as biometric data.

The Data Breach Notification Act includes three key components: (i) Disposal of PII; (ii) Security Measures for Storage of PII; and (iii) Notification of a Security Breach.

Disposal of PII:

Under the Act, organizations are required to arrange for the proper disposal of records containing the PII of New Mexico residents when they are no longer reasonably needed for business purposes.  Proper disposal means shredding, erasing, or otherwise modifying the PII contained in the records to be unreadable or undecipherable.

Security Measures for Storage of PII:

Organizations must implement and maintain – and contractually require their service providers and vendors to implement and maintain – reasonable security procedures and practices to protect the PII they own or license from unauthorized access, destruction, use, modification, or disclosure.  Unlike California, New Mexico has not yet provided guidance on what constitutes reasonable security procedures and practices.  Nevertheless, all organizations should be implementing safeguards to protect the personal and company information they maintain.

Notification of a Security Breach:

In the event of a breach, the Act provides:

  • Notification must be provided to each New Mexico resident within forty-five (45) calendar days following discovery of the breach.
  • If the person maintains or possesses PII of a New Mexico resident (but is not the owner or licensee), notification must be provided to the owner or licensee of the PII within forty-five (45) calendar days following discovery of the breach.
  • Notification to each New Mexico resident must include:
    • The name and contact information of the notifying person;
    • A list of the types of PII reasonably believed to have been subject to the breach;
    • The date(s), or estimated date(s), of the breach;
    • A general description of the breach;
    • The toll-free numbers and addresses of the major consumer reporting agencies;
    • Advice directing the recipient to review account statements and credit reports to detect errors; and
    • Advice informing the recipient of their rights pursuant to the federal Fair Credit Reporting Act.
  • In the event of a breach affecting more than 1000 New Mexico residents, notification must be provided to the New Mexico Attorney General and the major consumer reporting agencies within forty-five (45) calendar days following discovery of the breach.  Such notice must include a copy of the notification sent to affected residents.
  • Notification may be delayed at the request of law enforcement or as necessary to determine the scope of the breach and restore the integrity, security, and confidentiality of the system.
  • A risk of harm trigger.  Specifically, notification is not required if, after an appropriate investigation, the person determines the breach “does not give rise to a significant risk of identity theft or fraud.”
  • The Act does not apply to a person subject to GLBA or HIPAA.

Under the Act, the New Mexico Attorney General may bring an action for injunctive relief and an award of damages for actual costs or losses, including consequential financial losses.  If a violation of the Act is knowing or reckless, a civil penalty may be imposed in the amount of the greater of $25,000 or, in the case of failed notification, $10 per instance of failed notification, up to a maximum of $150,000.
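
For illustration only, the timing and penalty mechanics described above reduce to simple arithmetic. The short sketch below (Python) assumes the Act as summarized in this post – a 45-calendar-day notification window, a $25,000 baseline penalty for a knowing or reckless violation, $10 per instance of failed notification, and a $150,000 cap – and uses hypothetical function names and example figures; the statute itself controls.

    from datetime import date, timedelta

    NOTIFICATION_WINDOW_DAYS = 45   # calendar days after discovery of the breach
    BASE_PENALTY = 25_000           # knowing or reckless violation
    PER_FAILED_NOTICE = 10          # per instance of failed notification
    PENALTY_CAP = 150_000           # statutory maximum

    def notification_deadline(discovery: date) -> date:
        """Last day to notify residents (and, for 1,000+ residents, the AG and CRAs)."""
        return discovery + timedelta(days=NOTIFICATION_WINDOW_DAYS)

    def civil_penalty(failed_notifications: int) -> int:
        """Greater of $25,000 or $10 per failed notification, capped at $150,000."""
        return min(max(BASE_PENALTY, PER_FAILED_NOTICE * failed_notifications), PENALTY_CAP)

    # Example: a breach discovered May 1, 2018 with 20,000 failed notifications
    print(notification_deadline(date(2018, 5, 1)))   # 2018-06-15
    print(civil_penalty(20_000))                     # 150000 (capped)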

Breach notification laws continue to evolve and it is imperative for organizations to be prepared to respond appropriately.  If you need assistance with a data incident or data breach, please contact our 24/7 Data Incident Response Team at 844-544-5296 or breach@jacksonlewis.com.

A New Frontier In Law Firm Cyber Risk: Client Class Actions

That an actual breach of client information could expose your law firm to legal and business risks is unsurprising.  The risks posed by a potential breach, however, may be something your firm has not yet carefully considered – but needs to.  As we discussed during our recent webinar, law firms face a variety of cybersecurity-related risks.  Firms have been targeted by cybercriminals with increased frequency in the past few years, and clients are growing concerned.  In at least one instance – and likely more to follow – this concern has resulted in litigation between firm and client over the adequacy of the firm’s cybersecurity safeguards.

In April 2016, clients of a Chicago-based firm, Johnson & Bell, filed a class action lawsuit alleging that the firm failed to adequately safeguard their information.  The case, which was subsequently moved to arbitration, is now back in the news.  On March 28, 2017, Johnson & Bell sued Edelson PC, the firm representing the client class, for defamation.  In its complaint, Johnson & Bell alleges that “[t]he Edelson defendants have engaged in numerous violations of their ethical duties, have illegally abused the process of the courts to further their own self-aggrandizement, and have engaged in a self-serving publicity tour spreading their lies and defamatory statements about J&B.”  Perhaps ominously, Edelson has announced that the Johnson & Bell case is just its opening salvo; it plans to assert similar claims on behalf of clients of 15 other firms.

The Johnson & Bell Complaint, which was made public last December, is notable for a number of reasons.

  • First, it homes in on several potential vulnerabilities to which firm systems may be subject, such as the high incidence of employees working remotely, or the fact that less well-protected systems, like those for timekeeping or email, can serve as gateways to systems holding more sensitive data.
  • Second, the Complaint identifies categories of sensitive data that many firms are likely to maintain, such as financial records, trade secrets, sensitive communications, and personal information.
  • Third, it contends that there’s an “industry standard” level of data security that any firm charging and collecting market-rate attorneys’ fees must provide. This is significant because there are indications that the “industry standard” (or “reasonable”) level of protection that the law imposes on businesses is likely to become more expansive and onerous in coming years.
  • And fourth, in addition to seeking damages and attorneys’ fees, the Johnson & Bell Plaintiffs are seeking to compel a security audit by an outside auditor. This audit would, among other things, reveal whether the firm has conducted a thorough risk assessment, and whether it has developed a sufficiently robust data security plan that includes written policies and procedures, employee training, and vendor management processes.

The prospect of client lawsuits provides a compelling reason to take prompt and committed action on the cybersecurity front – even if your firm has not yet experienced a breach. For guidance on how firms can prevent and respond to cybersecurity incidents, please check out our past post on this topic, and please tune in for our upcoming webinar on April 19.

Association of Corporate Counsel Develops Model Information Protection and Security Controls for Outside Vendors, Including Outside Counsel

The Association of Corporate Counsel (ACC), which represents over 42,000 in-house counsel across 85 countries, recently released its ACC Chief Legal Officers (CLO) 2017 Survey, which found that two-thirds of in-house legal leaders ranked data protection and information privacy as ‘very’ or ‘extremely’ important.  In response to this growing concern, the ACC recently released “first-of-its-kind” safety guidelines to help “in-house counsel as they set expectations with their outside vendors, including outside counsel.” Firms concerned about these guidelines should review their cybersecurity risk management policies, procedures, and practices.

The Controls

The Model Information Protection and Security Controls for Outside Counsel Possessing Company Confidential Information (“the Controls”) were developed in a joint effort between in-house counsel members of the ACC and several law firms specializing in data security-related issues. This joint effort signifies the importance of cohesion between in-house and outside counsel when handling sensitive corporate data.  “We are increasingly hearing from ACC members, at companies of all sizes, that cybersecurity is one of their chief concerns, and there is heightened risk involved when sharing sensitive data with your outside counsel,” said Amar Sarwal, ACC vice president and chief legal strategist.

The Controls address a broad range of data security related measures including: data breach reporting, data handling and encryption, physical security, employee background screening, information retention/return/destruction, and cyber liability insurance. Particular measures may be too burdensome under the circumstances, while the Controls as a whole may not be sufficient to satisfy applicable legal requirements such as the HIPAA privacy and security rules for business associates. Still, the Controls include a number of measures firms will have to consider carefully. For example, the Controls suggest that outside counsel be required to maintain

logical access controls designed to manage access to Company Confidential information and system functionality on a least privilege and need-to-know basis, including through the use of defined authority levels and job functions, unique IDs and passwords, [and] two-factor or stronger authentication for its employee remote access systems.

The Controls also would require outside counsel to be responsible for its subcontractors with access to confidential information, including by requiring those subcontractors to abide by the Controls. As for data breach notification, the Controls recommend a short time frame – under the Controls, outside counsel would be required to notify a client within 24 hours of discovering an actual or suspected incident.
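
As a rough illustration of what a “least privilege and need-to-know” control can look like in practice, the sketch below (Python) models a single access decision using defined job functions, unique user IDs, and a two-factor requirement for remote access. It is not drawn from the Controls themselves; the roles, matter names, and functions are hypothetical.

    from dataclasses import dataclass

    # Hypothetical mapping of job functions to permitted actions (defined authority levels)
    ROLE_PERMISSIONS = {
        "partner":   {"read", "write"},
        "associate": {"read", "write"},
        "billing":   {"read"},
    }

    @dataclass
    class User:
        user_id: str            # unique ID, never shared
        role: str               # job function
        assigned_matters: set   # need-to-know: only matters the user actually works on
        mfa_verified: bool      # second authentication factor completed this session

    def may_access(user: User, matter: str, action: str, remote: bool) -> bool:
        """Allow access only on a least-privilege, need-to-know basis."""
        if matter not in user.assigned_matters:                    # need-to-know
            return False
        if action not in ROLE_PERMISSIONS.get(user.role, set()):  # least privilege
            return False
        if remote and not user.mfa_verified:                       # two-factor for remote access
            return False
        return True

    # Example: a billing user working remotely may read, but not modify, an assigned matter
    u = User("jdoe-0042", "billing", {"acme-v-widgetco"}, mfa_verified=True)
    print(may_access(u, "acme-v-widgetco", "read", remote=True))    # True
    print(may_access(u, "acme-v-widgetco", "write", remote=True))   # False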

It is the hope of the ACC that the Controls will serve as a “best practice”, standardizing the protocols companies implement when interacting with third-party vendors who may have access to sensitive corporate data, and ensuring that adequate protections are in place to prevent and respond to a data breach. Law firms should not be surprised to see these Controls, in one form or another, included in litigation and other guidelines mandated by their corporate clients.

Virginia Responds to W-2 Phishing Scams with First of Its Kind Notification Requirement

As previously highlighted, in early February, the IRS issued a warning to all employers regarding the resurgence of a W-2 based cyber scam. Since the IRS warning, this type of scam has taken numerous victims.  On February 15, 2017, Virginia Wesleyan College released a notice stating that the 2016 W-2 tax form information of its employees had been sent that day to an unauthorized third party as a result of an email scam.  The information was sent by an employee who believed a spear-phishing email was a legitimate request for W-2 forms.

In light of the IRS warning, together with the Virginia Wesleyan College phishing scam, on March 13, 2017, Virginia Governor Terry McAuliffe approved a first-of-its-kind amendment to Virginia’s data breach notification statute. The new amendment requires employers and payroll service providers to notify the Virginia Office of the Attorney General of “unauthorized access and acquisition of unencrypted computerized data containing a taxpayer identification number in combination with the income tax withheld for an individual.”  Notably, notice is required even if the breach does not otherwise trigger the statute’s requirement to notify affected residents of a breach.

Notice to the Office of the Attorney General of a breach of computerized employee payroll data must include the affected employer or payroll service provider’s name, and federal employer identification number. Following receipt of notice, the Office of the Attorney General is then required to notify Virginia’s Department of Taxation of the breach.

This amendment to the Virginia statute becomes effective July 1, 2017, and in light of the growing concern over W-2 phishing scams, it would not be surprising if other states follow suit. Employers should advise their staff to exercise caution when responding to requests for W-2 forms and to verbally confirm that such requests are valid.

Will More States Follow New York’s Lead?

As you know if you regularly read this blog, the New York State DFS has finalized its “first-in-the-nation” cybersecurity rules, with an effective date of March 1, 2017. And their reach is quite large: DFS-supervised entities from insurers and banks to mortgage brokers and credit unions (and their third-party service providers) will have to begin assessing their cybersecurity risks and responding with detailed cybersecurity programs headed up by chief information security officers. Compliance deadlines under these regulations range from 180 days after the effective date of the regulations to two years after the effective date for third-party service providers. For more information on the development and requirements of the DFS cybersecurity regulations, see our articles: Getting Prepared for the New York Department of Financial Services’ Proposed Cyber Security Regulations, and New York Releases Revised Proposed Cyber Security Regulations.

Although the requirements are burdensome and the goals of the regulations lofty, a recent announcement from the New York Attorney General may make them more politically palatable. Last week Attorney General Eric Schneiderman announced a record number of data breach notices were received by his office in 2016, with breaches increasing 60% over 2015. In total, nearly 1,300 breaches were reported that exposed the personal records of nearly 1.6 million New Yorkers, though “mega-breaches” appeared to decline from the previous decade. Of the reported breaches, financial account information and Social Security numbers were the most frequently acquired information, together accounting for 81% of the breaches. Thus, although the DFS cybersecurity regulations were years in the making, their issuance on the heels of a year of record data breaches may yet prove prescient.

At the federal level, the tide seems to be turning the other way. The Trump administration’s “skinny budget” did include a $1.5 billion allocation to the Department of Homeland Security to fund various cybersecurity efforts, from critical infrastructure protection to information sharing between federal agencies and the private sector. But budget cuts to other agencies may paint a more accurate picture of the administration’s cybersecurity priorities. For example, President Trump did not renew President Obama’s pot of funds to be broadly distributed across the federal government for more widespread initiatives, such as moving to multi-factor authentication, updating federal agencies’ severely outdated computer systems, and hiring more qualified cybersecurity professionals into the federal workforce.

On top of Trump’s budget blueprint lacking this broader allocation of funds, the administration’s budget also proposes actual cuts to many agencies that house the personal information of U.S. citizens, including the SSA, ED, IRS, HUD, and HHS, among others. The budget proposal was released less than a week after a report from the White House’s OMB, which found that federal agencies suffered over 30,000 cyber incidents in 2016 and highlighted the need for departments across the federal government to strengthen their IT systems. Faced with potential budget cuts, a panel of federal agency Inspectors General testified before a House Appropriations subcommittee in early March that the cuts would force their agencies to make difficult decisions between modernizing and updating IT systems and maintaining or reducing the services they provide.

In Congress too, privacy priorities have shifted. Last week the Senate passed a resolution repealing broadband privacy rules issued by the FCC last year, using the Congressional Review Act. This followed an FCC vote earlier in March, led by the newly installed Chairman, to stall the implementation of the data security portion of those rules. Chairman Ajit Pai framed the votes as an effort to ensure that FCC rules are aligned with the approach to privacy regulation that the FTC has pursued, and added that the FCC is open to moving forward with a new framework. The House voted on Tuesday to pass the Senate’s resolution, which, if signed by President Trump, could leave a gap in federal privacy protections for internet consumers and in cybersecurity regulations for internet service providers and the entities that collect and store consumers’ information.

Interestingly, the day after the House voted to pass the Senate’s resolution repealing the FCC’s privacy protections, a bipartisan group of senators introduced a bill called the Main Street Cybersecurity Act, aimed at helping small businesses grapple with cybersecurity risks. In addition, Democratic legislators wrote a letter to the FCC on Tuesday urging the regulatory body to take action on the rising risks to cellphone cybersecurity. So there are some in the federal government who recognize that resources and regulation may be needed to protect consumers.

Several states, however, have already followed New York’s lead to bridge the federal privacy and cybersecurity gap, including California’s and Connecticut’s recently updated laws limiting government access to email and other online communications and Illinois’ consideration of a “right to know” bill to let consumers find out what information certain internet companies collect about them. Unlike the DFS cybersecurity regulations, these and other such state privacy initiatives in New Mexico, Nebraska and West Virginia focus on the privacy of individuals rather than the strength of data collectors’ IT systems. The laws nevertheless do create regulatory requirements for the data collectors, and regulations directly governing these entities’ cybersecurity practices and preparedness may not be far behind as the discussion of privacy intensifies. The Connecticut Department of Banking, for example, has said that it is open to adopting new provisions to regulate cybersecurity after a review of New York’s regulations.

With these concerns finding champions in a few statehouses across the country, residents of states without these privacy protections may soon start to pressure their own state legislators and regulators to follow suit. Since privacy and cybersecurity are apparently areas where legislators are willing to reach across the aisle to protect their constituents’ (and frankly their own) private data, entities that operate in multiple states or across state lines could face a tangled web of competing regulation as multiple states move to act where the federal government is not acting.

Jackson Lewis attorneys will continue to monitor these developments at both the federal and state levels, and are available to help your organization know what it has to comply with and when.

Companies May Soon Have a New Defense Against Cyber-Attacks

Co-author: Devin Rauchwerger 

The Active Cyber Defense Certainty Act is a new bill that is gaining positive bipartisan support and significant interest from business communities, lawmakers and academics. The proposed bill amends the Computer Fraud and Abuse Act, which does not provide adequate deterrence for criminal hacking. The new bill is aimed at helping businesses that are falling prey to cyber criminals defend themselves online by giving victims of computer intrusions unprecedented rights.

Previously, under the Computer Fraud and Abuse Act, a company was either required to enlist local law enforcement after the fact or risk facing prosecution for hacking back. The new bill affords a victim a number of defensive measures. Specifically, under the bill, a victim of a cyber-attack can access, without authorization, the attacker’s computer to gather information in order to establish attribution of the criminal activity, share that information with law enforcement, and stop unauthorized activity against the victim’s network. However, a victim cannot destroy information on the hacker’s computer, cause physical injury to another person, or create a threat to the public health or safety.

There are several concerns, however, about the proposed bill that have sparked debate. Giving companies the ability to hack back may not be the best approach to defend against cyber-attacks. Instead, it may be more effective and prudent for companies to engage the assistance of law enforcement, government agencies and internet service providers. Also, giving companies the ability to attack the computers of suspected hackers can raise national security concerns if, for instance, the hacker is a foreign country. There are also ethical issues to consider with hacking back, such as causing harm to innocent parties.

Also potentially problematic is that, under the bill, the protection afforded the victim disappears if the victim “destroys the information stored on a computer of another.” The statute does not currently differentiate between purposeful and accidental destruction of information. Companies may be wary of acting if they could lose the protection by accidentally destroying information in their attempt to stop the cyber-attack. The current language also suggests that a company cannot destroy whatever partial information the cyber-attacker illegally obtained from the victim.

Notably, there are also drafting issues with the bill. Several terms in the act are vague and open the door to a variety of problems. For example, the term “victim” is defined as “an entity that is a victim of a persistent unauthorized intrusion of the individual entity’s computer.” The term “persistent” is difficult to define: Is persistent measured in terms of the number of separate cyber-attacks that a company falls victim to or is it the duration of one particular cyber-attack that matters? Theoretically under the current language, a victim of a cyber-attack lasting only 30 seconds may not be afforded the protection of this Act.   For all these reasons, the bill will likely need significant revisions before it will pass.

While there are still several kinks that need to be worked out, this is clearly a positive step towards companies being able to defend themselves from cyber-attacks without facing legal repercussions.

At Last, the Final DFS Cybersecurity Regulations….

We wanted to keep you informed on the progress of the DFS cybersecurity regulations, as they complete their journey through the approval process. DFS has been working on the regulations since its 2013-2014 studies on cybersecurity risks to financial institutions. As reported in our article, Getting Prepared for the New York Department of Financial Services’ Proposed Cybersecurity Regulations, the original proposed regulations were published on September 28, 2016. The revised regulations were published on December 28, 2016 (see our article, New York Releases Revised Proposed Cybersecurity Regulations). A Notice of Adoption was published in the New York State Register last week announcing the adoption of the final regulations.

The Notice of Adoption noted that DFS received 60 comments on the revised regulations published on December 28, 2016 – but that it determined that most of the comments sought unnecessary changes to the scope, wording or meaning of the regulations. Several comments asked DFS to hold off on finalizing the regulations until the federal government had implemented regulations, or to make efforts to harmonize the DFS regulations with existing (or proposed) state and federal standards, but DFS rejected these suggestions, stating that “it is vitally important to establish regulatory minimum standards for cybersecurity practices to address challenges currently facing the New York financial services sector.”

So what changes were made? To recap, the December 28, 2016 regulations made several key changes (see our article, New York Releases Revised Proposed Cybersecurity Regulations). The final regulations include “nonsubstantive” changes to several sections, including:

  • A tweak to the definition of “penetration testing” in Section 500.01(h) (“unauthorized penetration testing” was changed to “penetration testing”);
  • The responsibilities for implementing a cybersecurity program in Section 500.02 were clarified by language that states the covered entity may adopt “the relevant and applicable provisions of” a cybersecurity program that its affiliates maintain;
  • The Penetration Testing and Vulnerability Assessment provisions of Section 500.05 were revised to delete some duplicative language relating to periodic penetration testing;
  • The required retention period for audit trails in Section 500.06 was decreased to three years (from five);
  • The events that must be reported to DFS under Section 500.17 were clarified (by eliminating some extraneous language) and language was added to clarify that the annual report would cover the prior calendar year;
  • The exemptions in Section 500.19 were revised to clarify that the thresholds in (a) are applied taking into account the operations of a covered entity and its affiliates, to clarify the scope of the exemptions (a new (d) was added to exempt Article 70 entities) and to clarify that notices of exemption must be filed within 30 days of making the determination that the entity is exempt.

The transitional periods for compliance with the final regulations have not changed in the final rule. Compliance is required within 1 year for the regulations relating to:

  • Annual reporting to the covered entity’s board
  • Penetration testing and vulnerability assessments
  • Risk assessments
  • Multi-factor authentication
  • Cybersecurity awareness training

Compliance is required within 18 months for the regulations relating to:

  • Audit trails
  • Application security
  • Limitations on data retention
  • Monitoring the activity of authorized users
  • Encryption of nonpublic information

There is a two-year period for compliance with the third-party service provider provisions; for all other provisions, entities should be in compliance within 180 days.
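
To put the transitional periods in concrete terms, the sketch below (Python) computes approximate compliance dates from the March 1, 2017 effective date noted above. The grouping of requirements into periods follows the lists in this post, and the helper function is our own; consult the regulation for the authoritative schedule.

    from datetime import date, timedelta

    EFFECTIVE_DATE = date(2017, 3, 1)   # effective date of the final DFS regulations

    def add_months(d: date, months: int) -> date:
        """Add whole calendar months to a first-of-the-month date."""
        month_index = d.month - 1 + months
        return date(d.year + month_index // 12, month_index % 12 + 1, d.day)

    deadlines = {
        "all other provisions (180 days)": EFFECTIVE_DATE + timedelta(days=180),
        "reporting, testing, risk assessments, MFA, training (1 year)": add_months(EFFECTIVE_DATE, 12),
        "audit trails, app security, retention, monitoring, encryption (18 months)": add_months(EFFECTIVE_DATE, 18),
        "third-party service provider provisions (2 years)": add_months(EFFECTIVE_DATE, 24),
    }

    for requirement, deadline in deadlines.items():
        print(f"{requirement}: comply by {deadline}")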

The final regulations continue to define a “Covered Entity” as “any Person operating under or required to operate under a license, registration, charter, certificate, permit, accreditation or similar authorization under the Banking Law, the Insurance Law, or the Financial Services Law.” Please see our earlier article, Getting Prepared for the New York Department of Financial Services’ Proposed Cybersecurity Regulations, for additional discussion of the coverage of the regulations, and our article New York Releases Revised Proposed Cybersecurity Regulations for a discussion of the exemptions under the revised regulations.

 

Employer Denied Access to Employee GPS Data

A federal district court in Indiana recently denied an employer’s motion to compel discovery of employee GPS data in defense of an action brought under the Fair Labor Standards Act (FLSA).   Crabtree v. Angie’s List, Inc.

Plaintiffs asserted claims for denial of overtime pay during a one-year period in which they worked as Senior Sales Representatives. Plaintiffs often used their personal electronic devices for work purposes. In order to defend against Plaintiffs’ claims that they worked 10-12 hours per day, the employer sought GPS and location data from the Plaintiffs’ phones to “construct a detailed and accurate timeline of when Plaintiffs were or were not working.” Plaintiffs objected, contending that the request implicated significant privacy concerns, especially where the GPS data would not necessarily accurately reflect whether Plaintiffs were working at any given time.

One of the reasons for the employer’s request was that the company kept track of employees’ time by recording when they logged into and out of the SalesForce software on their computers. However, that system did not necessarily reflect whether employees were actually working while logged in. Thus, the company sought the GPS data to help confirm the employees’ physical location.

In analyzing the employer’s motion to compel, the court expressed concern with the fact that disclosing GPS data from a personal device covering 24 hours per day for one year would result in tracking Plaintiffs’ movements well outside of their working time. The court stated that the employer “has not demonstrated that the GPS/location services data from Plaintiffs’ electronic devices would be more probative” than data already in the company’s possession. As such, the court ruled that the examination of Plaintiffs’ personal devices “is not proportional to the needs of the case.”

Based on this case, it is apparent that courts continue to be protective of personal data. As we have recently emphasized, it is important for employers to find methods by which they can accurately track employee work time without having to rely on data from the employee’s personal devices.
