In honor of Data Privacy Day, we provide the following “Top 10 for 2017.”  While the list is by no means exhaustive, it does provide some hot topics for organizations to consider in 2017.

1.  Phishing Attacks and Ransomware – Phishing, as the name implies, is the attempt, usually via email, to obtain sensitive or personal information by disguising oneself as a trustworthy source. The IRS reported a 400 percent surge in phishing and malware incidents in 2016 and dedicates a page on its website to phishing and online scams. A relatively simple, yet extremely effective, safeguard against such an attack is for organizations to advise employees (especially those in HR and Payroll) to be on the lookout for email requests, often appearing to come from a supervisor, for the personal information of all, or large groups of, the company’s employees. Before responding electronically, employees should verbally confirm such requests. This is especially true as organizations begin the W-2 process and are compiling large amounts of personal information.

Ransomware, in some cases delivered by a phishing attack, is a type of malware that hackers use to block access to your data until you pay a ransom, often in cryptocurrency such as Bitcoin. According to the FBI and the Department of Health and Human Services’ Office for Civil Rights, ransomware attacks have quadrupled, occurring at a rate of 4,000 per day. These agencies and the Federal Trade Commission have offered guidance to help curb these attacks. Among other things, the guidance urges organizations to be prepared. A great first step in blunting ransomware’s effectiveness is for your organization to maintain regular backups of its electronic systems.

2.  Safeguards Required to Protect Personal Information – State laws requiring businesses to protect personal information continue to emerge and expand. Joining states such as Florida, Massachusetts, Maryland, and Oregon, Illinois businesses must implement and maintain reasonable safeguards to protect personal information beginning January 1, 2017, and California clarified what it means to have reasonable safeguards. Similar rules go into effect in Connecticut beginning October 1, 2017, for health insurers, health care centers, pharmacy benefits managers, third-party administrators, utilization review companies, and other licensed health insurance businesses. And, during 2017 in New York, entities regulated by the state’s Department of Financial Services, such as banks, check cashers, credit unions, insurers, mortgage brokers and loan servicers, and some of their subcontractors, likely will become subject to a complex set of cybersecurity regulations many view as the first of their kind in the country.

3.  Big Data, Analytics, AI, Wearables, IoT – New technologies and devices continuously emerge, promising a myriad of societal, lifestyle, and workforce advancements and benefits, including increased productivity, talent recruiting and management enhancements, enhanced monitoring and tracking of human and other assets, and improved wellness tools. This will continue in 2017, and it will require an unprecedented collection of data, much of which will be personal data. Federal agencies, such as the FTC and EEOC, and others are taking note. While these advancements are undoubtedly valuable, the potential legal issues and risks should be considered and addressed prior to implementation or use.

4.  HIPAA Privacy and Security Enforcement – The Office for Civil Rights continues in enforcement mode in 2017, announcing two settlements so far in January 2017, totaling nearly $3 million.  In one action, the agency addressed for the first time the 60-day rule for providing notification of breaches of unsecured protected health information. In this case, the covered entity discovered the breach involving 863 patients on October 22, 2013, but did not notify OCR until January 31, 2014, about 41 days late. The settlement amount was $475,000, or approximately $11,500 per day. OCR Director Jocelyn Samuels reminded covered entities that they “need to have a clear policy and procedures in place to respond to the Breach Notification Rule’s timeliness requirements.”

5.  Breach Notification Laws – There are currently 47 states with breach notification laws, and they continue to be updated. For example, beginning in 2017, California businesses and agencies can no longer assume that notification is not required when personal information involved in the breach is encrypted. Illinois also changed its breach notification law, effective January 1, 2017, to, among other things, expand the definition of “personal information” to include medical information, health insurance information, and unique biometric data. These laws continue to evolve and be amended to address the extensive amount of sensitive data that is stored electronically.

6.  The Telephone Consumer Protection Act (TCPA) – 4,860 TCPA lawsuits were filed in 2016, according to statistics compiled by WebRecon LLC. This represents an almost 32% increase over 2015 and marks the 9th consecutive year in which the number of TCPA suits increased over the preceding year. With the SCOTUS decision in Campbell-Ewald making defense of class actions under the TCPA more difficult, we expect the number of TCPA suits to continue to grow in 2017. Many of these suits are not aimed at just large companies. Instead, they are often focused on small businesses that may unknowingly violate the TCPA, and they can result in potential damages in the hundreds of thousands, if not millions, of dollars. Understanding the FAQs for the TCPA and taking steps to comply is a great first step.

7.  The EU General Data Protection Regulation (GDPR) and the EU-U.S. Privacy Shield – GDPR has been adopted, and while it will not apply until May 25, 2018, there is a lot to do to get compliant. For example, GDPR adds a data breach notification requirement for data controllers; if notification is required, it must be provided to the data protection authority within 72 hours. Also, the EU-U.S. Privacy Shield data transfer agreement (“the Privacy Shield”) was reached to replace the EU-U.S. Safe Harbor agreement, which was invalidated on October 6, 2015, by the Court of Justice of the European Union’s (CJEU) ruling in Schrems v. Data Protection Commissioner. As of August 1, 2016, organizations based in the U.S. were able to self-certify their compliance with the Privacy Shield. Please review our detailed Q&A on some of the most common questions.

8.  President Trump – As we near the end of the President’s first full week in office, it remains to be seen just how the new administration will address privacy and cybersecurity issues. We considered some of these issues shortly after the election based on the President’s campaign which may provide some insight while we await more clarity from the White House.

9.  Social Media Investigations – Social media use continues to grow on a global scale and become more and more prevalent for organizations. This is especially true as generations who have lived their entire lives in a social media world represent an ever-expanding percentage of the workforce. User profiles and accounts are regularly sought and reviewed in litigation and in employment decisions. While public content may generally be viewed without issue, employers need to be aware of how they are accessing social media content, ensure they are doing so consistently with state laws protecting social media privacy, and avoid access to information they would rather not have.

10.  Be Vigilant and Watch for Changes – As more and more personal information and data is available and stored electronically, it is important for organizations to realize this data is extremely valuable, especially in the wrong hands. To this end, and as outlined above, organizations should be constantly assessing how best to secure their electronic systems. This is particularly true as the law and industry guidance are constantly changing and evolving in an effort to keep up with technological advancements.


It is not uncommon for employers to assign badges to their employees to grant access to certain locations on the employer’s property and parking garages. Many employees have them, use them, lose them and think little of them. But, badges made by Humanyze are so much more, raising concerns from privacy advocates and others. According to a New York Post article and earlier reports, these badges are designed to be worn by employees all day (and possibly night) and are capable of capturing a wide range of information about the employee, along with data from other systems of the employer. Through data mining and analytics, according to Humanyze’s chief executive Ben Waber:

you can actually get very detailed information on how people are communicating, how physiologically aroused people are, and can make predictions about how productive and happy they are at work

So, just what does this badge collect? According to the report and the company’s website, the badge is worn around the neck (kind of like name badges at association conferences) and captures sleep patterns, analyzes voice, monitors body language and fitness, tracks location, and measures levels of communication with colleagues. This and other data is combined with the employee’s email and phone activity to produce insights into productivity levels and the employee’s emotions, including stress and coping levels. According to the article, the badge “can even detect if an employee is drunk.” However, Mr. Waber points out that conversations are not recorded, only the tone of the conversation, and that individuals use the badges only after giving their consent.

This super badge certainly is not the first or only product working its way to market that engages in this kind of monitoring. For example, we reported on Microsoft’s Hololens, the company’s “augmented reality help system,” which is equipped with a “plurality” of sensors that gather a range of biometric parameters (heart rate, perspiration, etc.) along with other information to assist employees with certain tasks. There are others coming.

The badge, Hololens and other similar devices can be valuable tools for businesses to understand their workforces, increase productivity, improve safety, reduce human error and so on. However, beyond assessing whether the technology works, there are a range of legal and risk management issues employers need to consider when deciding to use these devices.

Privacy and data security considerations are among them, as these devices collect a range of health-related data, as well as information relating to the employee’s emotions, locations and interactions with others. However, as we have noted in earlier posts, these devices raise other questions as well: Does gathering biometric and other medical data constitute a disability-related inquiry under the Americans with Disabilities Act? Is constant monitoring going too far? Does the company have to bargain with the union? How will this affect morale? What obligations are there to secure the data collected, and who can have access to it? Employers should think through these and other issues carefully before introducing these kinds of tools and devices into the workplace.

The Federal Trade Commission (“FTC”) recently announced that FTC chairwoman Edith Ramirez will be stepping down effective February 10, 2017. Ms. Ramirez guided the agency through a period of significant enforcement activity, particularly in the areas of cybersecurity and consumer privacy. President-elect Donald Trump will now have the opportunity to fill three vacancies at the federal consumer protection agency.

At the start of 2016, the FTC announced its intention to increase its cybersecurity enforcement authority, and has done just that. The broad power allocated to the FTC under Section 5 of the FTC Act provides it the unique opportunity to regulate private actors, both in handling of data and responding to a data breach.

The FTC went after a wide range of data-security-related private offenders in 2016, including digital advertising companies (Turn Inc.), medical service providers (LabMD, Inc.), and telemarketing operations (Data Guri LLC). Just last week, the FTC filed a lawsuit against internet router manufacturer D-Link Corporation for failure to take proper steps to protect its devices, leaving thousands of customers vulnerable to hackers.

In addition to lawsuits, the FTC has demonstrated its cyber “watchdog” status in 2016 through issuance of: warnings against ransomware, guidelines on background screening, and a report discussing “big data”.

The FTC is headed by five Commissioners, nominated by the President and confirmed by the Senate, with one chosen by the President to be Chairperson. No more than three Commissioners can be of the same political party. Following Ms. Ramirez’s departure, only two Commissioners remain: Maureen K. Ohlhausen (R) (term expires Sept. 25, 2018) and Terrell McSweeny (D) (term expires Sept. 25, 2017). Thus, Mr. Trump will be able to appoint two members of his own party and one Democrat.

While President-elect Trump’s stance on cybersecurity is still unclear – Mr. Trump recently announced that former New York City Mayor Rudy Giuliani will head his cybersecurity advisory team – what is clear is that, given the number of FTC vacancies, Mr. Trump will have the opportunity to impact the direction of the FTC, including its regulation of cybersecurity and enforcement activity.

On January 3, 2017, the Obama Administration issued a memorandum to all executive departments and agencies setting forth a comprehensive policy for handling breaches of personally identifiable information (the “Memorandum”), replacing earlier guidance. Importantly, the Memorandum also affects federal agency contractors as well as grant recipients.

The Memorandum is not the first set of guidance to federal agencies and departments for reporting breaches of personally identifiable information (PII), but it establishes minimum standards going forward (agencies have to comply within 180 days from the date of the Memorandum). The Memorandum makes clear that it is not setting policy on information security or on protecting against malicious cyber activities and similar threats, topics related to the recent fiery debates concerning the 2016 election results and Russian influence.

The Memorandum sets out a detailed breach response policy covering topics such as preparedness, establishing a response plan, assessing incident risk, mitigation, and notification. For organizations that have not created a comprehensive breach response plan, the Memorandum could be a helpful resource, even for those not subject to it. But it should not be the only resource.

Below are some observations and distinctions worth noting.

  • PII definition. Unlike most state breach notification laws, the Memorandum defines PII broadly: information that can be used to distinguish or trace an individual’s identity, either alone or when combined with other information that is linked or linkable to a specific individual. So, for example, the notification obligation for a federal contractor will not just apply if Social Security numbers or credit card numbers have been compromised.
  • Breach definition. Breaches are not limited to phishing attacks, hackings or similar intrusions. They include lost physical documents, sending an email to the wrong person, or inadvertently posting PII on a public website.
  • Training. Breach response training must be provided to individuals before they have access to federal PII. That training should advise the individuals not to wait for confirmation of a breach before reporting to the agency. A belief (or hope) that one will find that lost mobile device should not delay reporting.
  • Required provisions in federal contracts. Federal contractors that collect or maintain federal PII or use or operate an information system for a federal agency must be subject to certain requirements by contract. The Memorandum requires agencies to update their contracts with contractors to ensure the contracts contain certain provisions, such as requiring contractors to (i) encrypt PII in accordance with OMB Circular A-130, (ii) train employees, (iii) report suspected or confirmed breaches, (iv) be able to determine what PII was or could have been accessed and by whom, and identify initial attack vectors, and (v) allow for inspection and forensic analysis. Because agencies must ensure these provisions are uniform and consistent in all contracts, negotiation will be difficult. The Federal Acquisition Regulatory Council is directed to work with the Office of Management and Budget to promptly develop appropriate contract clauses and regulatory coverage to address these requirements.
  • Risk of harm analysis. Agencies will need to go through a complex risk of harm analysis to determine the appropriate breach response. Notably, encryption of PII is not an automatic exception to notification.
  • Notification. The rules for timing and content of breach notification are similar to those in many of the state breach notification laws. The Memorandum also advises agencies to anticipate undeliverable mail and to have procedures for secondary notification, something not clearly expressed in most state notification laws. The Memorandum also suggests website FAQs, which can be more easily updated and tailored. Agency heads have ultimate responsibility for deciding whether to notify. They can consider the risks of over-notification and should try to provide a single notice to cover multiple notification requirements. They also can require contractors to provide notification following contractor breaches.
  • Tabletop Exercises. The Memorandum makes clear that testing breach response plans is essential and expressly requires that tabletop exercises be conducted at least annually.

Federal contractors and federal grant recipients that have access to federal PII will need to revisit (or develop) their own breach response plans to ensure they comply with the Memorandum, as well as the requirements of the applicable federal agency or department which can be more stringent. Of course, those plans must also incorporate other breach response obligations the organizations may have, whether those obligations flow from other federal laws (e.g., HIPAA), state laws, or contracts with other entities. Putting aside presidential politics, cybersecurity threats are growing and increased regulation, enforcement and litigation exposure is likely.

The Federal Trade Commission (“FTC”) has entered into a Consent Order to resolve a complaint brought against a digital advertising company, Turn Inc. Turn provided advertisers with the ability to engage in targeted advertising by tracking consumers’ activities or characteristics to deliver ads tailored to each consumer’s interests. The FTC alleged that Turn violated federal law by falsely representing to consumers the extent to which consumers could restrict the company’s tracking of their activities and the extent to which Turn’s opt-out applied to mobile app advertising.

According to the FTC Complaint, Turn misrepresented that consumers could prevent Turn’s tracking by blocking or limiting cookies. The FTC claimed that even if a consumer deleted cookies or reset their device, Turn would nonetheless be able to recognize the users by cross-referencing other data to which it had access.

The proposed Consent Order requires, among other things, that Turn: 1) cease misrepresentations regarding what consumer information it collects and/or shares; 2) create an opt-out option that limits tracking by Turn; 3) post a “clear and conspicuous hyperlink” on its website that will take consumers to another page explaining what information Turn collects and uses for targeted advertising; 4) describe on its website the technologies and methods it uses for targeted advertising; and 5) retain documents relating to compliance for five years. The Consent Order will become final after a 30-day public comment period. See the analysis of the FTC’s Consent Order.

The Consent Order demonstrates the significant and ongoing focus by the FTC on the accuracy of disclosures and statements regarding consumer information. This includes disclosures and statements made in website privacy statements and terms of use. Companies are advised to review their communications with customers and potential customers to be sure those communications are aligned with the companies’ practices and procedures. Such an assessment would help to reduce the possibility of an FTC complaint.

As we’ve noted previously, President-elect Trump’s campaign was light on details about his plans to address cybersecurity. However, his announcement yesterday that Thomas P. Bossert will serve as his assistant for homeland security and counterterrorism, a position equal in status to national security advisor according to the transition team, may offer greater insight into the President-elect’s intentions and plans for cybersecurity and related issues.

Mr. Bossert, who served as a top homeland security advisor to President George W. Bush, and who is currently the president of a risk management consulting firm that provides services to companies and governments, noted in the statement announcing his appointment:

We must work toward cyberdoctrine that reflects the wisdom of free markets, private competition and the important but limited role of government in establishing and enforcing the rule of law, honoring the rights of personal property, the benefits of free and fair trade, and the fundamental principles of liberty.

Mr. Bossert’s statement – in particular the portion regarding the “limited role of government” – suggests that the Trump Administration may be slow to pursue new federal cybersecurity statutes and regulations, and that it may give federal agencies, such as the FTC, FBI, and DHS, shorter leashes to enforce existing cybersecurity laws. This statement is consistent with Mr. Bossert’s past advocacy of utilizing a free market approach to cyber insurance, instead of a government-backed program.

That said, given the prominent role cybersecurity issues have played in the lead-up to and wake of the presidential election, and the increased incidence in recent years of cyberattacks against high-profile businesses and government entities, the Trump Administration could face enormous political pressure to take action on the cybersecurity front. One way Mr. Trump may respond to that pressure is by investing heavily in measures designed to protect public and private organizations in the U.S., including private businesses, from cyber conduct perpetrated by foreign actors. Mr. Bossert, who has warned that businesses “don’t have enough money to compete with a motivated Chinese intelligence community data collection apparatus that can spend billions when [businesses] can only spend millions,” would likely agree with such an approach. The business community should bear in mind, though, that an effective plan for disrupting international interference with U.S. business affairs will likely require some degree of domestic regulation.

Additionally, it is worth noting state and local governments have not waited for the federal government to act, and have legislated in a number of areas concerning cybersecurity. Examples include stringent regulations in California and Massachusetts designed to safeguard information systems and personal data. More recently, New York State is poised to finalize new, stringent cybersecurity regulations, potentially prompting other states to do the same. Indeed, other states and cities have already signaled their intent to pursue activist immigration and climate change agendas in response to what they believe the Trump Administration’s agenda will be.

We will keep you posted as Mr. Trump’s cybersecurity policies, and state and local responses thereto, come into clearer view.

The New York State Assembly Committee on Banks held a public hearing on December 19, 2016, receiving testimony about both the benefits and challenges of a recently proposed regulation to address the growing threat posed by cyber-attacks on banks, insurance companies and most other entities which are regulated by the Department of Financial Services (DFS). The proposed regulation was initially published by DFS on September 28, 2016 and since that time has been subject to a public comment period before final issuance.

The proposed regulation, if adopted, is likely to require most DFS-regulated organizations to establish a cybersecurity program, including the adoption of policies and procedures, the reporting to DFS of all successful and unsuccessful cybersecurity attacks, the appointment of a chief information security officer to oversee cybersecurity plans, and the inclusion of certain required provisions in third-party service provider agreements. We have outlined the proposed regulation in more detail here.

Representatives from community banking and other relatively small DFS-regulated entities testified during the hearing that the proposed regulation is a “one size fits all” solution that is too onerous for small to mid-sized entities, fails to coordinate with existing federal cyber requirements, and seeks to focus on a national security threat that should be addressed exclusively at the federal level. They also noted that the reporting requirements under the proposed regulation are particularly onerous in that reporting would be required for both successful and unsuccessful cybersecurity attacks, contributing to additional regulatory compliance costs that will be passed on to the consumer, resulting in higher consumer prices and possibly reduced consumer choice in some markets. Other witnesses claimed the proposed regulation does not go far enough, calling for more comprehensive and prescriptive requirements. DFS did not testify at the hearing.

Meanwhile, DFS has indicated informally that it intends to publish a revised regulation in the coming weeks, and that, in so proceeding, it will among other things extend the proposed regulation’s January 1, 2017 effective date. DFS has not signaled – either informally or formally – what other changes it intends to make in the revised regulation. It is possible the testimony from the public hearing could influence some of those changes.

We will report on this blog once DFS publishes its revised regulation. We continue to urge DFS-regulated companies to carefully review their current programs, policies, and procedures to understand their current cyber footing and evaluate what action, if any, they will need to take once the revised regulation is adopted.

We know that data analytics is being used to influence a wide range of things such as the pair of shoes one might want to buy or what news is “trending” on Facebook. Similar tools are being applied to employer-sponsored group health plans. According to a recent HealthcareITnews article, vendors such as Advanced Plan for Health (APH) are using predictive modeling functionality to support population health management. The ability to better anticipate and manage plan costs while shaping plan design to meet the needs of plan participants likely will be very appealing to plan sponsors, but employers should think through implementation carefully.

According to the article, these products (APH calls its product “Poindexter”) can make predictions about when certain health events are likely to occur (such as an ER visit), or forecast the nature of the services to be provided (such as the length of the participant’s hospital stay). We will leave to the data scientists to describe how this sausage is actually made, but here is how it is summarized in the article:

Currently, the Poindexter engine calculates care gaps and predicts the likelihood of hospital admissions, as well as readmissions, 6 to 12 months in advance for any given patient population — typically covered lives in a self-insured employer’s health plan. The tool also examines data from claims, pharmacy and clinical sources, benchmarking against real-world health data adjusted for comparable demographics, geography and industry of the employer.

Poindexter assigns risk scores to individuals within that population – identifying people whose health profile suggests elevated risk. With this information, case managers can improve outcomes and lower costs when they help patients avoid catastrophic events by improving their health through timely interventions.

One thing seems clear about this process – there’s a lot of data, a lot of very sensitive data, involved that is coming from a number of different sources. Certainly, data privacy and security compliance (yes, this means HIPAA) must be taken into account by employers when considering whether and how to apply these analytical tools to their group health plans. Employer-sponsored wellness programs have raised similar issues, as participants often must tender personal health information about themselves to take advantage of incentives under those programs.

Speaking of wellness programs, if analytics can predict and help employers better design their health plans, couldn’t the technology also be used to help prevent or put off more adverse and expensive health events? That is, in the course of “population health management,” would it be unreasonable to expect that a health plan that can reasonably anticipate or predict a significant health event would take some steps to try to prevent it from happening? Coupling analytics with traditional wellness programs, incentives perhaps could be more targeted to better steer participants toward healthier behaviors or to get care sooner and less expensively.

In the course of administering benefit plans with features like these, keeping protected health information anonymous may be easier said than done. Additionally, providing inducements can raise issues under HIPAA, the ACA, and the Equal Employment Opportunity Commission’s ADA and GINA regulations, which also have confidentiality protections. So, as technologies like analytics emerge to power employee benefit plans, particularly health plans, they need to be run through the array of law and regulations that apply to those plans.

A motion to dismiss has been filed in a California case brought by a New York woman who claims that the National Basketball Association’s Golden State Warriors violated the Electronic Communications Privacy Act (the “Wiretap Act”), 18 U.S.C. § 2510, et seq., by distributing a mobile content app that invades users’ privacy by turning on a device’s microphone and eavesdropping on the audio it picks up. Satchell v. Sonic Notify Inc., et al., No. 16-cv-04961 (N.D. Cal.).

The app uses the phone’s microphone to track the user’s location by picking up on sonic beacons but fails to warn users that it is doing so and that it is picking up nearby conversations in the process.  The beacons then trigger the delivery of custom-tailored content, promotions, and advertisements directly to users’ smartphones.

The motion, filed by the Warriors and the company that operates the beacons, claims that Plaintiff has not alleged an injury in fact, as required by the Supreme Court’s recent decision in Spokeo v. Robins, 136 S. Ct. 1540 (2016).  According to defendants, Plaintiff’s sole allegation of injury is that there was wear and tear on her phone and that her phone lost battery power.

Defendants also assert that Plaintiff misunderstands how the app operates, stating that the beacon technology does not “record” or “intercept” anyone’s communications in that any such recordings remain on the user’s phone and are never transmitted beyond the device to any Defendant. Thus, Defendants could not have committed an illegal “intercept[ion]” within the meaning of the Wiretap Act, which requires an “acquisition of the contents” of an “oral communication.”

Plaintiff responded to the motion by arguing that Defendants misapply Spokeo.  Plaintiff contends that she alleges a substantive (rather than merely procedural) violation of the Wiretap Act, stating that the Wiretap Act guards against intangible harms that are firmly rooted in common-law privacy torts and protects substantive privacy interests that Congress explicitly sought to protect in enacting the Wiretap Act. Thus, taking the position that history and the judgment of Congress establish that the invasion of privacy Plaintiff suffered is a concrete injury sufficient to confer Article III standing, Plaintiff argues the Defendants’ motion should be denied.

We will continue to keep our eye on the ball in this case and report back once the court rules on Defendants’ motion.

A recent study at the University of Arkansas suggests that organizations should avoid doing too much for individuals affected by a data breach. That is, when organizations provide compensation to breach victims that exceeds the victims’ expectations it could backfire. Those victims may become suspicious, thinking the organization has something to hide, which could have an adverse impact on the victims’ willingness to continue doing business with the organization.

If you have gone through a data breach, then you know the anxiety organizations experience throughout the process. Among other things, they have to quickly secure their information systems, investigate how the incident happened, and coordinate with law enforcement and other agencies. But perhaps the biggest concern is what to do for the individuals affected by the breach beyond providing breach notification.

Except for California and Connecticut, which require that credit monitoring and related services be provided following breaches involving certain personal information, most state data breach notification statutes require only that affected persons be given notice of the breach. Yet, when considering their breach response, many organizations think about what to do for affected persons regardless of state law requirements. In many cases, companies wind up offering credit monitoring and related remediation services, but some companies also will provide compensation of some kind.

The study found, however, that when compensation (e.g., gifts, discounts, free memberships, etc.) exceeds what the affected persons expected would be provided, those persons are more likely to become suspicious, rather than appreciative. If affected persons are suspicious they may not only be less likely to associate with the organization or continue to buy its products or services, they may be more likely to inquire more deeply about the incident or take legal action.

When considering breach response strategies, therefore, organizations should think more carefully about the kinds of benefits or compensation to offer to persons affected by the breach. We have emphasized here many times the importance of developing a breach response plan and practicing that plan. That process should include thinking through different remediation strategies, including what, if any, credit monitoring services or compensation the organization would be prepared to offer in the event of a breach. A rash decision to provide robust compensation to affected persons, made in the heat of an actual breach, could be the wrong one, according to the study.