Efforts to secure systems and data from a cyberattack often focus on measures such as multifactor authentication (MFA), endpoint monitoring solutions, antivirus protections, and role-based access management controls, and for good reason. But there is a basic principle of data protection that, when applied across an organization, can significantly reduce the impact of a data incident – the minimum necessary principle. A data breach reported late last year by the Rhode Island Public Transit Authority (RIPTA) highlights the importance of this relatively simple but effective tool.

In December 2021, RIPTA sent notification of a data breach to several thousand individuals who were not RIPTA employees. Reports of the incident prompted inquiries from a state Senator in Rhode Island, Louis P. DiPalma, and union officials who represented the affected individuals. According to Rhode Island’s Department of Administration (DOA), a forensic analysis conducted in connection with the incident indicates the affected files included health plan billing records pertaining to State of Rhode Island employees, not RIPTA employees. The DOA goes on to state that:

[s]tate employee data was incorrectly shared with RIPTA by an external third party who had responsibility for administering the state’s health plan billing.

An investigation is underway to confirm exactly what happened. Recent conversations between state officials and union representatives, as reported in the press, indicate that a RIPTA payroll clerk received a file containing state employee health plan data in August 2020 and stored it on the employee’s hard drive, where it remained until the cyberattack on RIPTA in August 2021. It is unclear why the employee received the information, from whom, or whether it was appropriate to maintain it.

Regardless, the “minimum necessary” principle, simply stated, requires that organizations take reasonable steps so that confidential and personal information is accessed, used, maintained, or disclosed only as needed to carry out the applicable business functions. Consider, for example, that retention policies are becoming increasingly important from a compliance perspective under laws such as the California Privacy Rights Act of 2020 (CPRA), which amends and supplements the California Consumer Privacy Act (CCPA); the EU General Data Protection Regulation (GDPR); and the Illinois Biometric Information Privacy Act (BIPA). This principle can be applied at multiple points in the operations of the organization, including without limitation:

  • When requesting information. Think about what elements of information the organization collects from customers, students, patients, vendors, employees, and others. Is it more information than is needed to carry out the purpose(s) for the collection? Can portals, forms, etc. be modified to limit the information collected?
  • When receiving information. Employees cannot always control the information they receive from parties outside the organization. But when they do, what steps or guidelines are in place to determine what is needed and what is not needed? For information that is not needed, what is the process for alerting the sender, if necessary, returning the data, and/or removing it from the systems?
  • When using information. Employees carry out many critical business functions that require the use of confidential and personal information. Do they always need all of it? Are there instances where less information can be sufficient for the processing of an important business function?
  • When storing information. The task at hand has been completed and the question becomes what information should be retained. The answer can be a complex web of legally mandated retention requirements, contractual obligations, business needs, and other considerations. But organizations should carefully analyze these issues and establish protocols for employees to follow. Note that under the CPRA, a covered business may not retain a consumer’s personal information for longer than is reasonably necessary for the stated purpose for which it was collected.
  • When responding to requests or disclosing information. Whether engaging in billing and collection activities, responding to an attorney demand letter, reporting information to the government, administering benefit plans for employees, or any number of other typical business functions, organizations make disclosures of confidential and personal information. Important questions to ask are (i) what data does the requesting party really need, (ii) what classifications of information are actually in the file being disclosed and are there limitations on the disclosure of that information, and (iii) whether the response or disclosure can have the same effect with less data.

In thinking about these questions, there may not be a clear right or wrong answer to whether the information should or should not have been collected, used, stored, or disclosed. However, from a risk management perspective, it is helpful to review business procedures, practices, operations, forms, etc. for ways to minimize exposure to confidential and personal information. Applying the minimum necessary principle can be an effective way of minimizing the organization’s data footprint so that, should it experience a security incident, less data may be compromised.

The use of smart dashcams and vehicle cameras, including those leveraging AI technology, may trigger the next wave of BIPA litigation, according to two cases filed in Illinois this week.

Enacted in 2008, the Illinois Biometric Information Privacy Act, 740 ILCS 14 et seq. (the “BIPA”), went largely unnoticed until a few years ago when a handful of cases sparked a flood of putative class action litigation over the collection, use, storage, and disclosure of biometric information. Many of these cases were filed by plaintiffs who alleged BIPA violations when time management devices called for them to swipe their finger to clock in or out of work. Use of those devices, many plaintiffs claim, resulted in the collection of their fingerprints without the corresponding notice, consent, and other measures required under the BIPA. The focus may be shifting to a new technology: AI-powered dashcams.

Employees who drive regularly to perform job functions raise several issues for their organizations – safety, productivity, loss prevention, and expense reimbursement, among others. For these reasons, some organizations deploy telematics and related technologies to better manage their fleets. One tool in this process is the vehicle camera, such as a dashcam, capable of monitoring (and recording) video and/or audio of the driver, passengers, and in some cases persons outside the vehicle. These devices also can track location and how a vehicle is being driven – hard acceleration, sharp turns, lane changes, etc. But it is the use of AI and machine learning technologies that is raising questions about whether biometric identifiers and/or information are being collected.

According to at least one of these recently filed complaints, the vehicle camera does not just take a traditional video recording of the driver. It uses AI and machine learning technologies to detect driver behavior. More specifically, product descriptions claim the intelligent cameras can identify if drivers are inattentive, distracted, or tired through facial mapping technology which scans the geometry of the face and analyzes the resulting data.

Under BIPA, a “biometric identifier” generally means “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry,” and “biometric information” means “any information, regardless of how it is captured, converted, stored, or shared, based on an individual’s biometric identifier used to identify an individual.”

It is unclear at this point whether these complaints have any merit; however, organizations that are using AI-powered vehicle cameras should be reviewing that technology carefully with their vendors to understand the nature and extent of the data being collected. For assistance with understanding the legal framework concerning biometric information, please see our Biometric Law Map.

With ransomware and other cyber threats top of mind for most in the c-suite these days, a question frequently raised is whether a particular organization is a target for hackers. Of course, nowadays, any organization is at risk of an attack, but the question is whether some organizations are targeted more than others. A recent Insurance Journal article discusses a paper published in September 2021 that identifies a factor that could elevate the risk of being targeted, a factor many in cyber might not have expected, “greenwashing.”

Around this time of year, many offering commentary on cybersecurity issues (including us!) postulate on what lies ahead for the year, trends to watch, and emerging risks. For example, Embroker Insurance Services published a comprehensive report in December 2021, outlining a wealth of cyberattack statistics and trends, including a view on the types of organizations most vulnerable to cyberattacks:

  • Banks and financial institutions
  • Healthcare institutions
  • Corporations
  • Higher education

It is not difficult to see why entities in these industries (and others) are thought of most frequently. They typically have thousands, sometimes millions of customers, many locations, hundreds of employees, and lots and lots of personal information. They maintain increasingly complex information systems amid an ever-expanding regulatory environment, sometimes without commensurate budgetary support.

However, according to the University of Delaware paper cited by the Insurance Journal, an organization’s “corporate social performance” or “CSP” can affect its likelihood of being subject to a cyberattack. Specifically, according to the paper, organizations that have CSP strengths outside of their core business with a less than stellar record in other areas are at increased risk of a data breach.

“The increased likelihood of breach for firms with seemingly disingenuous CSP records suggests that perceived ‘greenwashing’ efforts that attempt to mask poor social performance make firms attractive targets for security exploitation.”

An organization’s CSP, as measured by its environmental, social and corporate governance (ESG) rating, is an emerging metric for evaluating organizations, even if its impact on corporate financial performance (CFP) remains unclear.  For example, a proposed rule issued in October 2021 by the Department of Labor would help pave the way for increased consideration of ESG factors by plan fiduciaries when selecting investment options for retirement plan assets.

The greater attention to CSP and ESG shared by many, however, evidently may include a segment of people willing to take more extreme measures to achieve their goals. In 2008, according to reports, fires that severely damaged at least five luxury homes in a Seattle suburb were suspected to have been started by “ecoterrorists,” angry that developers marketed the subdivision as “built green.” This is not unlike the motive identified by the University of Delaware paper for launching cyberattacks against certain organizations – that is, stopping organizations from using ESG to appeal to customers without also making meaningful changes to core business practices.

Of course, it is not clear whether the paper has captured what motivates cyberattacks more often than not, or whether in fact organizations engaged in greenwashing are being targeted at a higher rate than others, if they are “targeted” at all. At the same time, it is certainly not unprecedented for individuals to take extreme measures to advance their desires for social, environmental, and other changes. Either way, organizations should be considering all potential risks and appropriately weighing them when developing their information security policies, incident response plans, and other safeguards for protecting systems and information.

Photo courtesy of Euronews.com

Over the past several years, if your organization experienced a cyberattack, such as ransomware or a diversion of funds due to a business email compromise (BEC), and you had cyber insurance, you likely were very thankful. However, if you are renewing that policy (or in the cyber insurance market for the first time), you are probably looking at much steeper rates, higher deductibles, and even co-insurance, compared to just a year or two ago. That is, assuming you can find a carrier to offer competitive terms at all, although there are some steps organizations can take to improve insurability.

What’s going on?

The short answer is what one might expect: claims paid under cyber insurance policies are significantly up, according to Marc Schein, CIC, CLCS, National Co-Chair of the Cyber Center of Excellence for Marsh McLennan Agency, who closely tracks cyber insurance trends. Mr. Schein identified the key drivers hardening the cyber insurance market: ransomware and business interruption.

  • Ransomware: According to FBI data, adjusted losses from ransomware matters tripled from 2019 to 2020. Further, according to an Allianz Global Corporate & Specialty (AGCS) cyber insights report, cited in Insurance Journal, the U.S. experienced a 62% increase in ransomware incidents during the first six months of 2021 and a 225% increase in ransom demands.
  • Business interruption: Business interruption costs following a ransomware attack more than doubled over the past year, increasing from $761,106 to $1.85 million in 2021, with down time averaging 23 days, according to the same AGCS report.

According to Fitch Ratings’ Cyber Report 2020, insurance direct written premiums for the property and casualty industry increased 22% last year to over $2.7 billion, reflecting the demand for cyber coverage. The industry statutory direct loss plus defense and cost containment (DCC) ratio for standalone cyber insurance rose sharply in 2020 to 73%, compared with an average of 42% for the previous five years (2015-2019). The average paid loss for a closed standalone cyber claim moved to $358,000 in 2020 from $145,000 in 2019.

These and other increases in claims and losses from cyberattacks have had a dramatic impact on cyber insurance.

  • Rate increases of 100% to 300% are not uncommon. According to Marsh’s November Cyber Market Report, the average U.S. price per million dollars of cyber coverage increased 174% for the 12-month period ending September 2021.
  • Capacity has decreased dramatically, with $10 million limits becoming challenging to secure.
  • Policy changes, such as increases in deductibles, retention, sublimits, and co-insurance on ransomware payments, are making cyber coverage look more like health insurance.

What can we do?

Perhaps the most concerning development for organizations in the cyber insurance market is the significantly increased scrutiny carriers are applying to an applicant’s insurability. The days of the three-question application process may be over. According to Mr. Schein, before looking to procure cyber coverage, an astute buyer should contemplate the cybersecurity controls underwriters now expect. Examples include:

  • Multi-factor authentication across the applicant’s systems including for email, remote access, vendor access, etc.
  • Adoption of a tested incident response plan.
  • Presence of an endpoint detection solution.
  • Security awareness training, including phishing training.
  • Removing end-of-life software.
  • Closed remote access ports, including remote desktop protocol (RDP).

This is consistent with Mr. Schein’s experience with organizations anxious to bolster information security controls in connection with the underwriting process for cyber insurance. The controls mentioned above are typically best practices underwriters are strongly encouraging which may also improve an organization’s compliance posture. Notably, they are not limited to technical IT fixes, but include broader administrative policies and practices, such as training and breach preparedness.

Indeed, an increasing number of states require businesses to implement “reasonable safeguards” to protect personal information. In New York, for example, the New York SHIELD Act requires businesses of all sizes to adopt administrative, physical, and technical safeguards to protect the personal information they maintain about New York residents. The statute does not require specific technical safeguards be maintained. The California Privacy Rights Act (CPRA) adds to the California Consumer Privacy Act (CCPA) an affirmative obligation to “implement reasonable security procedures and practices…to protect the personal information from unauthorized or illegal access, destruction, use, modification, or disclosure.” Considering what IT experts have been saying about its effectiveness, multifactor authentication in particular has been identified as a meaningful, albeit not foolproof, control to help prevent unauthorized access to information systems within the scope of privacy and security regulation.

Of course, there are no silver bullets, but such safeguards may dramatically reduce the chances of a cyberattack, and that is music to an underwriter’s ears. There will be claims, just fewer of them, and perhaps less damaging.

 

I wish to thank Marc Schein for his tireless commitment to educating on these issues and for his valuable contributions to this article.  

In a groundbreaking move, likely to have significant impact on employee hiring and HR tech, the New York City Council has passed a measure (“the NYC measure”) that bans the use of automated decision-making tools to (1) screen job candidates for employment, or (2) evaluate current employees for promotion, unless the tool has been subject to a “bias audit”, conducted not more than one year prior to the use of the tool.  The NYC measure will take effect January 2, 2023.

The NYC measure was passed amid growing concern about automated decision-making tools – which will also be regulated under the California Privacy Rights Act, set to take effect at the same time as the NYC measure. One such concern is that these tools may be embedded with unintended biases that result in outcomes that discriminate against individuals based on protected characteristics like race, age, religion, sex, and national origin.

The category of automated decision-making tools targeted by the NYC measure is “automated employment decision tools,” which the measure defines as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.”  Excluded from the measure’s scope are tools that do not automate, support, substantially assist or replace discretionary decision-making processes and that do not materially impact natural persons, such as, for instance, junk email filters, firewalls, antivirus software, calculators, spreadsheets, databases, data sets, or other compilations of data.

Employers that intend to utilize an employment decision tool must first conduct a bias audit and must publish a summary of the results of that audit on their websites.  They must also notify all NYC employees and/or job candidates that: (1) the tool will be used in connection with assessment or evaluation of their employment or candidacy and (2) specify the job qualifications and characteristics that the tool will use to make that assessment or evaluation.

Utilizing an automated employment decision tool without first conducting a compliant bias audit exposes employers to civil penalties of up to $500 on day one, followed by penalties of $500 to $1,500 every day thereafter.   Failure to properly notify candidates or employees about use of such tools constitutes a separate violation.

This is not the first legislation of its kind, but it is certainly the most expansive. In late 2019, Illinois passed the Artificial Intelligence Video Interview Act (“the AIVI Act”), HB2557, which imposes consent, transparency, and data destruction requirements on employers that implement AI technology during the job interview process. The AIVI Act, the first state law to regulate AI use in video interviews, took effect January 1, 2020. Likewise, in 2020, Maryland enacted a law that requires notice and consent prior to use of facial recognition technology during a job interview. And the Attorney General of Washington D.C. recently introduced a bill that addresses discrimination in automated decision-making tools generally. Similar legislation is likely to trend across other states as this technology continues to infiltrate hiring practices and other areas of business. Since as early as 2014, the EEOC has been taking notice of “big data” technologies and the potential that the use of such technology may violate existing employment laws such as Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act, the Americans with Disabilities Act, and the Genetic Information Nondiscrimination Act.

Only time will tell the impact the NYC measure and others of its kind will have on employment practices, but employers should tread carefully with AI usage in the workplace. Moreover, it will likely not be long before other states and localities enact similar legislation. Employers, regardless of jurisdiction, should be evaluating their hiring practices and procedures, particularly to ensure that they obtain appropriate written consent before using any technology that collects sensitive information about job applicants or employees, and that they have conducted all requisite privacy and bias impact assessments.

The CCPA has reached the two-year mark. This is a good time for businesses to review the success of their compliance programs, recalibrate for the CCPA’s third year, and gear up for the CPRA’s January 1, 2023 effective date.

Here are a few suggestions:

  1. Privacy Policies. The CCPA requires a business to update the information in its privacy policy or any California-specific description of consumers’ privacy rights at least once every twelve months. If your business has not already done so, now is a good time to review both online and offline data collection practices to ensure privacy policies accurately disclose, at a minimum, the categories of personal information (“PI”) collected in the preceding 12 months, the categories of PI sold in the preceding twelve months, and the categories of PI it disclosed for a business purpose in the last 12 months.

Given the challenges of the last few months, your business may be collecting PI beyond what it currently discloses in its privacy policies. For example, the business may need to update its privacy policies to disclose the collection and use of COVID-19 related screening information, biometric information, or PI collected as a result of remote work situations.

If your business needs to update its privacy policy to reflect additional data collection activities, it will likely need to update its “notice at collection”, including employee and job applicant privacy notices.

  2. Employee training. The CCPA requires that a business ensure all employees handling inquiries about consumer rights, the business’s privacy practices, or its compliance with the CCPA are informed of applicable CCPA requirements. Businesses will want to
  • review training programs to ensure they include appropriate CCPA related content;
  • determine whether employee handbooks and manuals have been updated accordingly; and,
  • document that relevant employees have received training.
  3. Reasonable Safeguards. The CCPA does not currently create an affirmative obligation to implement reasonable safeguards for protecting consumer PI; however, it provides a private right of action to consumers whose PI has been involved in a data breach resulting from the business’s failure to implement reasonable security safeguards. With this in mind, your business will want to review whether it has
  • performed an annual risk assessment to identify new or enhanced risks, threats, or vulnerabilities to its systems or the PI it collects or maintains;
  • reviewed and updated its written information security program and data retention schedule;
  • practiced its incident response plan; and
  • updated its vendor management program to address cyber-based risk.

CCPA compliance is an ongoing activity, and these action items are worthy of review at the two-year mark. However, further year-end review might also include

  • an assessment of the business’s website’s accessibility;
  • confirmation that service provider agreements have been amended to satisfy the CCPA; and
  • incorporation of relevant CCPA provisions in new service provider contracts.

Although the CCPA does not mandate implementing reasonable safeguards, this will change effective January 1, 2023. The CPRA, which amends the CCPA, creates an affirmative duty to do so. Businesses should use the next year to identify what constitutes reasonable safeguards for their data and systems, begin implementing those safeguards, update internal policies and procedures as necessary, and train staff.

The CPRA also amends the CCPA disclosure requirements to include information relating to the collection and use of “sensitive personal information”. In addition, California consumers will have the right to limit the business’s use of this information in certain circumstances, similar to the right to opt out of the sale of personal information. In order to comply, businesses may need to revisit and expand their data mapping to capture sensitive personal information.

These are just two examples that necessitate reviewing your business’s data protection program and setting in motion processes to prepare for the CPRA. We will continue to post on steps your business can take in anticipation of January 1, 2023.

The leaders of our Wage & Hour Practice, Justin Barnes, Jeffrey Brecher, and Eric Magnus, collaborated with us on this article.

According to reports, Kronos, the cloud-based HR management service provider, suffered a data incident involving ransomware affecting its information systems. Kronos communicated that it discovered the incident late on Saturday, December 11, 2021, when it “became aware of unusual activity impacting UKG solutions using Kronos Private Cloud.” Shortly after, Kronos issued a helpful Q&A for customers impacted by the incident. The company confirmed:

[T]his is a ransomware incident affecting the Kronos Private Cloud—the portion of our business where UKG Workforce Central, UKG TeleStaff, Healthcare Extensions, and Banking Scheduling Solutions are deployed. At this time, we are not aware of an impact to UKG Pro, UKG Ready, UKG Dimensions, or any other UKG products or solutions, which are housed in separate environments and not in the Kronos Private Cloud.

This incident has already impacted time management, payroll processing, and other HR-related activities of organizations using the affected services. Ransomware and similar attacks also could compromise confidential and personal information maintained on affected systems, although there is no indication of that at this point. Clearly, organizations that use these services can be affected in several ways. The FAQs below provide information on some of the key issues these organizations should be thinking about.

Isn’t this really Kronos’ problem?

This certainly is a significant issue for Kronos and, based on communications from Kronos, the company is in the process of remediating the incident and alerting its impacted customers. However, because of the nature and extent of the services Kronos provides to its customers (i.e., employers), there are several things that HR, IT, and other groups inside organizations that are customers of the affected services need to be doing. We address some of those items below.

From a communications perspective, this incident likely will receive significant news coverage, prompting questions from employees about the impact of the incident on their personal information, their schedules, their pay, etc. Employers will need to think carefully about how to respond to these inquiries, especially when there is little known at this point about the incident.

From a compliance perspective, employers should be reviewing and implementing their contingency plans depending on the scope of services received from Kronos. For example, clients using Kronos time management systems should be evaluating what measures they should be implementing to ensure their employees’ time is properly captured and paid. A company has a legal obligation to accurately track hours worked, regardless of whether its third-party vendor (like Kronos) responsible for the task can do so or not. Clients might want to institute, in the short term, paper timekeeping and tracking systems to ensure that employees are taking appropriate breaks and being paid for all time worked. It would be especially helpful in this situation to have employees sign off that the amount of time they report and the breaks they took are accurate.

From a cybersecurity standpoint, the answer to the question of whether this is only Kronos’ problem likely is no. All 50 states, as well as certain cities and other jurisdictions, have breach notification laws. If there is a breach of security under those laws, there may be a notification obligation. The notification obligation to affected individuals largely rests with the owner of that information, which likely would be employers. We anticipate that if notification is required, Kronos may take the lead on that, although employers will want some assurances that notification will be provided in a time and manner consistent with applicable law.

What should we be doing?

There are several steps employers likely will need to take in response to this incident, not all of which are clear at this point because of what little is currently known. Still, there are some action items affected employers should be considering:

  • Stay informed. Closely follow the developments reported by Kronos, including coordinating with your HR and IT teams.
  • Consult with counsel. Experienced cybersecurity and employment counsel can help employers properly identify their obligations and coordinate with Kronos, as needed.
  • Communicate with employees. Maintaining accurate and consistent communications with employees is critical, especially considering a significant part of the discussions around this incident could be taking place in social media. Your employees and their representatives, where applicable, may already be aware of this incident. To be prepared to address and respond to employee concerns, organizations should consider providing an initial short summary of the incident to potentially impacted individuals as soon as possible. That communication could be expanded over time with more information as it becomes available, perhaps in the form of FAQs like these. Less is more on the initial communication, again, given what little is known. However, it is important to let employees know the organization is aware of the incident and actively taking steps to mitigate its effects on employees.
  • Review Your Kronos Services and Service Agreement. Begin evaluating the services that the organization receives from Kronos. This will help to implement contingency plans, but also to assess the nature and extent of the information that Kronos maintains on the organization’s behalf. The organization might be able to conclude early on that, while there may be impacted systems and operations, Kronos was not in possession of the kind of personal information pertaining to employees of the organization that could lead to a breach notification obligation. This information could be reassuring for employees. Also, review the services agreement between the organization and Kronos as it may include provisions that have particular relevance here. For example, the agreement may outline a process agreed to between the parties for handling data incidents like this.
  • Review your cyber insurance policy. It might be premature to make a claim against the organization’s cyber policy, assuming the organization has a cyber policy – an important consideration nowadays. But, key stakeholders should review the situation and discuss potential coverage options with the organization’s insurance broker and/or legal counsel. Becoming more familiar with existing cyber insurance policies and coverage is prudent as it might cover some of the costs an organization incurs in connection with incidents like this.
  • Evaluate vendors. Some are asking whether the “Log4j” vulnerability led to the Kronos incident; however, that has not been confirmed at this time. Log4j is a Java library for logging error messages in applications. Because other vendors also may have Log4j exposure, organizations may want to use this incident as a reason to examine more closely the data privacy and security practices of their third-party vendors, regardless of whether the Log4j vulnerability was exploited here. This is particularly the case for vendors that handle the personal information of employees and customers.
  • Revisit your own data security compliance measures. Organizations also should check their own systems for Log4j and other vulnerabilities and fix them as quickly as possible.

Will the state breach notification laws apply?

We do not know if there has been a “breach” at this point. This will require investigation and analysis of the incident, which we understand is underway at Kronos at this time. However, if the incident affects certain unencrypted personal information of individuals, such as names coupled with Social Security numbers, driver’s license numbers, financial account numbers, medical information, biometric information, or certain other data elements, state breach notification laws may apply. Organizations that utilize Kronos’ services globally must consider a broader definition of personal data, such as under the General Data Protection Regulation (GDPR).

Thousands of organizations have suffered similar attacks, all of which illustrate the importance of planning for a response, not only trying to prevent one. Third party service providers play important roles for most organizations, particularly with regard to their HR systems and corresponding operations. It will take some time to work through this incident, but it should be a reminder for all affected organizations to continue to develop, refine, and practice their contingency plans.

Earlier this month, New York Governor Kathy Hochul signed into law a bill that will require New York private sector employers to provide written notice to employees before engaging in electronic monitoring of their activities in the workplace.  Civil Rights (CVR) Chapter 6, Article 5, Section 52-C*2 will take effect six months after enactment, i.e., May 7, 2022.

Pursuant to the new law, electronic monitoring in the workplace includes monitoring of an employee’s telephone conversations or transmissions, electronic mail or transmissions, or internet access or usage, by any electronic device or system, including but not limited to the use of a computer, telephone, wire, radio, or electromagnetic, photoelectronic, or photo-optical systems. Prior written notice of the electronic monitoring must be given at the time of hiring and must be acknowledged by the employee in writing or electronically.  In addition, the notice must be posted in a conspicuous place readily available for viewing by employees.

It is important to note that the new law does not provide a private right of action for employees impacted by a violation; the New York Attorney General has exclusive enforcement authority. Failure to comply with the law’s notice requirements may subject the employer to a civil penalty of $500 for the first offense, $1,000 for the second offense, and $3,000 for the third and each subsequent offense.

Employer monitoring requirements of this kind are not exclusive to New York. In Connecticut, for example, both private and public sector employers are required to notify employees prior to electronic monitoring, with similar penalties for failure to comply.  Likewise, in Delaware, an employer is not permitted to monitor or intercept an employee’s telephone conversations, email, or internet usage without either prior written notice or, alternatively, a notice provided each day the employee accesses the employer-provided email or Internet access services.

Excessive, clumsy, or improper employee monitoring can cause significant morale problems and, worse, create potential legal liability for privacy-related violations of statutory and common law protections, as evidenced by the New York law and others of its kind. Advancements in technology have made it easier to monitor remote employees, and by extension easier for employers that are not careful to violate the law.

When organizations decide to engage in any level of surveillance or search of employees, they should consider what their employees’ expectations are concerning privacy. Whether in a jurisdiction that requires prior notice of employee monitoring or not, in general, it is best practice to communicate to employees a well-drafted acceptable use and electronic communication policy that informs them what to expect when using the organization’s systems, whether in the workplace or when working remotely. This includes addressing employees’ expectations of privacy, as well as making clear the information systems and activities that are subject to the policy.

COVID-19 changed the way many organizations operate, and monitoring and surveillance have become increasingly important, particularly for employers that do not share the same physical workspace with their employees.  When employers implement new monitoring and surveillance tools, they need to plan carefully, have the right team in place, review policies and applicable state and federal law, and be prepared to address problems when they arise.

On October 27, 2021, the FTC issued a final rule (the “Final Rule”) amending 16 CFR Part 314, Standards for Safeguarding Customer Information (“Safeguards Rule”), after a period of notice and comment. While the existing Safeguards Rule imposes a general obligation on financial institutions to maintain an information security program, the Final Rule outlines these requirements in more granular detail. Importantly for smaller financial institutions, the Final Rule exempts businesses that maintain customer information concerning fewer than 5,000 consumers from several of its new requirements.

The Final Rule now defines key terms rather than incorporating them by reference. Other changes include requiring greater oversight and responsibility of a company’s information security program by designating a qualified individual to maintain the program, requiring annual reports to a company’s board of directors or governing body, and requiring vulnerability assessments and penetration testing. While there will likely be some cost to comply with the new requirements of the Final Rule, the FTC indicated the importance of these requirements justifies any associated costs.

What Businesses are Subject to the New Final Rule

The Final Rule applies to financial institutions, but institutions that maintain customer information concerning fewer than 5,000 consumers are exempt from several of its more burdensome requirements, such as the written risk assessment, annual penetration testing, and the written incident response plan.

Data Breach Reporting Obligations

The FTC indicated in its discussion of the Final Rule that there may be future obligations to report data breaches to the FTC. The FTC requested comments on whether it should require such reporting. While reporting obligations were not added to the Final Rule, the FTC is issuing a Notice of Supplemental Rulemaking to consider imposing data breach reporting obligations.

While the Final Rule does not yet impose data breach notification obligations, it does require that covered businesses implement a written incident response plan.

Designation of a Qualified Individual and Internal Reporting

The Final Rule requires covered institutions to designate a qualified individual to oversee the organization’s information security program. This person need only be qualified and does not need to be an executive or CISO. In fact, this individual need not even be an employee, which allows smaller enterprises to use a third party, such as a virtual CISO. Previously, covered institutions were only required to designate an employee to coordinate the company’s information security program.

The qualified individual must now submit written reports to the company’s board of directors or senior officers no less than once a year. These reports must provide status updates regarding the company’s information security program, compliance with the Safeguards Rule, and other material issues such as risk assessments, security events or violations, and recommended changes to the information security program.

Overall, this change appears to be geared toward encouraging the participation of company leadership in information security. As the number of data breaches continues to increase, this change indicates that information security should receive regular consideration from company executives. The FTC stopped short of requiring the board of directors to certify the report, however.

Risk Assessments and Vulnerability Testing

The Final Rule requires companies to conduct regular, written risk assessments that include testing for vulnerabilities and penetration testing. Previously, risk assessments could remain fairly high-level. Vulnerability assessments and penetration testing, however, are far more granular and technical in nature.

Penetration testing must be conducted at least annually. Not all IT managed service providers are equipped with the ability to conduct this testing. Companies may therefore need to employ additional vendors with increased technical capabilities.

Vulnerability assessments must be conducted every six months or whenever there is a material change in business operations or a material impact on the information security program. Vulnerability assessments are designed to identify and detect publicly known security vulnerabilities.

Increased Security Controls

The Final Rule imposes greater security controls on covered businesses. Here are some of the significant requirements imposed by the Final Rule:

  • Encryption – Customer data must now be encrypted both in transit and at rest. Data need not be encrypted while in transit throughout internal business networks, however.
  • MFA – Covered businesses are now required to implement multi-factor authentication for all remote connections. Long considered a best practice, the Final Rule now mandates MFA.
  • Audit Trails – Information systems must be continuously monitored to detect and log unauthorized access. Logging must be enabled to show when individual users access protected information.
  • Change Management – Any change within a company’s technical infrastructure has the potential to introduce new vulnerabilities. The Final Rule requires covered businesses to implement formal change management procedures. This includes identifying potential impact beforehand and thoroughly documenting all changes.
  • Secure Disposal – Financial institutions are required to dispose of customer information when it is no longer needed and the law does not require its retention. This applies to both digital and paper records. The Final Rule requires deletion of customer information not accessed for more than two years.
  • Secure Development Practices – Any applications that utilize or access customer information, whether developed in-house or by a vendor, must implement secure development practices. This includes regular testing and security evaluations during the development lifecycle.
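The two-year trigger in the Secure Disposal bullet lends itself to a simple illustration. Below is a hedged Python sketch of how a retention sweep might flag records due for disposal; the record structure and the `last_accessed` field name are hypothetical choices for illustration, not terms taken from the Final Rule.

```python
from datetime import datetime, timedelta

# Two years, per the Final Rule's disposal trigger for customer information
# that has not been accessed; 730 days is an approximation that ignores
# leap days.
RETENTION_LIMIT = timedelta(days=730)

def records_due_for_disposal(records, now=None):
    """Return records whose 'last_accessed' timestamp is two or more years old.

    'records' is a list of dicts with a 'last_accessed' datetime; these names
    are illustrative. A real implementation would also check legal holds and
    statutory retention requirements before deleting anything.
    """
    now = now or datetime.utcnow()
    return [r for r in records if now - r["last_accessed"] >= RETENTION_LIMIT]
```

A sweep like this would typically run on a schedule, with the flagged records routed through a documented disposal process rather than deleted automatically.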

Vendor Management

The Final Rule identifies the significant risk presented by outside vendors. Covered businesses will be required to take reasonable steps in selecting service providers, which includes ensuring service providers implement and maintain appropriate safeguards for customer information. This oversight requirement applies not just during the selection of vendors but includes periodic assessments. Covered businesses may no longer simply rely on a vendor’s security certifications or attestations.

Effective Date

The Final Rule will take effect 30 days after the date of its publication in the Federal Register. But certain provisions of the Final Rule will not take effect until one year after publication to give smaller organizations adequate time to comply. Provisions that take effect one year after publication include:

  • Designation of a qualified individual and annual written reporting
  • Written risk assessments
  • Continuous monitoring
  • Annual penetration testing
  • Semiannual vulnerability assessments
  • Enhanced training
  • Periodic vendor assessments
  • Written incident response plan

Conclusion

The Final Safeguards Rule imposes more detailed requirements for the information security programs of financial institutions. Covered businesses should prepare for the additional costs and administrative burden. Notification obligations to the FTC for data breaches may be soon to follow.