CCPA FAQs on Cookies

As businesses prepare for the effective date of the California Consumer Privacy Act, many are conducting data mapping to identify the personal information they collect, who it belongs to, how they use it, with whom they share it and whether they sell or disclose it. The information a business collects from this exercise will set the groundwork for understanding compliance obligations. Given the CCPA’s expansive definition of personal information, it is easy to overlook elements of personal information during this exercise, including website cookies. These FAQs provide a high-level look at how the CCPA may apply to website cookies.

Does the CCPA apply to website cookies?

A cookie is a small text file that a website places on a user’s computer (including smartphones, tablets or other connected devices) to store information about the user’s activity. Cookies have a variety of uses ranging from recognizing you when you return to the website to providing you with advertising targeted to your interests. Depending on their purpose, the website publisher or a third party may set the cookies and collect the information.
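
For illustration, the following minimal sketch uses Python's standard http.cookies module to build the kind of Set-Cookie header a server sends to place a persistent identifier on a visitor's device; the cookie name, value, and domain are hypothetical.

    from http.cookies import SimpleCookie

    # Build the Set-Cookie header a web server would include in its response.
    cookie = SimpleCookie()
    cookie["visitor_id"] = "a1b2c3d4"                     # hypothetical persistent identifier
    cookie["visitor_id"]["path"] = "/"
    cookie["visitor_id"]["domain"] = ".example.com"       # hypothetical site domain
    cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # persists for roughly one year

    # The browser stores this value and returns it with every later request,
    # which is what makes the identifier "persistent" across visits.
    print(cookie.output())
    # Set-Cookie: visitor_id=a1b2c3d4; Path=/; Domain=.example.com; Max-Age=31536000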

The CCPA defines personal information to include a “unique identifier.” This means “a persistent identifier that can be used to recognize a consumer, a family, or a device that is linked to a consumer or family, over time and across different services, including, but not limited to, a device identifier; an Internet Protocol address; cookies, beacons, pixel tags, mobile ad identifiers, or similar technology… or other forms of persistent or probabilistic identifiers that can be used to identify a particular consumer or device.” As a result, personal information collected by website cookies that identifies or could reasonably be linked to a particular consumer, family or device may be subject to the same disclosure notices and consumer rights, including the right to delete or opt out of the sale of information to a third party, as other personal information collected through the website.

Does the CCPA require that we have a cookie policy on our website?

The CCPA does not require websites of covered businesses to have a separate cookie policy to address the collection and use of personal information through cookies, or to permit consumers to exercise their rights. This information can be included in the website’s privacy policy.

Does our website need a cookie banner?

A separate cookie banner is not required if the website discloses the collection and use of personal information through cookies and permits consumers to exercise their rights, provided this information is included in the website privacy policy and is given at or before the point of collection.

Do cookies create special challenges to CCPA compliance?

Covered businesses may not have a full understanding of which cookies are present on their websites or how those cookies function. These businesses should inventory and audit their cookies (a first-pass approach is sketched after the list below) to identify, at a minimum:

  • the types of cookies set on their sites
  • their purpose and functionality
  • the personal information they collect and how it is used
  • whether the personal information is shared and, if so, to whom
  • if applicable, the purpose(s) for selling the personal information and to whom it is sold, and
  • whether the cookies are first-party or third-party cookies. This may require consulting with your IT provider, website designer, marketing department, and, in particular, your advertising partners.
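
As a rough illustration of where such an inventory can begin, here is a minimal sketch, assuming the third-party requests library and hypothetical URLs, that records the cookies set in plain HTTP responses. It will not capture cookies set by JavaScript or by embedded third-party content, which usually requires a browser-based scan or a commercial scanning tool.

    import requests  # third-party HTTP library; assumed available

    # Hypothetical pages to check as a first pass of the cookie inventory.
    pages = ["https://www.example.com/", "https://www.example.com/products"]

    inventory = {}
    for url in pages:
        response = requests.get(url, timeout=10)
        for cookie in response.cookies:
            inventory[cookie.name] = {
                "domain": cookie.domain,    # a domain other than your own suggests a third-party cookie
                "expires": cookie.expires,  # None for session cookies; a timestamp for persistent cookies
                "seen_on": url,
            }

    for name, details in sorted(inventory.items()):
        print(name, details)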

In certain cases, third parties may place cookies on the website that collect personal information as part of services necessary for the site’s business purpose. The services agreement with this third party should contain specific provisions identifying it as a service provider, stating the business purpose for collecting the personal data, and prohibiting the further use or sale of any personal information collected by the cookies. These provisions are necessary to demonstrate that any disclosure of personal information to a third party, or collection by a third party, is in the context of providing services and not a sale or disclosure to which the consumer’s right to opt out applies.

In other cases, it may be unclear whether a third party cookie’s collection of personal information is strictly for the website’s business purpose or is a sale subject to the right to opt out. This may apply where cookies are placed by embedded content (e.g., video), a social media widget, or a vendor that provides targeted or behavioral advertising. While the website publisher should disclose all collection and use activity, it will need to review these activities to determine how to provide meaningful notice and an effective right to opt out.

It is not yet clear how the CCPA will apply to third party cookies used specifically for targeted and behavioral advertising. This creates significant uncertainty for website publishers who engage vendors to assist with advertising. The adtech industry, legislators, and other stakeholders are currently reviewing how the CCPA may apply to third party cookies that track site users for targeted and behavioral advertising, and clarification may be forthcoming.

Cookies and other website tracking technologies pose a unique challenge to businesses as they work to identify the personal information they collect and process. Identifying the presence of these technologies, their function, and the relationship with any third party that places them on the website is an essential part of data mapping. This process will require a greater understanding of the website’s functionality as well as a deeper dive into the business’ analytics, marketing, and advertising practices.

Georgia Supreme Court May Weigh in on Standing in Data Breach Litigation

The Georgia Supreme Court may weigh in on the hot issue plaguing data breach class action litigation across the nation: must a data breach victim suffer actual financial loss to recover damages, or is the threat of future harm enough? On August 20, the Georgia Supreme Court heard arguments in a class action suit stemming from a September 2017 data breach at Athens Orthopedic that exposed the personal information of 200,000 current and former patients, including names, addresses, Social Security numbers, dates of birth and telephone numbers. Upon discovery of the breach, Athens Orthopedic advised patients to place fraud alerts on their credit accounts and seek other advice.

In 2018, the Georgia Court of Appeals, in a 2-1 decision, ruled that because the plaintiffs did not suffer any actual financial loss or harm, they were not entitled to recover damages for potential or future harm. The class action suit alleged that some of the hacked information was offered for sale on the dark web and that some information was temporarily made available on a data storage site. Plaintiffs argued that the costs they incurred, such as identity theft protection, credit monitoring, and credit freezes, are “classic measures of consequential damages” because they were incurred to mitigate “foreseeable” damages. The Court of Appeals rejected this argument, highlighting that “mitigation damages lessen the severity of an injury that has already taken place; if no injury occurred, there is no legally cognizable harm to mitigate.”

The Georgia Supreme Court is certainly not the first court in the nation to address this issue. Federal circuit courts have struggled with it over the past few years, in large part due to the lack of clarity following the U.S. Supreme Court’s decision in Spokeo, Inc. v. Robins, which held that even if a statute has been violated, plaintiffs must demonstrate an “injury-in-fact” that is both concrete and particularized, but which failed to clarify whether a “risk of future harm” qualifies as such an injury. For example, the 3rd, 6th, 7th, 9th, and D.C. Circuits have generally found standing, while the 1st, 2nd, 4th, and 8th Circuits have generally found no standing where a plaintiff only alleges a heightened “risk of future harm.”

Most recently, the U.S. Supreme Court rejected a petition for a writ of certiorari by Zappos asking the Court to review a Ninth Circuit decision that allowed customers affected by a data breach to proceed with a lawsuit on grounds of vulnerability to fraud and identity theft. The Supreme Court did not provide a reason for its denial of the Zappos petition.

The Georgia Supreme Court is expected to issue its ruling in Athens Orthopedic in the coming months. The lack of clarity on this issue has made it difficult for businesses to assess the likelihood of litigation and its associated costs in the wake of a data breach.  It is crucial for businesses to assess their breach readiness and develop an incident or breach response plan that takes into consideration the possibility of litigation.

Expansion of Technology at K-12 Schools Comes with Data Security Risks for Students and Parents

A new school year is upon us and some students are already back at school. Upon their return, many students may encounter new technologies and equipment rolled out by their school districts, such as online education resources and district-provided equipment, intended to enhance the education districts provide and improve district administration. However, a recent report, “The State of K-12 Cybersecurity: 2018 Year in Review,” compiles sobering information about cybersecurity at K-12 schools. The report discusses 122 publicly disclosed cybersecurity incidents affecting 119 public K-12 education agencies across 38 states in 2018. The trend seems to be continuing in 2019. Like other organizations, school districts should be allocating appropriate resources to ensure that the technologies and equipment they are leveraging and the third party vendors they are engaging to help students learn do not leave those same students (or their parents) vulnerable to a data breach.

Implementing technologies and other products and services for students in the course of their K-12 education often requires the collection of massive amounts of personal information about students and their parents. Children enroll in classes, sports activities, and clubs; provide immunization and health records; use school district equipment that may track location and other metrics; and pay for lunch and other goods and services with debit and credit cards. It should be no surprise that K-12 school districts are targets and that security incidents are on the rise.

Why student data? In recent years, the marketing and sale of children’s personal information has grown. Criminals realize that students are not focused on their credit reports, and neither are their parents. Left unchecked, the personal data of children can be used to build new identities and engage in widespread fraud that could later come back to hurt unsuspecting students, and potentially their parents.

Reports of K-12 security incidents in 2019 suggest a continuing trend. Here are some examples:

  • The School of the Osage School District in Missouri reported a data security incident by an outside vendor used to provide support and educational services to individual students. The same incident is believed to have affected other school districts including the Rome City School District, the Carmel Clay Schools, and others.
  • San Dieguito Union High School District experienced a malware attack.
  • Student busing information concerning Cincinnati Public School children, including student names and pickup and drop-off locations, was inadvertently disclosed to unauthorized recipients, according to reporting by DataBreaches.net.
  • Camp Verde Unified School District in Arizona experienced a ransomware attack.
  • While reporting on a malware attack at the Watertown City School District in New York, Spectrum News also noted attacks at the Syracuse City School District and the Onondaga County Public Library.
  • For more information about other reported breaches in the education sector, Databreaches.net provides an informative resource here.

These risks are not new. Fortunately, there are steps that school districts can take to address them. Here are some examples:

  • Educate their district community. Districts can develop materials to help inform parents and students about the importance of safeguarding personal information and best practices for doing so. This should include informing students and parents how to quickly alert the district about potential incidents.
  • Appoint a data protection officer. Districts can appoint a data protection officer to be responsible for implementing all required security and privacy policies and procedures.
  • Develop data security and privacy policies. Districts can establish written policies and procedures for protecting personal information. These policies and procedures should be informed by a thorough risk assessment. A recognized framework for school security policies is the National Institute of Standards and Technology Cybersecurity Framework (“NIST CSF”).
  • Consider privacy and security at the outset of any new technology initiative. Protecting student data should not be an afterthought. At the start of a new initiative, districts can evaluate what information is necessary for the initiative to be successful and design the initiative to include only that information, and to maintain it only for as long as it is needed.
  • Establish a vendor management program. Districts work with many third parties to support and extend technology-based services to students. During the procurement process, districts can take steps to ensure the service provider will apply appropriate measures to safeguard personal information: asking questions, reviewing policies, examining the provider’s systems, and so on. Districts can also obligate providers by contract to secure information and to make sure the information is destroyed or deleted at the conclusion of the services.
  • Provide training for administrators, teachers, staff, and others. Information privacy and security awareness training, online or in person, is critical to creating awareness about security threats and following best practices.
  • Develop an incident response plan. Districts can make sure they have a response plan and are prepared to quickly respond to an actual or suspected security incident. This includes practicing that plan so the response team is ready.

Like many organizations, K-12 school districts have quite a challenge – they need to increasingly leverage technology to deliver their services, which requires access to and processing of personal information, but may not have sufficient resources to address all of the risks. Getting started is half the battle and there often is “low-hanging fruit” that districts can adopt with relatively little cost.

New Notification Requirements in New York for Healthcare Providers Facing a Cybersecurity Incident

On August 12, Mahesh Nattanmai, New York’s Chief Health Information Officer, issued a notice letter (“the notice”) on behalf of the New York State Department of Health (“Department”) requiring healthcare providers to use a new notification protocol for informing the Department of a potential cybersecurity incident. The updated protocol is effective immediately upon a healthcare provider’s receipt of the notice letter.

“We recognize that providers must contact various other agencies in this type of event, such as local law enforcement. The Department, in collaboration with partner agencies, has been able to provide significant assistance to providers in recent cyber security events. Our timely awareness of this type of event enhances our ability to help mitigate the impact of the event and protect our healthcare system and the public health. The Department has designed a more efficient process to engage assistance for providers, as needed,” the Department states in its notice letter.

The Department also lists the types of healthcare providers that should implement this updated notification protocol immediately:

  • Hospitals, nursing homes, and diagnostic and treatment centers,
  • Adult care facilities, and
  • Home health agencies, hospices, licensed home care services agencies.

A cybersecurity incident is defined by the notice as “the attempted or successful unauthorized access, use, disclosure, modification, or destruction of data or interference with an information system operation.” Therefore, even if a healthcare provider is aware only of an unsuccessful breach attempt (e.g., by a disgruntled employee), that incident should be reported to the Department.

The notice does not state the time period within which the Department must be notified after a healthcare provider discovers a cybersecurity incident. It is worth noting that the recently enacted New York SHIELD Act exempts HIPAA-compliant covered entities from notification requirements following a data breach. Under HIPAA, a covered entity is generally required to report a data breach affecting 500 or more individuals to the U.S. Department of Health and Human Services (HHS) within 60 days of discovery of the breach, so it would not be surprising if the time frame here is similar; we are currently confirming with the Department whether this is the case. Contact information for the Department will vary depending on the healthcare provider’s location in New York. The notice provides contact information for each region: Capital District, Central New York, Metropolitan Area, Central Islip, New Rochelle and Western Area.

A recent study found that 70% of healthcare providers have experienced a data breach. You can never be too prepared for a cybersecurity incident. Our blog offers helpful resources on cybersecurity incident prevention and response for healthcare providers.

Does the CCPA Apply to Your Business?

The California Consumer Privacy Act (CCPA), considered the most expansive U.S. privacy law to date, is set to take effect January 1, 2020. In short, the CCPA places limitations on the collection and sale of a consumer’s personal information and provides consumers certain rights with respect to their personal information. Many organizations are asking whether the law will apply to them, hoping that being too small, being located outside of California, or “only having employee information,” among other things, might spare them from having to gear up for the CCPA.

So, we thought we would dig a little deeper into the question of when the CCPA might apply to a business. Note, however, that the law is still developing as amendments work their way through the legislature and we await regulations from the California Attorney General intended to further clarify the statute. Organizations will need to continue to monitor these developments to determine if the CCPA will apply to them.

Basic Rule. In general, the CCPA applies to a “business” that:

A. does business in the State of California,

B. collects personal information (or on behalf of which such information is collected),

C. alone or jointly with others determines the purposes or means of processing of that data, and

D. satisfies one or more of the following thresholds (illustrated in the sketch after this list):

(i) annual gross revenue in excess of $25 million,

(ii) alone or in combination, annually buys, receives for the business’s commercial purposes, sells, or shares for commercial purposes the personal information of 50,000 or more consumers, households, or devices, or

(iii) derives 50 percent or more of its annual revenues from selling consumers’ personal information.
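
To make the prong D thresholds concrete, here is a minimal sketch of the check described above. The figures come from the statute; the example inputs are hypothetical, and a business must also satisfy prongs A through C for the CCPA to apply.

    def meets_ccpa_prong_d(annual_gross_revenue: float,
                           consumers_households_devices: int,
                           share_of_revenue_from_selling_pi: float) -> bool:
        """Return True if any one of the CCPA's prong D thresholds is met."""
        return (
            annual_gross_revenue > 25_000_000            # (i) gross revenue in excess of $25 million
            or consumers_households_devices >= 50_000    # (ii) PI of 50,000+ consumers, households, or devices
            or share_of_revenue_from_selling_pi >= 0.50  # (iii) 50% or more of revenue from selling PI
        )

    # Hypothetical business: $10M in revenue, PI on 60,000 devices, no sale of PI.
    print(meets_ccpa_prong_d(10_000_000, 60_000, 0.0))   # True, because of threshold (ii)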

Related entities and not-for-profits. Under the CCPA, a “business” can be a “sole proprietorship, partnership, limited liability company, corporation, association, or other legal entity that is organized or operated for the profit or financial benefit of its shareholders or other owners.” Thus, for example, a business under this definition generally would not include a not-for-profit or governmental entity. It also would not include a corporation that meets prongs A through C but none of the thresholds listed under D.

However, a “business” under CCPA also includes any entity that controls or is controlled by a business that meets the requirements above and that shares common branding with such a business. “Control,” for this purpose, means either (i) ownership of, or the power to vote, more than 50 percent of the outstanding shares of any class of voting security of a business; (ii) control in any manner over the election of a majority of the directors, or of individuals exercising similar functions; or (iii) the power to exercise a controlling influence over the management of a company. “Common branding” means a shared name, servicemark, or trademark. Accordingly, organizations that would not themselves be a “business” under the CCPA could become subject to the law because of the entities that control them or that they control, and with which they share common branding.

Businesses that do not collect “consumer” personal information. It does not appear to be necessary under the CCPA for a business to actually be the one to collect personal information from consumers in order for the law to apply. So long as personal information is collected on behalf of a business (such as through a third party), the business could be covered by the CCPA, assuming the other requirements are satisfied.

Some businesses also may believe that because they do not engage in transactions directly with individual consumers and collect their personal information, they are not subject to the law. These businesses might reason that their “consumers” are other businesses, not individuals. However, a consumer under the CCPA generally means a natural person who is a California resident. Accordingly, when conducting business with other businesses, a business likely collects personal information from contacts at those other businesses. Similarly, virtually all businesses collect information about their employees. Recent legislative activity indicates that obligations under the CCPA may continue to extend to employee personal information.

Businesses located outside of California. It also does not appear that a business will need to be located in California in order to be subject to the CCPA. While the CCPA is not clear on this point, a business may be considered to be “doing business” in California if it conducts online transactions with persons who reside in California, has employees working in California, or has certain other connections to the state, and is without a physical location in the state. As noted, regulations may help to clarify what “doing business in California” means for purposes of the CCPA.

Businesses that process information on behalf of other businesses. The definition of a business under the CCPA requires that the business, alone or jointly with others, “determine the purposes or means of processing” of that data. The CCPA does not expand on this language. However, since nearly identical language in the General Data Protection Regulation (GDPR) is used to define a controller, guidance from the UK’s Information Commissioner may provide some insight. The following indicators suggest that an organization is acting as a controller:

  • The business decides to collect or process the personal data.
  • The business decides what the purpose or outcome of the processing is to be.
  • The business decides what personal data should be collected.
  • The business decides which individuals to collect personal data about.
  • The business obtains a commercial gain or other benefit from the processing, except for any payment for services from another controller.
  • The business processes the personal data as a result of a contract between the business and the data subject.
  • The business exercises professional judgement in the processing of the personal data.
  • The business has a direct relationship with the data subjects.

An organization that merely processes personal information for businesses covered by the CCPA might take the position that it is not subject to the CCPA. That organization may be correct; however, its business partners that are subject to the CCPA may be required to push certain CCPA obligations down to the organization by contract.

Consequences of Non-compliance. Organizations on the fence about the application of the CCPA should consider what happens if they fail to comply but are later determined to be subject to the law. A business that violates the CCPA can face injunctions and penalties of not more than $2,500 for each violation, and not more than $7,500 for each intentional violation, in an action brought by the California Attorney General. That said, a business is provided 30 days after receiving written notice of noncompliance to cure the violation before facing liability. In addition, the CCPA provides consumers a private right of action if their nonencrypted or nonredacted personal information is subject to an unauthorized access, exfiltration, theft, or disclosure because the covered business did not meet its duty to implement and maintain reasonable safeguards to protect that information. That private action includes statutory damages in an amount not less than one hundred dollars ($100) and not greater than seven hundred fifty dollars ($750) per consumer per incident, or actual damages, whichever is greater.
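
As a back-of-the-envelope illustration of how those statutory damages scale, the sketch below applies the $100 to $750 per-consumer, per-incident range to a hypothetical breach; it ignores actual damages and any other relief a court might award.

    def ccpa_statutory_damage_range(consumers_affected: int) -> tuple:
        """Statutory damages for a single incident: $100 to $750 per consumer."""
        return consumers_affected * 100, consumers_affected * 750

    # Hypothetical incident affecting 10,000 California consumers.
    low, high = ccpa_statutory_damage_range(10_000)
    print(f"${low:,} to ${high:,}")   # $1,000,000 to $7,500,000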

A recent survey by ESET found that over 44% of the 625 business owners and company executives polled had never heard of the CCPA, and only 11.8% knew whether the law applied to their business. Organizations should be doing their best to determine if they have CCPA obligations, either directly as a business, because they control or are controlled by a business, or because they have contractual obligations flowing from a business. Efforts toward compliance need to begin now, as the CCPA becomes effective January 1, 2020.

Licensed by Your State’s Insurance Commissioner? Comprehensive Data Security Requirements Are Headed Your Way

Most businesses in the insurance industry have one thing in common – they collect and maintain significant amounts of sensitive, nonpublic information including personal information. Not surprisingly, insurance-related businesses are a target of cyberattacks and a few have faced some of the largest data breaches reported to date. Beyond the headlines, however, small and mid-sized insurance companies face similar risks, and governments have stepped up their scrutiny of cybersecurity. Hearing the calls for legislation and regulation, the National Association of Insurance Commissioners (NAIC) adopted a Data Security Model Law with the goal of having it adopted in all states within a few years. So far, eight states (see below) have adopted a version of the Model Law and it looks like more are on the way.

What is the NAIC’s Data Security Model Law?

In an effort that largely began with establishing a task force in 2014, the NAIC adopted a Data Security Model Law in November 2017. The Model Law is intended to provide a benchmark for any cybersecurity program. The requirements in the Model Law track some familiar data security frameworks, such as the HIPAA Security Rule. It also has many similarities to the New York State Department of Financial Services (NYDFS) cybersecurity regulations (specifically, 23 NYCRR Part 500). Note that a licensee is not subject to the Model Law unless the state in which it is licensed adopts a version of the Model Law; at that point, the licensee must comply with that state’s law.

Who is Subject to the Model Law?

The Model Law generally applies to “Licensees,” defined as:

any person licensed, authorized to operate, or registered, or required to be licensed, authorized, or registered pursuant to the insurance laws of this State but shall not include a purchasing group or a risk retention group chartered and licensed in a state other than this State or a Licensee that is acting as an assuming insurer that is domiciled in another state or jurisdiction.

Licensees range from large insurance carriers to small independent adjusters. These include individuals providing insurance-related services, firms such as agency and brokerage businesses, and insurance companies. Additionally, there may be businesses that require a license but are not traditionally considered to be in the insurance business. Examples include car rental companies and travel agencies that offer insurance packages in connection with their primary business.

The Model Law provides exceptions for certain licensees. For example, licensees with fewer than ten employees (including independent contractors) are exempt from the requirement to maintain an information security program. However, they remain subject to the other provisions of the Model Law, such as the requirement to provide notification in the case of certain cybersecurity events.

What are some of the requirements of the Model Law?

Is Your Small Business Prioritizing Cybersecurity?

A recent study surveying small and mid-sized businesses (SMBs) found that 67% had experienced a cyberattack in 2018, yet that same study found that cybersecurity is still “not on the to do list” for SMBs: 60% of the SMBs surveyed responded that they did not have a cybersecurity plan in place, and only 9% ranked cybersecurity as a top business priority. The federal government has taken notice of these concerning statistics.

Early this month, the U.S. House of Representatives passed five bipartisan bills to help small businesses. Among the bills passed, two specifically aim to enhance a small business’s ability to prevent and respond to a cybersecurity incident. First, the SBA Cyber Awareness Act, H.R. 2331, aims to strengthen the Small Business Administration’s handling and reporting of the cyber threats that affect small businesses. The bill requires the SBA to provide an annual report on the status of SBA cybersecurity, and notify Congress of any incident of cyber risk and how the SBA is addressing it. Second, the Small Business Development Center Cyber Training Act of 2019, H.R. 1649, requires the Small Business Administrator to establish or certify an existing cyber counseling certification program to certify employees at small business development centers. It also requires the SBA to reimburse lead small business development centers (SBDCs) for any costs relating to such certifications up to $350,000 in a fiscal year.

The Senate has also introduced legislation to help SMBs better address cyber threats. In late June, Senator Marco Rubio (R-FL), joined by Senator Gary Peters (D-MI), introduced the Small Business Cybersecurity Assistance Act of 2019, S. 2034, which aims to better educate small businesses on cybersecurity through counselors and resources offered at SBDCs. The bill incorporates recommendations from DHS and SBA’s Small Business Development Center Cyber Strategy, a report from March 2019 that described challenges small businesses face in implementing cybersecurity, including the confusing nature of government cyber resources and a lack of training.

The cyber threats plaguing SMBs are real, and SMBs need to address the significant risk to their businesses. The cyber insurance industry is increasingly targeting SMBs with robust insurance policies, comparable to offerings for larger companies. While insurance is a helpful component of an overall risk management strategy, it should not be the only component.

In the event of a data breach, the policy might cover costs related to responding to that breach (sending notices, offering credit monitoring, etc.) and business interruption costs, but it might not cover the costs of a federal or state agency inquiry following the reported breach. For example, a small health care practice reporting a breach might trigger a compliance review by the federal Office for Civil Rights (OCR). In that case, OCR investigators would be looking for information about the breach, but also evidence that a risk assessment was conducted, copies of written policies and procedures covering administrative, physical, and technical safeguards to protect health information, acknowledgments that employees completed HIPAA training, and other information to support compliance. Having these compliance measures in place can substantially limit an SMB’s exposure in these kinds of federal or state agency inquiries, as well as strengthen the SMB’s defensible position should the SMB be sued as a result of a breach.

Healthcare Organizations, Is Your Patient Portal Secure?

Co-author: Valerie Jackson

While healthcare organizations are embracing new technologies such as patient portals, a recent report shows that organizations’ cybersecurity measures for these technologies are behind the times. A patient portal is a secure online website that allows patients to access their Electronic Health Record from any device with an Internet connection. Many patient portals also allow patients to request prescription refills, schedule appointments, and securely message providers. With this increased access for patients comes the risk that someone other than the patient will gain unauthorized access to the portal, and to the patient’s electronic protected health information (ePHI).

2019 has seen record numbers of patient records being breached. Halfway through 2019, around 25 million patient records have been breached, eclipsing the number of patient records breached in all of 2018 by over 66%. In this environment, where hackers find patient records a valuable commodity on the black market, healthcare organizations must balance patients’ desire for ease of use with the duty to prevent unauthorized access to patient records. To learn more about how healthcare organizations are meeting this challenge, LexisNexis® Risk Solutions, in collaboration with the Information Security Media Group, conducted a survey in spring 2019 asking healthcare organizations about their cybersecurity strategies and patient identity management practices. The results of the survey, which included responses from more than 100 healthcare organizations, including hospitals and physician group practices, were recently published in a report, “The State of Patient Identity Management” (the “report”).

The report concluded that healthcare organizations had a high level of confidence in the security of their patient portals, but this confidence may be misplaced based upon the security measures respondents reported they had in place. The vast majority of healthcare organizations reported that they continued to use traditional authentication methods such as username and password (93%), knowledge-based authentication questions and answers (39%), and email verification (38%). Notably, less than two-thirds reported using multifactor authentication. Multifactor authentication verifies a user’s identity in two or more ways, using: something the user knows (passwords, security questions); something the user has (mobile phone, hardware that generates authentication code); and/or something the user does or is (fingerprint, face ID, retina pattern).
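
For readers unfamiliar with how the “something the user has” factor typically works in practice, here is a minimal sketch of time-based one-time password (TOTP) verification using the third-party pyotp library; the flow is illustrative only and is not drawn from the report.

    import pyotp  # third-party TOTP library; assumed available

    # At enrollment, the portal generates a secret and shares it with the
    # patient's authenticator app (usually via a QR code).
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # At login, after the password check (something the user knows), the portal
    # asks for the current six-digit code (something the user has) and verifies it.
    code_entered_by_user = totp.now()          # stand-in for the code the patient types in
    print(totp.verify(code_entered_by_user))   # True only if the code matches the current time window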

While the HIPAA Security Rule does not require multifactor authentication, it does require covered entities and business associates to use security measures that reasonably and appropriately implement the HIPAA Security Rule standards and implementation specifications. Generally, the HIPAA Security Rule requires covered entities and business associates to (1) ensure the confidentiality, integrity, and availability of all ePHI the covered entity or business associate creates, receives, maintains, or transmits, (2) protect against any reasonably anticipated threats or hazards to the security or integrity of such information, and (3) protect against any reasonably anticipated uses or disclosures of such information that are not permitted or required. The Person or Entity Authentication standard of the HIPAA Security Rule requires that covered entities and business associates implement procedures to verify that a person or entity seeking access to ePHI is the one claimed. However, this standard has no implementation specifications. It is also worth mentioning that, under the HIPAA Privacy Rule, prior to a permissible disclosure a covered entity must verify the identity of the person requesting ePHI and that person’s authority to have access to it, if either the identity or the authority is not known to the covered entity. In addition, the covered entity must obtain “documentation, statements, or representations” from the person requesting the ePHI when such documentation is a condition of the disclosure.

Healthcare organizations are not required to adopt any one cybersecurity framework or authentication method under HIPAA; however, strengthening cybersecurity and implementing multifactor authentication for access to patient portals certainly helps with compliance under the HIPAA Security Rule. Failure to implement reasonable and appropriate cybersecurity measures could not only lead to a healthcare data breach, but could also result in a covered entity or business associate being fined by the HHS Office for Civil Rights. To learn more about how the firm can assist healthcare organizations with HIPAA compliance and data security, please contact your Jackson Lewis attorney.

New York Enacts the SHIELD Act

On Thursday, New York Governor Andrew Cuomo signed into law the Stop Hacks and Improve Electronic Data Security Act (SHIELD Act), sponsored by Senator Kevin Thomas and Assemblymember Michael DenDekker. The SHIELD Act, which amends the State’s current data breach notification law, imposes more expansive data security and data breach notification requirements on companies, in the hope of ensuring better protection for New York residents from data breaches of their private information. The SHIELD Act takes effect on March 21, 2020. Governor Cuomo also signed into law the Identity Theft Prevention and Mitigating Services Act, which requires credit reporting agencies that face a breach involving Social Security numbers to provide five years of identity theft prevention and mitigation services to affected consumers. It also gives consumers the right to freeze their credit at no cost. This law becomes effective in 60 days.

Below are several FAQs highlighting key features of the SHIELD Act:

What is Private Information under the SHIELD Act?

Unlike other state data breach notification laws, New York’s original data breach notification law included definitions for both “personal information” and “private information.” The current definition of “personal information” remains: “any information concerning a natural person which, because of name, number, personal mark, or other identifier, can be used to identify such natural person.” However, the SHIELD Act expands the definition of “private information,” which sets forth the data elements that, if breached, could trigger a notification requirement. Under the amended law, “private information” means either:

  • personal information consisting of any information in combination with any one or more of the following data elements, when either the data element or the combination of personal information plus the data element is not encrypted, or is encrypted with an encryption key that has also been accessed or acquired:
    • social security number;
    • driver’s license number or non-driver identification card number;
    • account number, credit or debit card number, in combination with any required security code, access code, password or other information that would permit access to an individual’s financial account; account number, credit or debit card number, if circumstances exist wherein such number could be used to access an individual’s financial account without additional identifying information, security code, access code, or password; or
    • biometric information, meaning data generated by electronic measurements of an individual’s unique physical characteristics, such as a fingerprint, voice print, retina or iris image, or other unique physical representation or digital representation of biometric data which are used to authenticate or ascertain the individual’s identity; OR
  • a user name or e-mail address in combination with a password or security question and answer that would permit access to an online account.

It is worth mentioning that the SHIELD Act’s expansive definition of “private information” is still not as broad as the definition of the analogous term under the laws of other states. For example, California, Illinois, Oregon, and Rhode Island have expanded the applicable definitions in their laws to include not only medical information, but also certain health insurance identifiers.

How has the term “breach of security of the system” changed?

The SHIELD Act alters the definition of “breach of the security of the system” in two significant ways. First, it broadens the circumstances that qualify as a “breach” by including within the definition of that term incidents that involve “access” to private information, regardless of whether they resulted in “acquisition” of that information. Under the old law, access absent acquisition did not qualify as a breach. In connection with this change, the amendments also add several factors for determining whether there has been unauthorized access to private information, including “indications that the information was viewed, communicated with, used, or altered by a person without valid authorization or by an unauthorized person.”

Second, as discussed above, the expansion of the definition of private information effectively expands the situations which could result in a breach of the security of the system.  Notably, the SHIELD Act retains the “good faith employee” exception to the definition of “breach.”

Are there any substantial changes to data breach notification requirements? And who must comply?

Any person or business that owns or licenses computerized data which includes private information of New York residents must comply with breach notification requirements, regardless of whether the person or business conducts business in New York.

That said, there are several circumstances which would exempt a business from the breach notification requirements. For example, notice is not required if “exposure of private information” was an “inadvertent disclosure and the individual or business reasonably determines such exposure will not likely result in misuse of such information, or financial harm to the affected persons or emotional harm in the case of unknown disclosure of online credentials”. Further, businesses that are already regulated by and comply with data breach notice requirements under certain applicable state or federal cybersecurity laws (e.g., HIPAA, NY DFS Reg. 500, Gramm-Leach-Bliley Act) are not required to further notify affected New York residents, however, they are still required to notify the New York State Attorney General, the New York State Department of State Division of Consumer Protection, and the New York State Division of the State Police.

What are the “reasonable” data security requirements? And who must comply with them?

As with the notification requirements, the SHIELD Act requires that any person or business that owns or licenses computerized data which includes private information of a resident of New York must develop, implement and maintain reasonable safeguards to protect the security, confidentiality and integrity of the private information. Again, businesses in compliance with laws like HIPAA and the GLBA are considered in compliance with this section of the law. Small businesses are subject to the reasonable safeguards requirement, however safeguards may be “appropriate for the size and complexity of the small business, the nature and scope of the small business’s activities, and the sensitivity of the personal information the small business collects from or about consumers.” A small business is considered any business with fewer than fifty employees, less than $3 million in gross annual revenue in each of the last 3 years, or less than $5 million in year-end total assets.

The law provides examples of practices that are considered reasonable administrative, technical and physical safeguards. For example, risk assessments, employee training, selecting vendors capable of maintaining appropriate safeguards and implementing contractual obligations for those vendors, and disposal of private information within a reasonable time period, are all practices that qualify as reasonable safeguards under the law.

Are there penalties for failing to comply with the SHIELD Act?

The SHIELD Act does not authorize a private right of action, and in turn class action litigation is not available. Instead, the Attorney General may bring an action to enjoin violations of the law and obtain civil penalties. For data breach notification violations that are not reckless or knowing, the court may award damages for actual costs or losses incurred by a person entitled to notice, including consequential financial losses. For knowing and reckless violations, the court may impose penalties of the greater of $5,000 or up to $20 per instance, with a cap of $250,000. For reasonable safeguard requirement violations, the court may impose penalties of not more than $5,000 per violation.
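
To illustrate the “greater of” formula for knowing and reckless notification violations, here is a small sketch applying the figures above to hypothetical instance counts.

    def shield_knowing_violation_penalty(instances: int) -> int:
        """Greater of $5,000 or $20 per instance, capped at $250,000."""
        return min(max(5_000, 20 * instances), 250_000)

    for instances in (100, 1_000, 20_000):   # hypothetical instance counts
        print(f"{instances:,} instances -> ${shield_knowing_violation_penalty(instances):,}")
    # 100 instances -> $5,000     (the $5,000 floor exceeds $20 x 100)
    # 1,000 instances -> $20,000
    # 20,000 instances -> $250,000 (capped; $20 x 20,000 would be $400,000)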

Conclusion

The SHIELD Act has far reaching effects, as any business that holds private information of a New York resident – regardless of whether that organization does business in New York – is required to comply. “The SHIELD Act will put strong safeguards in place to curb data breaches and identity theft,” said Justin Brookman, Director of Privacy and Technology Policy for Consumer Reports. The SHIELD Act signifies how seriously New York, like other states across the nation, is taking privacy and data security matters.  Organizations, regardless of their location, should be assessing and reviewing their data breach prevention and response activities, building robust data protection programs, and investing in written information security programs (WISPs).

Illinois’ Attorney General Wants to Know About Data Breaches

Possibly adding to the list of states that have updated their privacy and breach notification laws this year, the Illinois legislature passed Senate Bill 1624, which would update the state’s current breach notification law to require most “data collectors,” which include entities that, for any purpose, handle, collect, disseminate, or otherwise deal with nonpublic personal information, to notify the State’s Attorney General of certain data breaches. The state’s current statute already requires notification of a data breach to the Attorney General’s office, but only in the event of a data breach affecting state agencies, and only if the breach affects more than 250 Illinois residents.

Under the Senate Bill, if a data collector is required to notify more than 500 Illinois residents as a result of a single data breach, that data collector also must notify the Illinois Attorney General’s office. Similar to the requirements in other states requiring Attorney General notification, the law requires certain content be included in the notification:

  • A description of the nature of the breach of security or unauthorized acquisition or use.
  • The number of Illinois residents affected by such incident at the time of notification.
  • Any steps the data collector has taken or plans to take relating to the incident.

In addition, if the date of the breach is unknown at the time the notice is sent to the Attorney General, the data collector must inform the Attorney General of the date of the breach as soon as possible. Note that some states have more extensive content requirements, such as Massachusetts, which requires covered entities that experience a breach to inform the Attorney General (and the Commonwealth’s Office of Consumer Affairs and Business Regulation) whether the organization maintains a written information security program. The change in Illinois would exclude covered entities or business associates that are subject to the privacy and security regulations under HIPAA, provided they are compliant with those regulations. Of course, covered entities and business associates would still have to notify the federal Office for Civil Rights in the event of a data breach affecting unsecured protected health information.

The change would require the notification to be made in the most expedient time possible and without unreasonable delay, but not later than when the data collector provides notice to individuals affected by the breach. Also joining some other states, such as Massachusetts and New Hampshire, the Senate Bill provides that the Attorney General may publish the name of the data collector that suffered the breach, the types of personal information compromised in the breach, and the date range of the breach.

Should these changes become law, the patchwork of state breach notification laws will grow more complex, particularly for organizations that experience multistate data breaches. It is critical, therefore, that organizations are prepared with an incident response plan, one that not only addresses steps to drive systems-related investigations and recovery, but also provides a timely and compliant communication and notification strategy.
