Artificial Intelligence Enabled Cybersecurity Systems

The use of artificial intelligence (AI) enabled cybersecurity systems is increasing dramatically. By 2018, sixty-two percent of all companies are projected to use AI technologies.

The use of AI cybersecurity systems provides greater efficiency through automation, the ability to evaluate larger data sets and, in many cases, a faster way to identify the “cyberattack needle in the big data haystack.” For example, some credit card companies use AI systems to scan large data banks for abnormal transactions and evaluate the gravity of a potential large scale cyber threat.
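
To make the idea concrete, here is a minimal sketch of the kind of anomaly scoring such a system might perform. The transaction amounts and the simple z-score rule are hypothetical stand-ins; production fraud-detection models are far more sophisticated.

```python
import statistics

def flag_abnormal_transactions(amounts, threshold=2.5):
    """Flag transactions whose amounts deviate sharply from the mean.

    A toy stand-in for the statistical models credit card companies use
    to surface abnormal activity in large transaction data sets.
    """
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    flagged = []
    for i, amount in enumerate(amounts):
        z = (amount - mean) / stdev if stdev else 0.0
        if abs(z) > threshold:
            flagged.append((i, amount, round(z, 2)))
    return flagged

# Hypothetical daily charges for one cardholder; the $9,500 outlier is flagged.
charges = [42.15, 18.60, 55.00, 23.99, 9500.00, 31.25, 47.80, 25.10, 60.43, 38.75]
print(flag_abnormal_transactions(charges))  # -> [(4, 9500.0, 3.0)]
```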

However, companies should not believe that AI systems alone provide a cybersecurity panacea. Cybersecurity solutions always require a human touch, such as risk analysis and case-specific strategies for responding to individual cyberattacks. AI systems can identify potential risk situations, but evaluating an individual case and crafting the proper individualized response still require human analysis and participation.

AI cybersecurity systems should be used as a “safety net” in assessing potential large-scale risks. In addition, AI systems can be tuned through human intervention to distinguish malicious attacks from normal, low-risk behavior. Cybersecurity experts also widely expect that criminals will inevitably utilize AI to automate their attacks. Because of this anticipated criminal use of AI, a fully automated defense, with no human oversight, will never be sufficient. AI systems can be well equipped to increase detection rates, but they will always need to be tweaked by human testers who find holes in programs and fortify AI defenses moving forward.
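
The human feedback loop described above can be as simple as re-tuning an alert threshold against analyst-labeled data. The sketch below is a simplified, hypothetical illustration: analysts mark past alerts as malicious or benign, and the threshold is recalculated so benign behavior stays under a chosen false-positive budget.

```python
def tune_threshold(labeled_alerts, max_false_positive_rate=0.05):
    """Pick an alert threshold whose false-positive rate, measured
    against analyst-supplied labels, stays within the given budget.

    labeled_alerts: list of (score, is_malicious) pairs from human review.
    """
    benign = sorted(score for score, malicious in labeled_alerts if not malicious)
    if not benign:
        return 0.0
    # Place the cutoff so at most max_false_positive_rate of benign
    # scores fall above it.
    index = min(int(len(benign) * (1 - max_false_positive_rate)), len(benign) - 1)
    return benign[index]

# Hypothetical analyst-reviewed alerts: (model score, confirmed malicious?)
reviewed = [(0.91, True), (0.88, True), (0.35, False), (0.62, False),
            (0.95, True), (0.41, False), (0.58, False), (0.73, False)]
print(tune_threshold(reviewed))  # -> 0.73; alert only above this score
```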

Bottom line, the use of AI enabled cybersecurity systems should be explored and evaluated, but always used in conjunction with individual human training and response strategies.

Maryland Amends Personal Information Protection Act

The Maryland General Assembly recently amended the Maryland Personal Information Protection Act through House Bill 974, effective January 1, 2018. Notable amendments expand the definition of personal information, modify the definition of a breach of the security of a system, establish a 45-day timeframe for notification, allow alternative notice for breaches that enable access to an individual’s email account, and expand the class of information subject to Maryland’s destruction of records laws.

The coming years likely will bring a variety of amendments to state data breach notification laws. Review our comprehensive discussion of Maryland’s new law, and trends in other state data breach notification laws.

Washington Joins Growing List of States with Laws Protecting Biometric Information

Not to be outdone by the recent attention to biometric information in Illinois, and the Prairie State’s Biometric Information Privacy Act (BIPA), Washington enacted a biometric data protection statute of its own, HB 1493, which became effective July 23, 2017.

What is notable about Washington’s new biometric information law?

  • It prohibits “persons” from “enrolling” “biometric identifiers” in a database for a “commercial purpose” without first providing notice, obtaining consent, or providing a mechanism to prevent the subsequent use of the biometric identifiers for a commercial purpose. Lots of definitions, more on that below.
  • The exact type of notice and consent required may depend on the context, and notice must be given through a procedure reasonably designed to be readily available to affected individuals. Note that the law does not require notice and consent if the person collects, captures, or enrolls a biometric identifier and stores it in a biometric system, or otherwise, in furtherance of a security purpose.
  • In general, a person that has obtained a biometric identifier from an individual and enrolled that identifier may not sell, lease or otherwise disclose the identifier absent consent. There are, of course, some exceptions, such as the disclosure being necessary to provide a product requested by the individual. In addition, a person generally may not use or disclose a biometric identifier for a purpose that is materially inconsistent with the terms under which the identifier was originally provided.
  • Persons that possess biometric identifiers of individuals that have been enrolled for a commercial purpose must (i) implement reasonable safeguards to protect against unauthorized access to, or acquisition of, the identifiers, and (ii) not retain the identifiers longer than necessary to carry out certain functions, such as providing the product for which the identifier was acquired.
  • There is no private right of action under the new Washington law. It is to be enforced by the state’s Attorney General. Remember that Illinois’ BIPA does permit persons to sue for violations of that law.

To understand how the law applies, one needs to review the defined terms. For example, the term “biometric identifiers” means:

data generated by automatic measurements of an individual’s biological characteristics, such as a fingerprint, voiceprint, eye retinas, irises, or other unique biological patterns or characteristics that is used to identify a specific individual. “Biometric identifier” does not include a physical or digital photograph, video or audio recording or data generated therefrom, or information collected, used, or stored for health care treatment, payment, or operations under the federal health insurance portability and accountability act of 1996.

The law also defines “commercial purpose” to mean:

a purpose in furtherance of the sale or disclosure to a third party of a biometric identifier for the purpose of marketing of goods or services when such goods or services are unrelated to the initial transaction in which a person first gains possession of an individual’s biometric identifier.

And the term “enroll” means:

to capture a biometric identifier of an individual, convert it into a reference template that cannot be reconstructed into the original output image, and store it in a database that matches the biometric identifier to a specific individual.
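
For the technically inclined, here is a loose illustration of that “enroll” concept, using entirely hypothetical data. Real biometric systems convert a capture into a specialized template matched with fuzzy-comparison algorithms; a cryptographic hash is used below only to illustrate the statutory requirement that the stored reference cannot be reconstructed into the original.

```python
import hashlib

def enroll(user_id, biometric_features, database):
    """Capture a biometric identifier, convert it to a one-way reference
    template, and store it in a database keyed to a specific individual.

    A SHA-256 digest stands in for a real template format here purely to
    illustrate irreversibility; production systems use fuzzy matching.
    """
    # Hypothetical feature vector, e.g., fingerprint minutiae coordinates.
    encoded = ",".join(f"{v:.4f}" for v in biometric_features).encode()
    template = hashlib.sha256(encoded).hexdigest()
    database[user_id] = template  # the template maps back to one person
    return template

db = {}
enroll("employee-1042", [12.5, 33.1, 87.0, 45.2], db)
print(db)  # the stored digest cannot be reversed into the raw features
```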

The use of biometrics and biometric identifiers in commercial transactions and for other purposes is growing, and so is the number of state laws intending to protect that kind of data. Businesses that use or disclose biometrics in carrying out their business should carefully consider whether this new state law applies and, if so, what they need to do to comply.

Illinois Class Actions Spark New Attention For Biometric Data Applications

Capturing the time employees work can be a difficult business. In addition to the complexity involved with accurately tracking arrival times, lunch breaks, overtime, etc. across a range of federal and state laws (check out our Wage and Hour colleagues who keep up on all of these issues), many employers worry about “buddy punching” or other situations in which time entered into their time management system is entered by a person other than the employee to whom the time relates. To address that worry, some companies have implemented biometric tools to validate time entries. A simple scan of an individual’s fingerprint, for example, can validate that the individual is the employee whose time is being entered. But that simple scan can come with significant compliance obligations, as well as exposure to litigation, as discussed in a recent Chicago Tribune article.

The use of biometric data still seems somewhat futuristic and high-tech, but the technology has been around for a while, and there are already a number of state laws addressing the collection, use and safeguarding of biometric information. We’ve discussed some of those here, including the Illinois Biometric Information Privacy Act (BIPA), which is the subject of the litigation referenced above. Notably, the Illinois law permits individuals to sue for violations and, if successful, to recover liquidated damages of $1,000 or actual damages, whichever is greater, along with attorneys’ fees and expert witness fees. The liquidated damages amount increases to $5,000 if the violation is intentional or reckless.
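
To put those numbers in perspective, here is a rough, back-of-the-envelope calculation of potential BIPA exposure. It assumes, purely for illustration, one violation per individual; how violations accrue is a contested legal question, and the workforce figures are hypothetical.

```python
def bipa_exposure(num_individuals, actual_damages_per_person=0.0, intentional=False):
    """Greater of liquidated damages ($1,000, or $5,000 for intentional
    or reckless violations) or actual damages, per person, before
    attorneys' fees and expert witness fees."""
    liquidated = 5000 if intentional else 1000
    return num_individuals * max(liquidated, actual_damages_per_person)

# Hypothetical workforce of 500 employees scanned without required consent.
print(bipa_exposure(500))                    # 500 x $1,000 = $500,000
print(bipa_exposure(500, intentional=True))  # 500 x $5,000 = $2,500,000
```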

For businesses that want to deploy this technology, whether for time management, physical security, validating transactions or other purposes, there are a number of things to be considered. Here are just a few:

  • Is the company really capturing biometric information as defined under the applicable law? New York Labor Law Section 201-a generally prohibits the fingerprinting of employees by private employers. However, a biometric time management system may not actually be capturing a “fingerprint.” According to an opinion letter issued by the State’s Department of Labor on April 22, 2010, a device that measures the geometry of the hand is permissible as long as it does not scan the surface details of the hand and fingers in a manner similar or comparable to a fingerprint. But, under BIPA, this distinction may not work in some cases. “Biometric information” means any information, regardless of how it is captured, converted, stored, or shared, based on an individual’s biometric identifier used to identify an individual, such as a fingerprint. As a federal district court explained: “The affirmative definition of ‘biometric information’ does important work for [BIPA]; without it, private entities could evade (or at least arguably could evade) [BIPA]’s restrictions by converting a person’s biometric identifier into some other piece of information, like a mathematical representation or, even simpler, a unique number assigned to a person’s biometric identifier. So whatever a private entity does in manipulating a biometric identifier into a piece of information, the resulting information is still covered by [BIPA] if that information can be used to identify the person.”
  • How long should biometric information be retained? A good rule of thumb: avoid keeping personal information for longer than is needed. The Illinois statute referenced above codifies this rule. Under that law, biometric identifiers and biometric information must be permanently destroyed when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within 3 years of the individual’s last interaction with the entity collecting it, whichever occurs first. A short sketch of this deadline calculation appears after this list.
  • How should biometric information be accessed, stored and safeguarded? Before collecting biometric data, companies may need to provide notice and obtain written consent from the individual. This is the case in Illinois. As with other personal data, if it is accessible to or stored by a third party services provider, the company should obtain written assurances from its vendors concerning such things as minimum safeguards, record retention, and breach response.
  • Is the company ready to handle a breach of biometric data? Currently, 48 states have passed laws requiring notification of a breach of “personal information.” Under those laws, the definitions of personal information vary, and the definitions are not limited to Social Security numbers. A number of them include biometric information, such as Connecticut, Illinois, Iowa and Nebraska. Accordingly, companies should include biometric data as part of their written incident response plans.
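
As promised above, here is a minimal sketch of the Illinois retention rule, using hypothetical dates: destruction is due when the initial collection purpose is satisfied or three years after the individual’s last interaction with the entity, whichever comes first.

```python
from datetime import date

def bipa_destruction_deadline(purpose_satisfied, last_interaction):
    """Earlier of: the date the collection purpose was satisfied, or
    three years after the individual's last interaction with the entity."""
    three_year_backstop = last_interaction.replace(year=last_interaction.year + 3)
    return min(purpose_satisfied, three_year_backstop)

# Hypothetical employee who left on 2017-06-30; the time-clock purpose was
# satisfied at separation, so destruction is due immediately.
print(bipa_destruction_deadline(date(2017, 6, 30), date(2017, 6, 30)))
# If the purpose somehow continued past separation, the backstop controls:
print(bipa_destruction_deadline(date(2021, 1, 1), date(2017, 6, 30)))  # 2020-06-30
```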

The use of biometrics is no longer something seen only in science fiction movies or police dramas on television. It is entering the mainstream, including the workplace and the marketplace. Businesses need to be prepared.

Unsolicited Call Without Charge Held a Violation of TCPA

Recently, the United States Court of Appeals for the Third Circuit was called upon to determine whether an unsolicited call that did not result in a charge to the consumer violated the Telephone Consumer Protection Act (“TCPA”) and, if it did, whether the harm was sufficiently concrete to give the plaintiff standing to sue. Susinno v. Work Out World, Inc. (3d Cir. July 10, 2017).

In this case, plaintiff alleged that she received an unsolicited call on her cell phone from a fitness company. She did not answer the call and the company left a prerecorded offer on her voicemail lasting one minute. Plaintiff’s complaint asserted that the phone call and message violated the TCPA’s prohibition of prerecorded calls to cell phones. The lower court dismissed the case on defendant’s motion, but the Third Circuit reversed.

On appeal, the defendant argued that the TCPA does not prohibit a single prerecorded call if the phone’s owner is not charged for the call. The appellate court disagreed with the defendant’s statutory interpretation, citing a provision of the TCPA indicating that calls to a cell phone “that are not charged to the called party” can still implicate the “privacy rights” Congress “intended to protect.” Thus, the court ruled, plaintiff adequately alleged a violation of the TCPA.

With regard to the issue of concrete injury, the court relied upon the U.S. Supreme Court’s 2016 decision in Spokeo v. Robins, which held that standing to pursue a violation of a federal law requires a “concrete injury.” Here, the Third Circuit ruled that plaintiff alleged a concrete injury because the injury alleged was the very injury the statute was intended to prevent.

Telemarketers and other businesses are cautioned to comply with applicable provisions of the TCPA and consider seeking counsel before embarking down a questionable path.

An Ounce of Data Breach Prevention…Address Attorney-Client Privilege in Your Breach Planning

Data breach “horror” stories have become a new staple in today’s business environment. The frequency of attacks that threaten (or compromise) the security of business networks and information systems continues to increase — in the health care space alone (which holds the dubious honor of Most Likely To Be Attacked), an FBI and HHS Office for Civil Rights report notes that ransomware attacks occur at a rate of 4,000 per day, a four-fold increase from 2015. Experienced forecasters predict that cyber-attacks will only grow more frequent. Yet, although data security and breach response are constantly in the headlines, studies demonstrate that organizations remain unprepared to respond effectively to a data breach.

For entities covered under HIPAA (or their business associates), or subject to other state or federal cybersecurity regulations (such as the NYS DFS regulations we previously discussed in our articles, Getting Prepared for the New York Department of Financial Services’ Proposed Cybersecurity Regulations, and New York Releases Revised Proposed Cybersecurity Regulations), breach response preparedness is required. That preparedness includes periodic risk assessment and development of an effective incident response plan. Breach response readiness is not only required for many organizations; it is simply sound business practice in today’s environment.

Is your organization ready? It may have an incident response plan, drafted a couple of years ago, adorning a forlorn shelf (blow the dust off carefully), but perhaps the plan has not been updated or tested, or staff has not been trained (and re-trained) — or legal counsel may not have provided input on the plan.

Legal counsel is valuable not only to provide input on legal definitions, notification processes, and third party contract provisions in the incident response plan. Another important benefit to including legal counsel in the planning process (as well as data breach response) is to ensure that the incident response plan is drafted to appropriately address legal counsel’s role, thereby protecting attorney-client/work product privileges. These protections are not absolute – in fact, there is significant case law discussing how and when they apply. Therefore, legal counsel should be involved in plan development and the plan should clearly provide that investigations are initiated and overseen by legal counsel as part of the breach response (and litigation risk assessment) process.

A May 18, 2017 decision of the United States District Court in the Central District of California underscores the benefits of legal counsel in breach response preparation and planning. In this decision, rendered in the context of the Experian breach litigation, the plaintiffs sought access to a forensic consultant’s report. The forensic consultant had been retained by Experian’s legal counsel immediately after the breach was discovered by Experian, and the report was used by legal counsel to develop a legal strategy for Experian’s response to the breach. The plaintiffs claimed the report should be disclosed because it was also used for the purpose of meeting Experian’s legal duty to investigate the data breach.

Although the forensic consultant had previously worked for Experian (performing a very similar analysis), the court noted that the consulting firm was retained by legal counsel for this engagement, and that legal counsel directed the form and content of the report so that only portions could be disseminated to Experian’s incident response team without waiving privilege. The court held that this demonstrated the report was work product and should not be disclosed to the other side.

The decision discusses another important point – whether the plaintiffs were entitled to disclosure of the report because they would not be able to re-create the investigation of the servers as it was performed on “live” operating networks, and therefore would suffer a substantial hardship. In this case, however, the report was prepared using server images, rather than the live systems. Consequently, the court held that there was no substantial hardship calling for the report to be disclosed.

At Jackson Lewis, our 24/7 Data Incident Response Team is prepared to assist with your planning and ready to assist if (when?) a breach occurs. Our data breach hotline is: 844-544-5296.


Public-Private Partnerships Could Bolster Healthcare Cybersecurity Efforts

Protecting data in the healthcare industry continues to be an area of focus for regulators and lawmakers. HIPAA Journal noted that in 2016 more HIPAA covered entities reported breaches than in any other year since the U.S. Department of Health and Human Services (“HHS”) Office for Civil Rights started publishing breach summaries on its “Wall of Shame” in 2009. Almost all of these breaches affected healthcare providers. Add to the mix the global cyberattacks we saw in May 2017 and the growing threat from ransomware, and you can see a perfect storm forming.

One potential aid in weathering this storm is the public-private partnership discussed at a recent Congressional hearing before the U.S. House of Representatives Energy and Commerce Subcommittee on Oversight and Investigations. Subcommittee Chairman Representative Tim Murphy (R-PA) called cybersecurity in the healthcare sector “essential” and encouraged healthcare institutions to continue ongoing efforts to form an effective public-private partnership to assist in these efforts.

The hearing focused on the National Health Information Sharing and Analysis Center (“NH-ISAC”), which is a global, nonprofit organization whose members represent approximately one-third of the U.S. health and public health GDP. There are approximately 200 members of the NH-ISAC. The purpose of an ISAC is to help private sector entities share cyber-related threat information with one another. The NH-ISAC works closely with HHS in its efforts to combat cyber threats.

During the hearing, Denise Anderson, the President of the NH-ISAC, noted there are many small healthcare providers like physician practices, chiropractor offices and dental practices that are vulnerable to cyberattacks and would benefit from education through the NH-ISAC. She also stated that she was concerned that many small and mid-sized providers do not even realize the NH-ISAC exists.

Several examples were given at the hearing of NH-ISAC work that could help smaller healthcare providers reduce their vulnerability to cyberattacks. One example of that work is the CyberFit suite of services, which Anderson explained allows members to leverage the NH-ISAC community to realize cost savings and efficiencies. Another was the Medical Device Security Information Sharing Council, a forum for manufacturers and hospitals to interact and collaborate in order to advance medical device security and safety. There also was testimony at the hearing regarding an NH-ISAC project in which different members create portions of a security incident response plan or a security operations plan, and then donate that into the public domain or at least into the healthcare sector.

Members of the committee acknowledged the serious consequences that cyberattacks could have for the healthcare sector. These members also expressed interest in the efforts of the NH-ISAC to increase its membership and improve cybersecurity in the healthcare sector. In this environment of heightened cyber-threats and HIPAA enforcement, healthcare providers may wish to consider including the NH-ISAC as a resource in their cybersecurity efforts.

Strengthening Data Security Through Human Resources and Information Technology Teamwork

Human Resources (“HR”) and information technology (“IT”) departments play unique and important roles within an organization. With instances of data breaches on the rise, however, companies should be mindful of the importance of regular communication and collaboration between employees in these departments with respect to issues of data security. Addressing such issues should not be tasked only to HR employees or IT departments but, rather, employees from both departments should work in collaboration toward creating and maintaining data protection processes.

Among other things, employees in HR and IT departments should work together to create data security policies and procedures and to help ensure those policies are aligned and effective. In addition to partnering in the formation of data security policies and practices, HR and IT departments should join forces to provide practical training to employees on issues such as avoiding data breaches brought about by phishing emails, ransomware attacks, or other scams that place data security at risk. Teamwork between HR and IT departments also is important in identifying and responding to potential and actual data breaches, and in consistently and appropriately addressing data security policy violations by employees whose conduct has put, or might put, the security of a company’s data at risk.

Collaboration between HR and IT professionals builds a more fortified defense against potential data breaches or other data security issues and makes a company better prepared to respond in the event of a breach. Employees in IT departments can provide valuable insight to HR employees, who have varying degrees of knowledge about IT and its many attendant risks. In turn, HR employees can work with IT employees toward implementation and enforcement of policies geared toward best practices in protecting data.

Working alone, HR and IT departments can make strides in furtherance of data protection. But working together hand-in-hand, they can provide an organization with greater protection from the risk of a data breach and place a company on stronger footing with respect to identifying and responding to the risks and consequences of a potential or actual data breach.

Lyft Drivers Allege Uber Spied on Them for Competitive Edge

Co-author: Devin Rauchwerger 

A former Lyft driver filed a class action lawsuit in the Northern District of California against Uber, alleging Uber violated the Electronic Communications Privacy Act (“ECPA”) and the California Invasion of Privacy Act (“CIPA”), and committed common law invasion of privacy and unfair competition.  The plaintiff seeks to represent two classes: 1) all individuals in the U.S. who worked as Lyft drivers while not working for Uber and whose private information and whereabouts were obtained through Uber’s unlawful access of Lyft computer systems; and 2) a similar California class.  The lawsuit estimates the national class at 126,000 individuals or more.

Plaintiff alleges Uber developed a spyware system named “Hell” that permitted Uber to access Lyft computer systems by posing as Lyft customers.  By posing as a customer, Uber could determine the location of up to eight nearby Lyft drivers and obtain each driver’s unique Lyft ID.  Once Uber had a particular driver ID, it could track that driver’s location indefinitely.

The lawsuit further alleges Uber used the information gathered from Lyft drivers to determine how many drivers Lyft had in particular areas, what the average charge was for rides, and which Lyft drivers were also driving for Uber.  Uber then offered incentives to these dual-platform drivers to encourage them to use only Uber.

It is also believed another objective of the Hell program was to generate more rides for Uber drivers who were also using the Lyft platform.  If several Uber drivers were in the area when a pickup was requested, Uber’s program would route the rider to the driver who was also driving for Lyft, ensuring that the driver worked more frequently for Uber.

Uber was already on shaky ground from a privacy perspective even before information about the Hell program was first released in April 2017.  In March of this year, a report surfaced about a different program, called Greyball, which Uber initially created to avoid abusive riders.  The report claimed Uber used the Greyball program to evade government regulators who were attempting to catch Uber drivers operating in restricted or banned areas.  The Greyball program served law enforcement members a fake version of the Uber app, which prevented them from successfully hailing rides in those areas.  Use of the program stopped after it was publicly discovered.

While Uber has not officially admitted to using the Hell program, it also has not publicly denied the program’s existence.

We will continue to monitor developments in this lawsuit, as well as decisions regarding Uber’s other questionable privacy practices. As this incident illustrates, this area of the law continues to change, but its pace lags behind the pace of technology, so it is important to consult with privacy counsel before implementing new technologies.
