Companies May Soon Have a New Defense Against Cyber-Attacks

Co-author: Devin Rauchwerger 

The Active Cyber Defense Certainty Act is a proposed bill that is gaining bipartisan support and significant interest from business communities, lawmakers and academics. The bill would amend the Computer Fraud and Abuse Act, which in its current form does not adequately deter criminal hacking. It is aimed at helping businesses that fall prey to cyber criminals defend themselves online by giving victims of computer intrusions unprecedented rights.

Previously, under the Computer Fraud and Abuse Act, a company was either required to enlist local law enforcement after the fact or risk facing prosecution for hacking back. The new bill affords a victim a number of defensive measures. Specifically, under the bill, a victim of a cyber-attack can access the attacker’s computer without authorization to gather information in order to establish attribution of criminal activity, share information with law enforcement, and stop unauthorized activity against the victim’s network. However, a victim cannot destroy information on the hacker’s computer, cause physical injury to another person, or create a threat to the public health or safety.

There are several concerns, however, about the proposed bill that have sparked debate. Giving companies the ability to hack back may not be the best approach to defending against cyber-attacks. Instead, it may be more effective and prudent for companies to engage the assistance of law enforcement, government agencies and internet service providers. Giving companies the ability to attack the computers of suspected hackers can also raise national security concerns if, for instance, the hacker is a foreign country. There are ethical considerations as well, such as the risk of causing harm to innocent parties.

Also potentially problematic is the bill’s provision that the protection afforded the victim disappears if the victim “destroys the information stored on a computer of another.” The statute does not currently differentiate between purposeful and accidental destruction of information. Companies may be wary of acting if they would lose the protection by accidentally destroying information in their attempt to stop the cyber-attack. The current language also suggests that a company cannot destroy whatever partial information the cyber-attacker illegally obtained from the victim.

Notably, there are also drafting issues with the bill. Several terms in the act are vague and open the door to a variety of problems. For example, the term “victim” is defined as “an entity that is a victim of a persistent unauthorized intrusion of the individual entity’s computer.” The term “persistent” is difficult to define: is persistence measured by the number of separate cyber-attacks a company falls victim to, or by the duration of one particular cyber-attack? Theoretically, under the current language, a victim of a cyber-attack lasting only 30 seconds may not be afforded the protection of the Act. For all these reasons, the bill will likely need significant revisions before it will pass.

While there are still several kinks that need to be worked out, this is clearly a positive step towards companies being able to defend themselves from cyber-attacks without facing legal repercussions.

At Last, the Final DFS Cybersecurity Regulations….

We wanted to keep you informed on the progress of the DFS cybersecurity regulations, as they complete their journey through the approval process. DFS has been working on the regulations since its 2013-2014 studies on cybersecurity risks to financial institutions. As reported in our article, Getting Prepared for the New York Department of Financial Services’ Proposed Cybersecurity Regulations, the original proposed regulations were published on September 28, 2016. The revised regulations were published on December 28, 2016 (see our article, New York Releases Revised Proposed Cybersecurity Regulations). A Notice of Adoption was published in the New York State Register last week announcing the adoption of the final regulations.

The Notice of Adoption noted that DFS received 60 comments on the revised regulations published on December 28, 2016, but that it determined most of the requested changes to the scope, wording or meaning of the regulations were unnecessary. Several commenters asked DFS to hold off on finalizing the regulations until the federal government had implemented regulations, or to make efforts to harmonize the DFS regulations with existing (or proposed) state and federal standards, but DFS rejected these suggestions, stating that “it is vitally important to establish regulatory minimum standards for cybersecurity practices to address challenges currently facing the New York financial services sector.”

So what changes were made? To recap, the December 28, 2016 regulations made several key changes (see our article, New York Releases Revised Proposed Cybersecurity Regulations). The final regulations include “nonsubstantive” changes to several sections, including:

  • A tweak to the definition of “penetration testing” in Section 500.01(h) (“unauthorized penetration testing” was changed to “penetration testing”);
  • The responsibilities for implementing a cybersecurity program in Section 500.02 were clarified by language that states the covered entity may adopt “the relevant and applicable provisions of” a cybersecurity program that its affiliates maintain;
  • The Penetration Testing and Vulnerability Assessment provisions of Section 500.05 were revised to delete some duplicative language relating to periodic penetration testing;
  • The required retention period for audit trails in Section 500.06 was decreased to three years (from five);
  • The events that must be reported to DFS under Section 500.17 were clarified (by eliminating some extraneous language) and language was added to clarify that the annual report would cover the prior calendar year;
  • The exemptions in Section 500.19 were revised to clarify that the thresholds in (a) are applied taking into account the operations of a covered entity and its affiliates, to clarify the scope of the exemptions (a new subsection (d) was added to exempt Article 70 entities), and to clarify that notices of exemption must be filed within 30 days of determining that the entity is exempt.

The transitional periods for compliance with the final regulations have not changed in the final rule. Compliance is required within one year for the regulations relating to:

  • Annual reporting to the covered entity’s board
  • Penetration testing and vulnerability assessments
  • Risk assessments
  • Multi-factor authentication
  • Cybersecurity awareness training

Compliance is required within 18 months for the regulations relating to:

  • Audit trails
  • Application security
  • Limitations on data retention
  • Monitoring the activity of authorized users
  • Encryption of nonpublic information

There is a two-year period for compliance with the third-party service provider provisions; for all other provisions, entities should be in compliance within 180 days.

The final regulations continue to define a “Covered Entity” as “any Person operating under or required to operate under a license, registration, charter, certificate, permit, accreditation or similar authorization under the Banking Law, the Insurance Law, or the Financial Services Law.” Please see our earlier article, Getting Prepared for the New York Department of Financial Services’ Proposed Cybersecurity Regulations, for additional discussion of the coverage of the regulations, and our article New York Releases Revised Proposed Cybersecurity Regulations for a discussion of the exemptions under the revised regulations.

 

Employer Denied Access to Employee GPS Data

A federal district court in Indiana recently denied an employer’s motion to compel discovery of employee GPS data in defense of an action brought under the Fair Labor Standards Act (FLSA). Crabtree v. Angie’s List, Inc.

Plaintiffs asserted claims for denial of overtime pay during a one-year period in which they worked as Senior Sales Representatives. Plaintiffs often used their personal electronic devices for work purposes. To defend against Plaintiffs’ claims that they worked 10-12 hours per day, the employer sought GPS and location data from Plaintiffs’ phones to “construct a detailed and accurate timeline of when Plaintiffs were or were not working.” Plaintiffs objected, contending that the request implicated significant privacy concerns, especially where the GPS data would not necessarily accurately reflect whether Plaintiffs were working at any given time.

One of the reasons for the employer’s request was that the company kept track of employees’ time by recording when they logged into and out of the SalesForce software on their computers. However, that system did not necessarily reflect whether employees were actually working while logged in. Thus, the company sought the GPS data to help confirm the employees’ physical location.

In analyzing the employer’s motion to compel, the court expressed concern with the fact that disclosing GPS data from a personal device covering 24 hours per day for one year would result in tracking Plaintiffs’ movements well outside of their working time. The court stated that the employer “has not demonstrated that the GPS/location services data from Plaintiffs’ electronic devices would be more probative” than data already in the company’s possession. As such, the court ruled that the examination of Plaintiffs’ personal devices “is not proportional to the needs of the case.”

Based on this case, it is apparent that courts continue to be protective of personal data. As we have recently emphasized, it is important for employers to find methods by which they can accurately track employee work time without having to rely on data from the employee’s personal devices.

Eleventh Circuit Upholds Company Claims Against Former Executive For Unlawful Access to Email

A terminated executive who accessed co-worker emails in the process of reporting possible company wrongdoing lost his appeal on several grounds. In Brown Jordan Int’l, Inc. v. Carmicle, the Eleventh Circuit found that the employee violated both the Stored Communications Act (SCA) and the Computer Fraud and Abuse Act (CFAA).

Carmicle reported to the company concerns that a second set of financial projections was being prepared to the detriment of shareholder value. Carmicle acknowledged that he obtained much of the information by secretly accessing co-worker emails. He did so by using a universal password issued as part of an email conversion for employees who had failed to create their own personal passwords. Carmicle subsequently was terminated, among other reasons, because an investigator found his allegations of impropriety were without merit.

The appellate court upheld the ruling that Carmicle violated the CFAA despite his argument that Brown Jordan suffered no “loss” as required by the law. Carmicle argued that there was no damage because the company did not experience an “interruption of service” and there was no damage to the computers. However, the company maintained it suffered a loss by, among other things, engaging an outside consultant to assess how Carmicle accessed the emails. Based on this expense, the appellate court found the company sustained a “loss” under the CFAA. The court held that “loss” can include the reasonable costs incurred in responding to a violation, assessing the damage done, and restoring the affected data to its condition prior to the violation.

Finally, the court rejected Carmicle’s argument that his access was authorized under the SCA based on a company policy stating that employees have no expectation of privacy and that the company has the right to monitor email communication. The Eleventh Circuit found that it would be “unreasonable” to permit someone to exploit a generic password to access emails without prior authorization and without any suspicion of wrongdoing.

Notwithstanding the outcome in this case, companies are reminded to take steps to ensure privacy protocols are in place and up-to-date. In this day and age, it is reasonable to assume that someone – whether from outside the company or within – may seek access to your network.

$3.2M Fine for Failure to Protect Electronic Records

The Department of Health and Human Services Office of Civil Rights (“OCR”) fined a Texas hospital $3.2 million for its impermissible disclosure of unsecured electronic protected health information (ePHI) and non-compliance over many years with multiple standards of the HIPAA Security Rule.

Children’s Medical Center of Dallas filed breach reports with OCR in 2010 and again in 2013. The first report indicated the loss of an unencrypted, non-password protected BlackBerry device at the Dallas/Fort Worth International Airport on November 19, 2009. That device contained the ePHI of approximately 3,800 individuals. On July 5, 2013, the medical center filed a separate HIPAA Breach Notification Report with OCR, reporting the theft of an unencrypted laptop from its premises sometime between April 4 and April 9, 2013. The medical center reported the laptop contained the ePHI of 2,462 individuals.

OCR’s investigation found that, despite knowledge of the risk of maintaining unencrypted ePHI on its devices as early as 2007 (identified through the medical center’s own risk assessments), the medical center failed to implement risk management plans and failed to deploy encryption or an equivalent alternative measure on all of its laptops, workstations, mobile devices and removable storage media until at least April 9, 2013. When announcing the fine, OCR stated that “a lack of risk management not only costs individuals the security of their data, but it can also cost covered entities a sizable fine.” This fine indicates that even with the change of administration, OCR seems likely to continue its aggressive approach to HIPAA enforcement.

This action demonstrates again the importance of creating a culture of security in which employees are cognizant of the potential ill effects of failing to safeguard personal information. This is especially true as OCR’s enforcement activities are not simply focused on the harm to individuals, but on compliance. HIPAA covered entities and business associates should regularly assess their risk of disclosing protected health information and, just as importantly, address the issues identified during those assessments, including implementing appropriate safeguards and conducting regular HIPAA training for employees.

Expert Insights on Developing a Physical Security Program

In today’s digital age, security tends to be thought about in terms of firewalls, malware, encryption and other safeguards for electronic systems. But the security of those systems, as well as an organization’s facilities, people and other critical assets, depends significantly on physical security as well. We are delighted to share below some thoughts from an ASIS board-certified expert in security management, Scott Soltis, CPP and CEO of HMS Security & Risk Management Solutions.

The protection of assets in all their forms (people, property and information) is critical to the success of all organizations. This article highlights access control and physical security models and summarizes many industry “best practice” concepts.

The need for physical security and premises protection has existed for thousands of years, and access control can be found throughout historical architecture. Dating back to the time of Caesar, physical structures were protected by gates, walls and other barriers. In the Dark Ages, many kingdoms were built atop high mountains or hills, or used moats and drawbridges, to keep unauthorized persons from gaining access to their castles.

With modernization, physical security has quickly evolved from traditional locks and keys to sophisticated computerized, network-based electronic access control systems, which can use unique credentialing approaches to identify and authorize an individual’s entry into an area. As companies expand and compete in the global marketplace, security programs are being pressed for more efficiency and cost reduction. Companies facing global competition also face the threat of industrial espionage.

Workplace violence, including active violence, remains a consistent threat to U.S. companies and organizations. While this article does not focus on the importance of a comprehensive workplace violence prevention program, a successful physical security program provides a core mitigating factor protecting employees against the threat of harm. Physical security programs help reduce business risks and susceptibility to lawsuits and civil litigation, and assist in protecting the assets of an organization.

Developing a Physical Security Program

A typical physical security program requires multiple layers of protection, with each layer becoming progressively more difficult to penetrate as one moves inward toward the asset. Each layer will have multiple controls that aid in the protection of the assets. The function of each of the physical security layers is to deter, detect, delay, deny, and defend against loss.

For the physical security program to be effective, the organization must develop and maintain controls, including policies and procedures, personnel management and training, physical barriers and controls, access control equipment, and adequate reporting and records management processes or systems.

Prior to deploying a physical security program, it is recommended that a qualified security professional conduct a Threat, Vulnerability/Risk Assessment (TVRA). This assessment should include but not be limited to:

  • determining the existing levels of security,
  • identifying areas of improvement in the physical security program,
  • establishing the appropriate levels of protection needed, and
  • recommending controls to enhance the overall security program.

Following the completion of the TVRA, a security program can be designed or modified to meet the needs of the organization and to ensure that the program is adaptable enough to manage existing as well as future threats. A well-implemented security program will include a continual improvement process that adjusts the program to environmental changes and provides regular updates that test the effectiveness of the program elements.

Having a qualified security professional implement a security program will reduce an organization’s security risks and, more importantly, provide a method for the organization to meet the duty of care its employees would expect.

For more information on this topic, contact Scott Soltis at: scott.soltis@hmsent.com

GPS Tracking and Smartphone Apps – Get Consent!

With the proliferation of satellite navigation systems and smartphones, many employers have contemplated using GPS tracking to increase efficiency and, frankly, to keep a better eye on their employees during the work day. While the use of GPS tracking in a vehicle can be lawful, there are some limitations to keep in mind.

First, you have to keep in mind an employee’s potential right to privacy while in the company vehicle. Make sure you have a policy in place that informs the employee that the vehicle has a GPS system installed that will track their whereabouts. If the GPS system has other functionality, like tracking speed, gas consumption and driving behaviors, the employee should be put on notice of those things as well. Some GPS systems also have video and audio recording features. All of those things should be explicitly disclosed to diminish the employee’s expectation of privacy while operating the company vehicle.

Second, a number of states limit when and how a GPS system can be installed. For example, in California there is no statute expressly limiting the installation of a GPS system on a company vehicle, but California Penal Code section 637.7 limits when a GPS system can be installed on someone else’s vehicle. However, the statute does not apply if the owner, lessor or lessee of the vehicle consents to the installation of the GPS device.

Minnesota’s restriction on the installation of a GPS tracking device is similar, but broader in its application. (Minn. Stat. 626A.35.) Rather than limiting only the installation of a tracking device, Minnesota’s statute prohibits use of a mobile tracking device without a court order, unless consent is obtained from the owner “of the object to which the mobile tracking device is attached…” There are similar laws in Tennessee (Tenn. Code § 39-13-606) and Texas (Texas Penal Code § 16.06).

These statutes create a conundrum for employers who have their employees install GPS tracking apps on their smartphones. Arguably, the statutes would not cover that situation because they require that the tracking device be “attached,” and it is not clear whether installing an app means the app is “attached.” Given the ambiguity in the wording of the statutes, if an employer is going to require the installation of a tracking app on a smartphone, the best practice to avoid potential invasion of privacy claims is to obtain express consent from the employee. Just as with a GPS device, the employee should be put on notice of the types of data and information the app will track.

There are additional considerations, such as when the tracking device is actually tracking the employee. To avoid invasion of privacy claims, tracking devices should not be active when the employee is not working.

This area of the law continues to change, but its pace lags behind changes in technology, so it is important to consult with your employment counsel before implementing new technologies.

IRS Issues Warning About W-2 Cyber-Scams, Especially for Schools, Nonprofits and Tribal Organizations

On February 2, 2017, the IRS issued a warning to all employers regarding the resurgence of a W-2 based cyber scam. The scam, which targets the corporate world during tax season, is currently “spreading to other sectors, including school districts, tribal organizations and nonprofits.” (irs.gov/news-events).

This cyber-scam is simple but highly successful. It consists of an e-mail sent to an employee in the Human Resources or Accounting department from an executive within the organization. Both the “To” and “From” e-mail addresses are accurate internal addresses, as are the sender’s and recipient’s names. The e-mail requests that the recipient forward the company’s W-2 forms, or related data, to the sender. This request aligns with the job responsibilities of both parties to the email.

Despite appearances, the e-mail is a fraud. The scammer is “spoofing” the executive’s identity. In other words, the cyber-criminal assumes the identity and e-mail address of the executive for the purpose of sending what appears as a legitimate request. The recipient relies on the accuracy of the sender’s e-mail address, coupled with the sender’s job title and responsibilities, and forwards the confidential W-2 information. The forwarded information goes to a hidden e-mail address controlled by the cyber-criminal.
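For those curious how the spoof shows up technically: the forged message typically displays a legitimate internal “From” address while quietly routing replies to a mailbox the attacker controls. Below is a minimal Python sketch, offered purely as an illustration, that flags one common tell, a “Reply-To” domain that differs from the “From” domain. The addresses, the sample message, and the single heuristic are assumptions for demonstration only, not a complete or recommended defense.

```python
# Minimal sketch (illustrative only): flag messages whose Reply-To domain
# differs from the From domain, a frequent sign of a spoofed request.
# All addresses and domains below are hypothetical examples.
from email import message_from_string
from email.utils import parseaddr

def reply_to_mismatch(raw_message: str) -> bool:
    """Return True if the Reply-To domain differs from the From domain."""
    msg = message_from_string(raw_message)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    if not reply_addr:  # no Reply-To header: replies simply go back to From
        return False
    from_domain = from_addr.rpartition("@")[2].lower()
    reply_domain = reply_addr.rpartition("@")[2].lower()
    return from_domain != reply_domain

# Hypothetical spoofed message resembling the W-2 scam described above.
raw = (
    "From: Jane Executive <jane.executive@example.com>\n"
    "Reply-To: jane.executive@attacker-mailbox.example.net\n"
    "To: payroll@example.com\n"
    "Subject: W-2 forms needed\n"
    "\n"
    "Kindly send me the 2016 W-2 PDFs for all staff.\n"
)
if reply_to_mismatch(raw):
    print("Caution: Reply-To domain differs from From domain; verify the request by phone.")
```

In practice, organizations should rely on mail-gateway protections such as SPF, DKIM and DMARC rather than ad hoc scripts; the point of the sketch is simply that a spoofed message often leaves traces in its headers.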

When successful, the cyber-criminal obtains a trove of sensitive employee data that may include names, dates of birth, addresses, salary information, and Social Security numbers. This information is used to file fake tax returns and refund requests, or is sold on the dark web to perpetrators of identity theft.

The IRS gives examples of these W-2 e-mail requests on its website:

  • “Kindly send me the individual 2016 W-2 (PDF) and earnings summary of all W-2 of our company staff for a quick review.”
  • “Can you send me the updated list of employees with full details (name, Social Security Number, Date of Birth, Home Address, Salary).”
  • “I want you to send me the list of W-2 copy of employees wage and tax statement for 2016. I need them in PDF file type, you can send it as an attachment. Kindly prepare the lists and email them to me asap.”

These cyber-scams, known as business email compromise (BEC) attacks, or CEO spoofing, are a form of ‘spear phishing.’ Spear phishing targets a specific victim using personal or organizational information to elicit the victim’s trust. The cyber-criminal obtains and uses information such as personal and work e-mail addresses, job titles and responsibilities, names of friends and colleagues, personal interests, etc. to lure the victim into providing sensitive or confidential information.  Quite often, the scammers cull this information from social media, LinkedIn, and corporate websites. The method is both convincing and highly successful.

While an organization can use firewalls, web filters, malware scans or other security software to hinder spear phishing, experts agree the best defense is employee awareness. This includes ongoing security awareness training (see our white paper with best practices for setting up a training program) for all levels of employees, simulated phishing exercises, internal procedures for verifying transfers of sensitive information, and reduced posting of personal information on-line.

Although simple, the W-2 e-mail scam can have a devastating impact on an organization and its employees. And, although equally simple, employee awareness can help prevent it.

Instances of W-2 or similar attacks should be reported to the IRS at phishing@irs.gov and the Internet Crime Complaint Center of the FBI.

 

Mary Costigan is working with our Privacy, e-Communications and Data Security Group as part of an externship with Pace University Law School’s New Directions for Attorneys Program.

Email Privacy Act Introduced With Bi-Partisan Support in the House

On January 9, 2017, lawmakers in the House re-introduced legislation, the Email Privacy Act, which, if enacted, would require the government to obtain a court-issued warrant to access electronic communications, including emails and social networking messages, from cloud providers (e.g., Google, Yahoo) when such communications are older than 180 days. Current law, the Electronic Communications Privacy Act (ECPA), only requires court-issued warrants for electronic communications that are 180 days old or less, but authorizes law enforcement and some government agencies — such as the SEC — to obtain electronic communications from cloud providers with a subpoena, issued by a prosecutor without approval of a judge, if the communications are older than 180 days.

Supporters of the Email Privacy Act point out that, when Congress enacted the ECPA in 1986, electronic storage was expensive and email service providers typically deleted electronic communications within 90 days.  Congress, when enacting the ECPA, did not require warrants for electronic communications that were older than 180 days, because such communications were, to the limited extent any existed, considered “abandoned property.” Supporters of the Email Privacy Act contend that Congress looked at then existing technology and never contemplated that one day many people would store their electronic communications with email service providers for well beyond 180 days. The Email Privacy Act would, according to supporters, fix this outdated flaw in the ECPA.

Federal agencies, which have relied on the ECPA, have pushed for there to be no changes to the law. In a 2013 letter to the Senate Judiciary Committee, the Chair of the SEC stated, in opposition to similar legislation, that a warrant requirement would block the SEC from obtaining digital content from service providers; the agency has recently reaffirmed these sentiments. The SEC is a civil agency and lacks authority to issue warrants, relying instead on subpoenas for investigations.

The Email Privacy Act has bi-partisan support in the House, with four Republicans and five Democrats signed on as original co-sponsors of the legislation. The Email Privacy Act has not been introduced in the Senate, and it remains unclear if any senator will sponsor the legislation in that chamber. Senator Lee (R.-Utah), who sponsored the same legislation in the 114th Congress, reportedly does not plan to introduce it again. It is also unclear at this time if President Trump would sign this legislation into law if it passes both the House and the Senate.

In 2016, during the 114th Congress, the Email Privacy Act passed the House unanimously but then stalled in the Senate Judiciary Committee after Senator Cornyn (R-Texas) offered a controversial amendment that would have provided the FBI with expanded surveillance power.

We will continue to monitor this important legislation and post updates as there are new developments.

SCOTUS Won’t Slime Viacom in Class Action Challenging Tracking Children Online

A class action alleging Viacom illegally obtained and disclosed personally identifiable information from children under the age of thirteen through the Nickelodeon website recently reached the end of the line (almost) when the class’s petition for a writ of certiorari was denied by the Supreme Court this month. The high court chose not to further define the contours of what constitutes “personally identifiable information” and “disclosure.”

The drafters of the 1988 Video Privacy Protection Act (“the Act”) likely had no idea that the law passed nearly thirty years ago would be raised to challenge a practice we each encounter hundreds of times per week: the tracking of our IP addresses through the use of cookies on websites. The law prohibits disclosure of personally identifying information relating to viewers’ consumption of video-related services. When it passed, lawmakers probably envisioned prohibiting video rental clerks from sharing the list of videos a particular renter selected. Now, in a world where viewers and followers translate into profits for all who sell, information gained from IP addresses makes it possible for companies to target individuals in ways that were probably never imagined.

The Third Circuit decided, as a matter of first impression, that Viacom had not disclosed personally identifiable information in violation of the Act when it shared IP addresses, collected through cookies, with Google for use in targeted advertising. The court did note a split of authority on whether “static digital identifiers,” such as IP addresses, constitute personally identifiable information because they could, in theory, be combined with other information to identify an individual. Other courts, including the First Circuit, have held that any unique identifier, including an IP address combined with GPS coordinates, could constitute personally identifying information. The decision also stands in contrast to a recent EU ruling, Breyer v. Bundesrepublik Deutschland, E.C.J., No. C-582/14, which held that under certain circumstances IP addresses can constitute personal data protected under EU data protection law. In the Nickelodeon case, however, the court determined the information could not be used to identify a specific individual without extraordinary effort and that the information had not been disclosed.

Advice for Businesses

Businesses striving not to run afoul of the Act can learn valuable lessons from this case. First, do not think narrowly when identifying “personal information.” It is not always as straightforward as a Social Security number or bank account number; think about combinations of information that could enable another person or entity to identify a specific individual. Second, use caution when sharing information about customers or employees, even when it might seem innocuous or unlikely that specific individuals could be identified. Third, do not promise more privacy or data security than you actually provide. The class claim alleging Viacom collected personal information about children, despite its promise not to do so, lives on, and the court described that violation as “highly offensive.”
