VOTE 2016 – The Only Easy Choice

In an election year that has divided much of the country, we are providing you with a clear and simple choice this voting cycle.  To this end, we are proud to announce that the Workplace Privacy Report has been nominated for The Expert Institute’s Best Legal Blog Competition.

From a field of hundreds of potential nominees, the Workplace Privacy Report has received enough nominations to join one of the largest competitions for legal blog writing online today.  If you enjoy the Workplace Privacy Report, it is up to you, our readers, to follow the link below and vote!

To vote, simply click here!

We appreciate your readership and will continue to provide new and exciting content for you in the future.

EU Top Court Rules IP Addresses May Be Protected Personal Data

In a decision that could have a significant impact on online companies with European operations, the European Union’s (EU) top court ruled that Internet Protocol addresses (IP addresses) could, under certain circumstances, constitute protected data under EU data protection law (Breyer v. Bundesrepublik Deutschland, E.C.J., No. C-582/14, 10/19/16).  As most of us know, an IP address is a series of numbers allocated to a specific device (e.g., a computer or smartphone) by an Internet service provider. The IP address identifies the device and allows it to access the Internet.  IP addresses can be either static or dynamic.  Dynamic IP addresses change every time an electronic device connects to the Internet and are the more common of the two.
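
For readers less familiar with how this works in practice, the minimal Python sketch below is a hypothetical illustration (not drawn from the case record): it shows how a website operator can capture and store each visitor’s IP address in an ordinary access log, which is the kind of routine processing at issue in Breyer.

# Hypothetical illustration: a bare-bones web server that records each
# visitor's IP address, the kind of routine logging at issue in Breyer.
from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime, timezone

class LoggingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        visitor_ip = self.client_address[0]  # address assigned by the visitor's ISP
        timestamp = datetime.now(timezone.utc).isoformat()
        # Appending the IP to a log file is the "processing" that may require a
        # legitimate objective (or consent) if the address is personal data.
        with open("access.log", "a") as log:
            log.write(f"{timestamp} {visitor_ip} {self.path}\n")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Hello, visitor")

if __name__ == "__main__":
    HTTPServer(("", 8080), LoggingHandler).serve_forever()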

Directive 95/46/EC, commonly known as the “Directive,” sets out certain standards that EU members must adopt into national law in order to protect personal data. Consequently, if IP addresses are considered “personal data,” online companies (Facebook and Google, for example) would have to treat them in accordance with potentially restrictive data handling requirements.  Under the Directive, the processing of personal data (e.g., marketing or profiling) is lawful only if it is necessary “to achieve a legitimate objective pursued by the controller, or by the third party to which the data are transmitted, provided that the interest or the fundamental rights and freedom of the data subject does not override the objective.”

This case involves websites operated by the Federal Republic of Germany (“BRD”), which, like most website operators, records the IP addresses of visitors to its websites. Patrick Breyer sued the BRD, claiming that if IP addresses qualify as personal data under EU data protection law, the BRD would be required to obtain consent before processing such data.  Breyer alleged that the BRD’s retention of IP addresses could enable profiling of website users and other non-legitimate objectives.

The EU’s top court, the Court of Justice of the European Union (the “CJEU”), held that dynamic IP addresses could be considered personal data provided the website “has the legal means to identify the visitor with the help of additional information that the visitors’ internet service provider has.” Since this is generally the case with most providers, the Court held that dynamic addresses could potentially be considered protected personal data. While this case was decided under the Directive, it is important to note that the decision is consistent with the expanding concept of personal data under the General Data Protection Regulation (GDPR), which takes effect in May 2018.

However, in a material caveat, the CJEU stated that the federal German institutions running the websites in question “may have a legitimate interest in ensuring the continued functioning of their websites which goes beyond each specific use of their publicly accessible websites” when protecting their sites against online attacks. The case now returns to the German Federal Court of Justice, which will decide the matter based on the CJEU’s holding.


Defining IP addresses as personal data could, in certain circumstances, impose significant limitations on the storage and use of that information. Companies that seek to identify users through their IP addresses for marketing or other purposes should closely monitor continuing developments in this area and be prepared to address not only how they safeguard this data, but also what legitimate business reason they have for its collection.


Yelling at Your Smartphone Could Get You Fired!

Michael Schrage at Harvard Business Review warns his readers, “Stop swearing at Siri. Quit cursing Cortana,” arguing such behavior could soon be seen as just as destructive to an organization as ridiculing a subordinate. In the 1993 film Demolition Man, Sylvester Stallone’s character, John Spartan, received multiple tickets from a wall box that overheard him violate the “Verbal Morality Statute” during a conversation with a colleague [mature ears only, please!]. Spartan, who had been awakened from cryogenic sleep, was unaware of the dramatic technological changes that had taken place while he slept. We see technological change every day, but we may not be ready for the far-reaching implications machine learning and artificial intelligence (AI) will have on society and the workplace.

Schrage describes how adaptive bots enable devices to learn from each encounter they have with humans, including negative ones, such as cursing at Siri or slamming a smartphone down when it returns results for one restaurant while the user was searching for another. Faced with repeated interactions like these, the bot is likely to be adversely affected by the bad behavior and will fail to perform as intended. As companies leverage more of this technology to enhance worker productivity and customer interactions, employee abuse of bots will frustrate the company’s efforts and investment. That can lead to reduced profits and to employee discipline.

Employees are seeing some of this already with the use of telematics in company vehicles. Telematics and related technologies provide employers with a much more detailed view of their employees’ use of company vehicles, including the location, movement, status, and behavior of the vehicle and the employees. That detailed view results from the extensive, real-time reports employers receive concerning employees’ use of company vehicles. Employers can see, for example, when their employees are speeding, braking too abruptly, or swerving too sharply. With some applications, employers also can continually record the activity and conversations inside the vehicle, including when vehicle sensors indicate there has been an accident. It is not hard to see that increased use of these technologies can result in more employee discipline, but it also may make employees drive more carefully.
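
For readers curious what “braking too abruptly” looks like to a telematics system, the sketch below is a simplified, hypothetical rule in Python; the threshold and data format are illustrative assumptions, not any vendor’s actual specification.

# Hypothetical sketch: flag harsh-braking events from periodic speed samples,
# the kind of rule a telematics platform might apply before reporting driver
# behavior to an employer. Threshold and units are illustrative only.
from typing import Iterable, List, Tuple

HARSH_BRAKE_MPH_PER_SEC = 8.0  # assumed threshold, not an industry standard

def harsh_braking_events(
    samples: Iterable[Tuple[float, float]]  # (timestamp_seconds, speed_mph)
) -> List[Tuple[float, float]]:
    """Return (timestamp, deceleration) for intervals exceeding the threshold."""
    events = []
    previous = None
    for timestamp, speed in samples:
        if previous is not None:
            prev_time, prev_speed = previous
            elapsed = timestamp - prev_time
            if elapsed > 0:
                decel = (prev_speed - speed) / elapsed  # mph lost per second
                if decel >= HARSH_BRAKE_MPH_PER_SEC:
                    events.append((timestamp, decel))
        previous = (timestamp, speed)
    return events

# Example: a drop from 40 mph to 25 mph within one second is flagged.
print(harsh_braking_events([(0, 45.0), (1, 40.0), (2, 25.0), (3, 20.0)]))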

Just as employers can generate records of nearly all aspects of the use of their vehicles by employees, there surely are records being maintained about the manner in which individuals interact with Siri and similar applications. While those records likely are currently being held and examined by the providers of the technology, that may soon change as organizations want to collect this data for their own purposes. Employers’ access to such information could be significant.

As Mr. Schrage argues, making the most of new AI and machine learning technologies requires that the users of those technologies be good actors. In short, workers will need to be “good” people when interacting with machines that learn; otherwise, it will be more difficult for the machines to perform as intended. Perhaps this will have a positive impact on the bottom line as well as on human interactions generally. But it also will raise interesting challenges for human resource professionals, who likely will need to develop and enforce policies designed to improve interactions between human employees and company machines.

We’ll have to see. But in the meantime, be nice to Siri!

How Much Do You Spend on Cybersecurity…and on What?

According to an International Data Corporation (IDC) forecast, spending on security-related hardware, software, and services will eclipse $100 billion by 2020. However, consulting company NTT Com Security recently surveyed 1,000 executives and found that only about half reported having a formal plan to respond to a data breach. Benjamin Franklin wisely noted that “an ounce of prevention is worth a pound of cure,” but he also reminded us that “by failing to prepare, you are preparing to fail.”

According to the IDC report, the banking industry is forecast to make the largest investment in security for 2016. This makes some sense – that is where the money is. But there is significant value and opportunity in other data that companies should consider when evaluating their data security spend.

For some, value is in access to data, not necessarily the data itself. According to a recent post by my colleague, Damon Silver, ransomware attacks have increased four-fold from just a year ago – now estimated to be 4,000 attacks reported per day. These criminals often do not want the business’ data, but prefer to extract significant dollars from companies by preventing the businesses from accessing their own data.

Of course, there are steps companies can take to help prevent these incidents. But if reports about the number of these attacks are accurate, it seems few businesses have taken those steps, and those that have are not having much success.

For those that have been attacked, there is a range of issues to address, and quickly: what should be done first; how can the business continue to operate; which vendors and which law enforcement contacts can help; is there insurance coverage; do the criminals possess the company’s information, and how much; and what are the legal obligations, including notification?

Data is power and can be used to influence. It is neither identity theft nor the desire to extract a few Bitcoins that is behind the hacking and release of emails about Hillary Clinton. Obviously, these bad actors want to harm the presidential candidate, and they have been somewhat successful in influencing the election. If there is one thing we can learn from the current presidential election, it is that data breach prevention and preparedness is not just about credit cards and Social Security numbers.

Though on a different scale, breaches exposing sensitive email or other communications – such as high-level strategy discussions among C-suite members, communications that suggest systemic discriminatory practices, or detailed labor-management strategies – can have significant implications for a company’s market position and profitability. Consider that the Ashley Madison breach did not just expose potential cheaters. The hackers also disclosed company emails (at least 12.7 gigabytes of them), which included sensitive computer code and worker salary data, furthering the effort to bring the company down.

Increased investment and vigilance in preventing attacks and releases of sensitive data are coming. But a steady drumbeat of security professionals and others continues to warn businesses that cyber attacks are not a matter of if, but when. Recognizing that no system of security is perfect, as spending on data security continues to rise, a significant portion of that spending ought to go toward breach preparedness and response planning.

DoD Updates Cyber Incident Reporting Rule

On October 4, 2016, a final rule was published in the Federal Register which implements statutory requirements for Department of Defense (DoD) contractors and subcontractors to report cyber incidents that result in an actual or potentially adverse effect on a covered contractor information system or covered defense information residing therein, or on a contractor’s ability to provide operationally critical support.

The final rule responds to public comments on the interim final rule published on October 2, 2015, and updates DoD’s Defense Industrial Base (DIB) Cybersecurity (CS) Activities.  The mandatory reporting requirements apply to all forms of agreements between DoD and DIB companies (contracts, grants, cooperative agreements, other transaction agreements, technology investment agreements, and any other type of legal instrument or agreement), and the revisions are part of DoD’s effort to establish a single reporting mechanism for such cyber incidents on unclassified DoD contractor networks or information systems.  Importantly, reporting under this rule does not abrogate the contractor’s responsibility for any other cyber incident reporting requirement to which the contractor may be subject (e.g., FTC requirements or state breach notification laws).

The final rule includes new definitions of covered contractor information system and covered defense information.  Covered contractor information system means an unclassified information system that is owned or operated by or for a contractor and that processes, stores, or transmits covered defense information.  Covered defense information means unclassified controlled technical information or other information that requires safeguarding or dissemination controls pursuant to and consistent with law, regulations, and Government-wide policies, and is: (1) marked or otherwise identified in an agreement and provided to the contractor by or on behalf of the DoD in support of the performance of the agreement; or (2) collected, developed, received, transmitted, used, or stored by or on behalf of the contractor in support of the performance of the agreement.

A foundational element of the mandatory reporting requirements, as well as the voluntary DIB CS program, is the recognition that the information being shared between the parties includes extremely sensitive information that requires protection.  The final rule is meant to permit the sharing of information, including cyber threat information, and thereby provide greater insights into the hostile activity targeting the DIB.

Organizations that do business with the Government must familiarize themselves with this final rule, as well as other regulations governing the information they process, store, or transmit.

HHS Issues Cloud Computing Guidance Which Is Helpful To All Users of Cloud Services

Last week, the Department of Health and Human Services’ Office for Civil Rights (OCR) provided guidance for HIPAA covered entities and business associates that use or want to use cloud computing services involving protected health information (PHI). Covered entities and business associates seeking cloud services often have many concerns regarding HIPAA compliance, and this guidance helps to address some of those concerns. The guidance also will help cloud service providers (CSPs) understand some of their obligations when serving the vast health care sector. Frankly, this guidance is helpful for any entity that desires to use cloud services to store, transfer or otherwise process sensitive information, including personal information. We summarized some of the key points in the guidance below.

CSPs that only store PHI and provide “no-view” services are not subject to HIPAA, right?

Wrong. OCR reminds everyone that when a covered entity engages a CSP to create, receive, maintain, store, or transmit ePHI on its behalf, the CSP is a business associate under HIPAA.  Likewise, when a business associate subcontracts with a CSP for similar services, the CSP is also a business associate.

Practically, however, with regard to no-view services, CSPs and their HIPAA-covered customers can take advantage of the flexibility and scalability built into the HIPAA rules. OCR’s guidance points out that when a CSP is providing only no-view services, certain Security Rule requirements may be satisfied for both parties through the actions of one of the parties. For example, certain access controls, such as unique user identification, may be the responsibility of the customer (when the customer has sole access to ePHI), while others, such as encryption, may be the responsibility of the CSP.  Thus, the parties will have to review these issues carefully and modify the agreements accordingly.

Is this true even if the CSP processes or stores only encrypted ePHI and lacks an encryption key for the data?

Yes. Accordingly, the covered entity (or business associate) and the CSP must enter into a HIPAA-compliant business associate agreement (BAA), and the CSP is both contractually liable under the BAA and directly liable for compliance with the applicable requirements of the HIPAA Rules. Note that the absence of a BAA does not change that the CSP is a business associate subject to the applicable requirements under the rules, but the HIPAA covered entity would not have contractual protection, such as breach of contract claims and indemnity.

Entities not covered by HIPAA may have other legal obligations that apply when they decide to share certain information with a CSP. For example, rules in California and Massachusetts generally require businesses to obtain written agreements from third parties to safeguard the personal information those third parties maintain in order to perform the desired services for the business.

So, if we use a CSP, we only have to worry about having a BAA in place?

Probably not. Use of cloud services likely will require the covered entity or business associate to perform a risk assessment to understand how those services will affect overall HIPAA compliance. Some of those compliance issues will be addressed in the BAA. However, contracting with a CSP often involves a “Service Level Agreement” or “SLA” which can raise other HIPAA compliance issues. For example, specific SLA provisions concerning system availability or back-up and data recovery may not be permissible under HIPAA. Entities not covered by HIPAA have a similar interest in ensuring that the cloud services will meet their needs with respect to these and other issues, such as return of data following termination of the SLA.

If data is encrypted in the cloud, is HIPAA satisfied?

No. Strong encryption certainly reduces risk to PHI, but it does not by itself maintain the data’s integrity and availability. That is, for example, encryption does not ensure that ePHI is not corrupted by malware, or that it will remain available to authorized persons during emergency situations. Further, encryption does not address other administrative and physical safeguards. For example, even when the parties have agreed that the customer is responsible for authenticating access to ePHI, the CSP may still need to implement appropriate internal controls to assure only authorized access to administrative tools that manage resources (e.g., storage, memory, network interfaces, CPUs).  The SLA and the BAA are important vehicles for confirming which entity is responsible for these requirements.
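
To make the distinction concrete, here is a minimal, hypothetical Python sketch (the function names and storage layout are illustrative assumptions, not anything from the OCR guidance): encryption protects a record’s confidentiality, while a separate integrity digest and a backup copy address integrity and availability.

# Hypothetical sketch: encryption alone protects confidentiality; integrity
# and availability require separate controls (a digest and a backup copy).
import hashlib
import shutil
from pathlib import Path

def store_record(ciphertext: bytes, primary: Path, backup: Path) -> str:
    """Store an already-encrypted record with an integrity digest and a backup copy."""
    digest = hashlib.sha256(ciphertext).hexdigest()  # integrity: detect corruption or tampering
    primary.write_bytes(ciphertext)                  # confidentiality comes from encryption, done elsewhere
    shutil.copy2(primary, backup)                    # availability: a second copy survives loss of the primary
    return digest

def verify_record(path: Path, expected_digest: str) -> bool:
    """Return True only if the stored ciphertext still matches its recorded digest."""
    return hashlib.sha256(path.read_bytes()).hexdigest() == expected_digest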

Can CSPs block our access to PHI?

No. Blocking a covered entity’s access to PHI would violate the Privacy Rule. Thus, for example, an SLA cannot contain a provision that allows the CSP to block access to ePHI to resolve a payment dispute. Note this may not be the case with arrangements not covered by HIPAA. Accordingly, owners of the data in those situations need to proceed with care when negotiating such SLAs and when disputing payment under them.

Do CSPs have to report “pings” and other unsuccessful security incidents?

In general, the answer is yes. Security Rule § 164.314(a)(2)(i)(C) provides that a BAA must require the business associate to report any security incidents of which it becomes aware. A security incident means the attempted or successful unauthorized access, use, disclosure, modification, or destruction of information or interference with system operations in an information system.  However, the Security Rule is flexible and does not prescribe the level of detail, frequency, or format of reports of security incidents, which may be worked out in the BAA.  Thus, the parties should consider different levels of detail, frequency, and formatting of reports based on the nature of the security incidents.

Does HIPAA permit PHI to be stored in the cloud outside of the United States?

In short, the answer is yes. But, as noted above, the covered entity or business associate needs to consider the applicable risks.


Cloud services can yield substantial cost savings and offer substantial convenience to users. CSPs also tend to offer a higher level of sophistication in the area of data security than most health care providers and their service providers. But the failure to think carefully about adoption and implementation of these services can create substantial exposure for the company. Significant exposure can result not only from a breach of PHI in the cloud environment, but also from the failure to appropriately consider and document the risks relating to that environment.


New York State Proposes Cybersecurity Regulation Impacting Banks, Insurance Companies & Other Financial Services Institutions

New York Governor Andrew M. Cuomo announced yesterday a new proposed regulation to address the growing threat posed by cyber-attacks. According to the State’s press release, the proposed regulation, which is subject to a 45-day notice and public comment period before final issuance, “aims to protect consumer data and financial systems from terrorist organizations and other criminal enterprises.”  In the past 18 months, several other states – including Connecticut, Nevada, and Washington – have also taken legislative action to promote greater protection against cyber-threats.

Once in place, New York’s regulation will require regulated organizations – specifically banks, insurance companies, and other financial services institutions regulated by the State’s Department of Financial Services – to: (1) establish a cybersecurity program; (2) adopt a written cybersecurity policy; (3) designate a Chief Information Security Officer; and (4) implement policies and procedures designed to ensure the security of information systems. The Department of Financial Services has published guidance fleshing out each of the foregoing requirements.

In the wake of Gov. Cuomo’s announcement, banks, insurance companies, and other covered financial services institutions that do business in New York should carefully review their current programs, policies, and procedures to evaluate what action, if any, they will need to take to comply with the new obligations contemplated by the State’s proposed regulation.


3 Essential Steps For Responding To Ransomware Attacks

Likely because most victims comply with their demands, the incidence of attacks by ransomware hackers has exploded in 2016. Guidance issued by the U.S. Department of Health and Human Services (“HHS”) in July notes that, on average, there have been 4,000 reported ransomware attacks per day thus far in 2016, far exceeding the average of 1,000 attacks per day last year.

What Is Ransomware?

Ransomware is a type of malware that denies the affected user access to his or her data, typically by encrypting it. Once the user’s data is encrypted, the hacker who launched the ransomware attack notifies him or her that, in order to obtain a key to decrypt the data, he or she must pay a ransom, often in a cryptocurrency such as Bitcoin.  Hackers sometimes impersonate government entities – like the IRS or FBI – in their ransom notes.


Can I Just Pay The Ransom And Move On?

While it may be tempting to do so, there are serious risks to this approach. Even if the ransom demanded by a ransomware hacker is not prohibitively expensive, an organization victimized by an attack must bear in mind that simply paying off the hacker is unlikely to make its problems go away.

As an initial matter, there is no guarantee that, upon receipt of the ransom payment, the hacker will provide a fully functional key that enables your organization to regain access to its data. Moreover, your organization must evaluate whether the ransomware attack triggered legal obligations under federal or state privacy laws, or other regulatory or contractual requirements.

What Are My Legal Obligations In The Event Of A Ransomware Attack?

Determining your organization’s legal obligations in responding to a ransomware attack requires a fact-specific inquiry. For organizations subject to HIPAA, for example, HHS’s guidance indicates that a ransomware attack is presumed to be a breach triggering HIPAA obligations unless the affected organization can demonstrate that there is a low probability that protected health information (“PHI”) has been compromised.  This low-probability analysis, HHS instructs, should include consideration of the following four factors, among others: (1) the nature of the PHI involved, including the types of identifiers and the likelihood of re-identification; (2) the unauthorized person who used the PHI or to whom the disclosure was made; (3) whether the PHI was actually acquired or viewed; and (4) the extent to which the risk to the PHI has been mitigated.


Organizations that are not subject to HIPAA must also assess their legal obligations in the wake of a ransomware attack, such as those imposed by the Gramm-Leach-Bliley Act or under state law. Under the data breach laws of certain states – such as New Jersey, Connecticut, Florida, Kansas, and Louisiana – unauthorized access to personal information constitutes a breach, even absent evidence that the personal information accessed was actually acquired.  Organizations whose affected employees or consumers work or reside in these states thus face increased risk that a ransomware incident will trigger breach notification obligations.

Additionally, during some ransomware attacks, hackers do not simply block the user’s access to its data, but also exfiltrate that data to external locations, and/or destroy or alter it. Accordingly, organizations subject to the data breach laws of any state may be required to take certain actions in the event of a ransomware incident.

What Should I Do After I Discover A Ransomware Attack?

If you believe your organization has been victimized by a ransomware attack, you should proceed as follows, carefully documenting each of the steps laid out below:

ONE: Notify your cyber liability insurer. This step is essential not only to ensure applicable coverage, but also because your insurance contact will likely be able to provide valuable early-stage guidance, such as on retention of qualified data security professionals to investigate the ransomware incident, and implementation of appropriate measures to mitigate existing and future risk.

TWO: Investigate the incident. Your internal or outside data security professionals should immediately launch (and document) an investigation of the incident. This investigation should include, at a minimum, analysis of the following (a simplified triage sketch follows the list):

  • When the incident occurred.
  • The methods the hackers used to carry out the attack.
  • Which of your systems were affected.
  • The nature of the data affected – e.g., was PHI or personal information accessed or acquired. (Most state breach notification laws define personal information as the affected individual’s full name, or first initial and last name, in combination with any of the following data elements: (i) social security number; (ii) government identification card number; or (iii) account number or credit / debit card number with any required security code, access code, or password.)
  • The states in which the individuals whose data was affected work or reside.
  • Whether there is evidence that the affected data was exfiltrated to the attacker’s servers, or elsewhere.
  • Whether the attack is completed or ongoing; and, if the latter, whether additional systems have been compromised.
  • What mitigation measures were and are in place. For example:
    • Were the affected files encrypted and, if so, is there evidence that the hackers successfully decrypted those files.
    • What data backup, disaster recovery, and/or data restoration plans did you have in place.
    • What post-discovery steps did you take to prevent continued or future acquisition, access, use, or disclosure of the compromised data.
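
As one illustration of the kind of triage the items above contemplate, the minimal Python sketch below is a hypothetical example (not drawn from any agency guidance; the extension list and paths are assumptions): it inventories recently modified files carrying extensions commonly appended by ransomware, which can help establish when the incident occurred and which systems were affected.

# Hypothetical triage sketch: list recently modified files with extensions
# commonly appended by ransomware, to help document the timeline and scope
# of an incident. Extensions and the scanned path are illustrative only.
import os
import time
from pathlib import Path

SUSPECT_EXTENSIONS = {".locked", ".encrypted", ".crypt", ".crypted"}  # illustrative
LOOKBACK_SECONDS = 7 * 24 * 60 * 60  # look back one week

def find_suspect_files(root: str):
    """Yield (path, modified_time) for recently changed files with suspect extensions."""
    cutoff = time.time() - LOOKBACK_SECONDS
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = Path(dirpath) / name
            try:
                mtime = path.stat().st_mtime
            except OSError:
                continue  # file disappeared or is unreadable; note it and move on
            if mtime >= cutoff and path.suffix.lower() in SUSPECT_EXTENSIONS:
                yield path, time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(mtime))

if __name__ == "__main__":
    for path, modified in find_suspect_files("/data"):  # "/data" is a placeholder path
        print(f"{modified}  {path}")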

THREE: Consult legal counsel.  As discussed above, ransomware attacks may trigger obligations under federal and state privacy laws, such as HIPAA, the Gramm-Leach-Bliley Act, and state breach notification laws.  They may also require an affected organization to comply with other regulatory and contractual requirements, and to communicate with government agencies like the FBI, U.S. Secret Service, or state attorneys general offices.  Consulting an experienced attorney upon discovery of a ransomware attack will help ensure that your organization complies with applicable legal requirements, thereby controlling the costs inflicted by the attack to the fullest extent possible.

No Harm, No Foul (And No Class Action Lawsuit): TCPA Class Action Dismissed For Failure to Allege Harm

Earlier this month, United States District Court Judge Peter Sheridan dismissed a class action brought against Work Out World (“WOW”) under the Telephone Consumer Protection Act (TCPA).  In doing so, Judge Sheridan relied on the recent decision by the United States Supreme Court in Spokeo, Inc. v. Robins.

The named plaintiff, Norreen Susinno, filed a class action complaint against WOW alleging WOW negligently, knowingly and/or willfully contacted the plaintiffs on their cellular telephones in violation of the TCPA and thereby invaded their privacy.  Ms. Susinno sought to certify a nationwide class of all persons who, in the preceding four years, had received telephone calls from WOW which were made with the use of an automatic telephone dialing system and/or used an artificial or prerecorded voice.

On June 10, 2016, WOW filed a motion to dismiss the complaint. Following a hearing on the motion to dismiss, Judge Sheridan granted WOW’s motion and dismissed the matter with prejudice.

Although Ms. Susinno filed an appeal of the district court’s decision, the decision may be very helpful to companies that are looking for various arguments to dispose of and otherwise defend against class claims, particularly where the alleged harm at issue is negligible, to the extent there is any harm at all.

For additional insight regarding this case, please see our related post on our Employment Class and Collective Action Update.

Sharing of Passwords Under Certain Circumstances Unlawful

Many companies have experienced the departure of an employee and the elimination of that former employee’s access to the company’s computers and networks. In the recent case of USA v. Nosal, D.C. No. 3:08-cr-00237-EMC-1 (July 5, 2016), the Ninth Circuit Court of Appeals was presented with the following facts:  Nosal, a former employee of Korn/Ferry, departed and launched a competing entity.  When Nosal left the company, the company revoked his computer access credentials.  After his departure, Nosal was nevertheless able to continue accessing the company’s confidential and proprietary information because his former secretary provided him with her database access credentials.  In Nosal, the question for the court was whether the jury properly convicted David Nosal of the crime of conspiracy under the Computer Fraud and Abuse Act (“CFAA”) for accessing and downloading information from the company’s database “without authorization.”  In a 2-1 decision, the Court held that Nosal indeed violated the criminal provisions of the CFAA even though he did not himself access and download the information.

The CFAA prohibits access to a computer or computer system by those who either exceed authorized access or are not authorized users.  18 U.S.C. § 1030.  The applicable section of the CFAA addressed in the Nosal case provides that:

Whoever . . . knowingly and with intent to defraud, accesses a protected computer without authorization, or exceeds authorized access, and by means of such conduct furthers the intended fraud and obtains anything of value . . . shall be punished . . . .

The prosecution successfully argued that after Nosal left the company, he lacked any rights to use the company’s network.  Because he lacked rights to access the network, the use of the secretary’s login credentials violated the CFAA’s ban on access “without authorization.” The court found that Nosal violated the CFAA because he “knowingly and with intent to defraud blatantly circumvented the affirmative revocation of his computer access.  This access falls squarely within the CFAA’s prohibition on access ‘without authorization’ and thus we affirm Nosal’s conviction for violations of . . . the CFAA.”

But, what about the fact that a person who did have authorization – Nosal’s secretary – granted Nosal permission to access the database?  On this point, the court stated that access:

‘without authorization’ is an unambiguous, non-technical term that, given its plain and ordinary meaning, means accessing a protected computer without permission. This definition has a simple corollary: once authorization to access a computer has been affirmatively revoked, the user cannot sidestep the statute by going through the back door and accessing the computer through a third party. Unequivocal revocation of computer access closes both the front door and the back door.

The court further stated that an “employee could willy nilly give out passwords to anyone outside the company – former employees whose access had been revoked, competitors, industrious hackers, or bank robbers who find it less risky and more convenient to access accounts via the Internet rather than through armed robbery.”

As a result of this decision, some privacy groups have expressed concern that the court’s ruling could make it easier to prosecute people for ordinary password sharing, such as when a husband logs in to his wife’s Facebook account with her credentials and permission, or into her email account to print a boarding pass.

However, the majority addressed this concern head on, stating that “hypotheticals about the dire consequences of criminalizing password sharing. . . miss the mark in this case.  This case is not about password sharing,” and noted that the case “bears little resemblance to asking a spouse to log in to an email account to print a boarding pass.”

While this decision involved a criminal prosecution, with which most companies would not be involved, it is still worthy of consideration for employers.  Many employers have some form of agreement in place that would make accessing the company’s database after termination a violation.  In light of Nosal, it would be prudent for a company also to include in its policies and agreements what is seemingly obvious: a prohibition on current employees providing their passwords to former employees.  At least with this statement in writing, the company will have (1) a basis upon which to take appropriate disciplinary action – including termination – against the current employee who provided the password to a former employee, and (2) the ability to commence a civil action against the former employee under the CFAA.