In the face of seemingly daily news reports of company data breaches, and mounting legislative efforts at both the state and federal levels to enact laws safeguarding personal information maintained by companies, employers should be asking whether they should implement privacy policies to protect the personal information they maintain on their employees.

To date, there is no all-encompassing federal privacy law. Rather, there are several federal laws that each touch upon some aspect of protecting personal or private information collected from individuals, such as the Children’s Online Privacy Protection Act (giving parents control over the information collected from their children online); the Federal Trade Commission Act (pursuant to which the FTC has sought enforcement against companies that failed to follow their own privacy policies relating to consumers); the Gramm-Leach-Bliley Act (requiring financial institutions, such as banks, to protect consumer financial information); the Health Insurance Portability and Accountability Act of 1996 (requiring covered entities to protect individually identifiable health information); and the Americans with Disabilities Act and the Family and Medical Leave Act (requiring confidentiality of employee medical information obtained by the employer).

State legislatures have likewise taken a piecemeal approach to the problem, with some states mandating the protection of Social Security numbers, credit card information, and consumer financial information, and the securing of personally identifiable information (usually aimed at preventing identity theft). Additionally, forty-seven (47) states now have laws imposing notification and other requirements when a data breach occurs. While only a handful of states explicitly require a written privacy policy (such as Connecticut, when collecting Social Security numbers, and Massachusetts, in connection with a written information security program), the overwhelming majority of states implicitly require privacy policies by requiring the security of personal information (California, for example, now requires encryption) and notification when a breach of personal information has occurred. As such, where companies are required to notify affected individuals of a breach, they are implicitly required to protect the information to prevent such a breach. The first step in assembling that protective armor is to institute a privacy policy.

Employers maintain various types of personally identifiable information on their employees, including, but not limited to: names, dates of birth, Social Security numbers, addresses, telephone numbers, financial information (such as bank account numbers and credit/debit card numbers), email addresses and passwords, driver’s license, state-issued identification, and passport numbers, health insurance numbers, biometric data, personally identifiable information on an employee’s spouse and/or children (most commonly contained in benefit enrollment forms), and any other information maintained about an individual that could be used to identify him or her or to obtain access to an online account.

Employer privacy policies should, at a minimum, address: (1) the types of personal information (such as that listed above), whether in electronic or paper format, obtained and maintained regarding employees and their family members; (2) where the information is maintained/stored; (3) how the information is protected, both while being maintained and when being transferred from the employee to the employer, between the employer’s systems/departments, and outside of the employer’s organization (such as to a third-party vendor); (4) who has access to the information, including any outside vendors who perform personnel-related services for the employer; (5) the effective date of the policy; and (6) the individual within the organization responsible for compliance with the policy.

Additionally, employers should consider training their employees on the policy. Employees who handle private information in the course of their employment should be trained on the contents of the policy; the importance of maintaining the privacy of the information; the methods to be used to protect such information; limiting disclosure of the information to that necessary for the duties they perform; and what to do when a suspected breach of the information has occurred. The general employee population should also be trained on the contents of the policy; the importance of maintaining the privacy of the information; and what to do if an employee suspects or has knowledge that the information has been breached.

Recognizing the growing number of connected and interconnected devices, a bipartisan group of Senators recently introduced a bill which would convene a working group of Federal stakeholders to provide recommendations to Congress on how to appropriately plan for and encourage the proliferation of the Internet of Things (IoT).
The Developing Innovation and Growing the Internet of Things Act (DIGIT Act) would require the working group to examine the IoT for current and future spectrum needs, the regulatory environment (including identification of sector-specific regulations, Federal grant practices, and budgetary or jurisdictional challenges), consumer protection, privacy and security, and the current use of technology by Federal agencies and their preparedness to adopt it in the future.
While the working group would seek representatives from the Department of Transportation (DOT), the Federal Communications Commission (FCC), the Federal Trade Commission (FTC), the National Science Foundation, the Department of Commerce, and the Office of Science and Technology, the working group would also be required to consult with non-governmental stakeholders, including: (i) subject matter experts; (ii) information and communications technology manufacturers, suppliers, and vendors; (iii) small, medium, and large businesses; and (iv) consumer groups.  The findings and recommendations of the working group would be submitted to the appropriate committees of Congress within one year of the bill’s enactment.
Additionally, the DIGIT Act would also direct the FCC, in consultation with the National Telecommunications and Information Administration, to conduct its own study to assess the current and future spectrum needs of the IoT. The FCC would similarly have one year after the enactment of the Act to submit a report including recommendations as to whether there is adequate licensed and unlicensed spectrum availability to support the growing IoT, what regulatory barriers may exist, and what the role of licensed and unlicensed spectrum is in the growth of the IoT.
According to the bill, estimates indicate more than 50 billion devices will be connected by the year 2020, with the IoT having the potential to generate trillions of dollars in economic opportunity.  The IoT will also allow businesses across the country to simplify logistics, cut costs, and pass savings on to consumers.  Believing that the United States leads the world in the development of technologies that support the IoT, and that this technology may be implemented by the U.S. Government to better deliver services to the public, the bill’s sponsors introduced the DIGIT Act following a previous Senate Resolution (Senate Resolution 110, 114th Congress) calling for a national strategy for the development of the IoT.

The Consumer Financial Protection Bureau (“CFPB”) gave the fintech online payment sector a “wake-up call” with an enforcement action against Dwolla, Inc. (“Dwolla”), a Des Moines start-up digital payment provider.

The CFPB alleged that Dwolla misrepresented how it was protecting consumers’ data. Dwolla entered into a Consent Order to settle the CFPB charges and agreed to pay a $100,000 penalty and to change and improve its current security practices.  The CFPB never alleged that Dwolla had breached any consumer data.  According to the CFPB, Dwolla “failed to employ reasonable and appropriate measures to protect data obtained from consumers from unauthorized access,” while telling consumers that the information was “securely encrypted and stored.”  Dwolla had over 650,000 customer accounts and was transferring as much as $5 million a day in 2015.

In a nutshell, the CFPB alleged that Dwolla’s representations regarding “securely encrypted and stored data,” were inaccurate for a number of specific reasons including:

  • Failing to implement appropriate data security policies and procedures until at least September 2012,
  • Failing to implement a written data security plan until at least October 2013,
  • Failing to conduct adequate risk assessments,
  • Failing to use encryption technology to properly safeguard consumer information,
  • Failing to provide adequate or mandatory employee training on data security, and
  • Failing to practice secure software development for consumer-facing applications.

In addition to the fine, Dwolla agreed to take preventative steps to address security concerns including:

  • Implementing a comprehensive data security plan,
  • Conducting data security risk assessments twice annually,
  • Designating a qualified individual to be accountable for data security issues,
  • Implementing appropriate data security policies and procedures,
  • Implementing an appropriate and precise method of customer identity authentication before any funds transfer,
  • Adopting specific procedures for the selection and retention of service providers capable of maintaining security practices,
  • Conducting regular and mandatory data security training, and
  • Obtaining an annual data security audit from an independent, third party acceptable to CFPB’s enforcement director.

The Consent Order will remain in effect for five (5) years.

This is the CFPB’s first enforcement action directly related to data security and appears to expand the CFPB’s jurisdiction into this arena. In the CFPB’s press release, Director Richard Cordray stated, “With data breaches becoming commonplace and more consumers using these online payment systems, the risk to consumers is growing.  It is crucial that companies put systems in place to protect this information and accurately inform consumers about their data security practices.”

This first enforcement action by the CFPB in the data security arena appears to be a direct response to growing concern about the lack of regulation of fintech digital payment firms. The enforcement action is also a welcome signal to traditional banks, which have argued that the fintech sector has not received nearly the level of oversight or enforcement that they have.  It appears regulators are attempting to strike the right balance: avoiding a “heavy-handed” approach that would squelch the technical advances that have made finance more convenient for consumers, while still ensuring an adequate level of consumer protection.

Whether Google Docs, Dropbox, or some other file sharing system, employees, especially millennials and other digital natives, are increasingly likely to set up personal cloud-based document sharing and storage accounts for work purposes, usually with well-meaning intentions, such as convenience and flexibility. Sometimes this is done with explicit company approval, sometimes it is done with tacit awareness by middle management, and often the employer is unaware of this activity.

When an employee quits or is terminated, however, that account, and the business documents it contains, may be locked away in an inaccessible bubble. Worse, the employee could access trade secrets and other information stored in the cloud to unfairly compete. For example, in 2012, the computer gaming company Zynga sued a former employee for uploading trade secrets onto the employee’s personal Dropbox account before leaving to work for a competitor. At a minimum, it may take time to recover the information or obtain the user name and password from the former employee.  Storage of proprietary information, especially personally identifiable information (PII) on personal cloud accounts also increases the risk of a company data breach if the information is hacked.  Finally, allowing business documents to be stored outside of the system can also create headaches when enacting a litigation hold or responding to electronic discovery requests in litigation. What should employers be doing now, to address this trend?

Interested in reading more? Please see the full post on the Non-Compete and Trade Secrets Report.

Earlier today, the European Commission (the Commission) issued a draft “adequacy decision” as well as the texts that will constitute the EU-U.S. Privacy Shield (the Privacy Shield). This includes the Privacy Shield Principles companies have to abide by, as well as written commitments by the U.S. Government on the enforcement of the arrangement, including assurance on the safeguards and limitations concerning access to data by public authorities.

An “adequacy decision” is a decision, adopted by the Commission, which establishes that a non-EU country ensures an adequate level of protection of personal data by reason of its domestic law and international commitments.  The practical effect of such a decision is that personal data can flow from the 28 EU Member States (and the three European Economic Area member countries: Norway, Liechtenstein and Iceland) to that third country, without any further restrictions.  Once adopted, the Commission’s adequacy finding establishes that the safeguards provided when data are transferred under the new Privacy Shield are equivalent to data protection standards in the EU.

In the Commission’s press release, Commissioner Jourová said:

Protecting personal data is my priority both inside the EU and internationally. The EU-U.S. Privacy Shield is a strong new framework, based on robust enforcement and monitoring, easier redress for individuals and, for the first time, written assurance from our U.S. partners on the limitations and safeguards regarding access to data by public authorities on national security grounds. Also, now that President Obama has signed the Judicial Redress Act granting EU citizens the right to enforce data protection rights in U.S. courts, we will shortly propose the signature of the EU-U.S. Umbrella Agreement ensuring safeguards for the transfer of data for law enforcement purposes. These strong safeguards enable Europe and America to restore trust in transatlantic data flows.

As we previously discussed, the Commission and the U.S. Department of Commerce reached agreement on February 2, 2016 for a new framework for transatlantic exchanges of personal data for commercial purposes, known as the Privacy Shield.  The Privacy Shield reflects the requirements set out by the European Court of Justice in its October 2015 ruling in Schrems which declared the old Safe Harbor framework invalid.

What are the main differences between the “Safe Harbor” arrangement and the EU-U.S. Privacy Shield?

According to the Commission, the Privacy Shield provides stronger obligations on companies in the U.S. to protect the personal data of Europeans. It requires stronger monitoring and enforcement by the U.S. Department of Commerce (DoC) and Federal Trade Commission (FTC), including through increased cooperation with European Data Protection Authorities (DPAs).

The Privacy Shield will include:

  • Strong obligations on companies and robust enforcement: the new arrangement will be transparent and contain effective supervision mechanisms to ensure that companies respect their obligations, including sanctions or exclusion if they do not comply. The new rules also include tightened conditions for onward transfers to other partners by the companies participating.
  • Clear safeguards and transparency obligations on U.S. government access: the U.S. government has given the EU written assurance that any access by public authorities for national security purposes will be subject to clear limitations, safeguards and oversight mechanisms, preventing generalized access.  The U.S. will also establish a redress possibility in the area of national intelligence for Europeans through an Ombudsperson mechanism within the Department of State, who will be independent from national security services. The Ombudsperson will follow up on complaints and enquiries from individuals and inform them whether the relevant laws have been complied with. These written commitments will be published in the U.S. Federal Register.
  • Effective protection of EU citizens’ rights with several redress possibilities: Complaints have to be resolved by companies within 45 days. A free-of-charge Alternative Dispute Resolution mechanism will be available. EU citizens can also go to their national Data Protection Authorities, which will work with the DoC and FTC to ensure that unresolved complaints by EU citizens are investigated and resolved. If a case is not resolved by any of the other means, as a last resort there will be an enforceable arbitration mechanism. Moreover, companies can commit to comply with advice from European DPAs. This is obligatory for companies handling human resources data.
  • Annual joint review mechanism: that will monitor the functioning of the Privacy Shield, including the commitments and assurance as regards access to data for law enforcement and national security purposes.

How will the Privacy Shield work?

U.S. companies will register to be on the Privacy Shield List and self-certify that they meet the requirements.  This procedure must be repeated each year. The U.S. Department of Commerce will monitor and actively verify that companies’ privacy policies are in line with the relevant Privacy Shield principles and are readily available. The U.S. will maintain an updated list of current Privacy Shield members and remove companies that have left the arrangement. The DoC will ensure that companies that are no longer members of the Privacy Shield continue to apply its principles to personal data received while they were in the Privacy Shield, for as long as they continue to retain such data.

What’s Next?

A committee composed of representatives of the Member States will be consulted and the EU Data Protection Authorities (Article 29 Working Party) will give their opinion, before a final decision is issued. In the meantime, the U.S. side will make the necessary preparations to put in place the new framework, monitoring mechanisms, and the new Ombudsperson mechanism.

The Commission has encouraged companies to begin their preparations so as to be in a position to join the Privacy Shield as soon as possible after it is in place following the adoption of the Commission decision.

The Privacy Shield requires action from many actors:

  • U.S. companies must fulfill their obligations under the framework in the full knowledge that it will be strictly enforced and they will be sanctioned if they are non-compliant.  Specifically, the Privacy Shield requires commitment to the following privacy principles: 1) Notice, 2) Choice, 3) Security, 4) Data Integrity and Purpose Limitation, 5) Access, 6) Accountability for Onward Transfer, and 7) Recourse, Enforcement and Liability. The Commission also encouraged companies to opt for EU DPAs as their chosen avenue to resolve complaints under the Privacy Shield and to publish transparency reports on national security and law enforcement access requests concerning EU data they receive.
  • U.S. authorities are entrusted with overseeing and enforcing the framework, respecting the limitations and safeguards as far as access to data for law enforcement and national security purposes is concerned, and responding in a timely and meaningful manner to complaints by EU individuals about the possible misuse of their personal data;
  • EU DPAs play an important role in ensuring that individuals can effectively exercise their rights under the Privacy Shield, including by channeling their complaints to the appropriate U.S. authorities, triggering the Ombudsperson mechanism, assisting complainants in bringing their case to the Privacy Shield Panel, as well as exercising oversight over human resources data transfers; and
  • The Commission is responsible for making a finding of adequacy and reviewing it on a regular basis.

For additional information, please visit the Commission’s page dedicated to the Privacy Shield.

We will continue to update the status of the Privacy Shield as we await the final decision.

Earlier this month, the Office for Civil Rights (OCR) issued guidance on an individual’s right to access the individual’s health information. That an individual has a broad right to access has been recognized in the HIPAA privacy regulations since they became effective in 2003. OCR has found, however, that individuals are facing obstacles to accessing their health information, and believes this needs to change. To help covered providers, plans and business associates better understand the right to access, the agency issued a comprehensive set of frequently asked questions (FAQ). These FAQs address a number of access issues, but they also provide practical insight on some key points, one of which is summarized below.

In general, the FAQs address the scope of information covered by HIPAA’s access right, the very limited exceptions to this right, the form and format in which information is provided to individuals, the requirement to provide timely access to individuals, and the intersection of HIPAA’s right of access with the requirements for patient access under the HITECH Act’s Electronic Health Record (EHR) Incentive Program. In some cases, the guidance in the FAQs goes beyond just accessing health information. Consider the following FAQ:

What is a covered entity’s obligation under the Breach Notification Rule if it transmits an individual’s PHI to a third party designated by the individual in an access request, and the entity discovers the information was breached in transit?

If a covered entity discovers that the PHI was breached in transit to the designated third party, and the PHI was “unsecured PHI” as defined at 45 CFR 164.402, the covered entity generally is obligated to notify the individual and HHS of the breach and otherwise comply with the HIPAA Breach Notification Rule at 45 CFR 164, Subpart D. However, if the individual requested that the covered entity transmit the PHI in an unsecure manner (e.g., unencrypted), and, after being warned of the security risks to the PHI associated with the unsecure transmission, maintained her preference to have the PHI sent in that manner, the covered entity is not responsible for a disclosure of PHI while in transmission to the designated third party, including any breach notification obligations that would otherwise be required.  Further, a covered entity is not liable for what happens to the PHI once the designated third party receives the information as directed by the individual in the access request.

A couple of interesting points are made and clarified with this FAQ. One is that if an individual is warned about the risks of unsecured transmissions of PHI, but decides to proceed with the communication despite the warning, the covered entity is not responsible if there is a breach of the information while it is in transit. That is, no breach of unsecured PHI (but the covered entity still would have to consider state law). Second, after the covered entity fulfills the request of the individual and provides the PHI to a third party, the covered entity is no longer responsible.

So, as covered entities and business associates read through the new access guidance, they should be on the lookout for points like this, which can reduce costs and better manage risk.

The U.S. District Court for the Southern District of California recently granted Wilshire Consumer Capital’s (WCC) motion to deny class certification in a putative class action filed under the Telephone Consumer Protection Act (TCPA).
The named plaintiff, Alu Banarji, filed suit after receiving numerous telephone calls on her cell phone.  According to the Court, Ms. Banarji’s father, Sami, took out a loan with WCC and on the loan application he listed his daughter’s cell phone number as his own.  Ms. Banarji is the primary caregiver for her father.  When Mr. Banarji failed to make payment, WCC began calling the cell phone number he had listed on his loan application to inquire about the debt.  Ms. Banarji claims she had no involvement with her father’s loan and she repeatedly asked WCC to stop calling her cell phone.
Following some limited discovery, including the depositions of both Mr. Banarji and Ms. Banarji, WCC filed a Motion to Deny Class Certification under Fed. R. Civ. P. 23.  WCC’s Motion to Strike, filed at the same time, was denied as untimely.  The Court found the timing of WCC’s Motion to Deny Class Certification appropriate despite Ms. Banarji’s argument that the motion was premature and she should be permitted to conduct discovery on the certification issue.
In its Motion to Deny Class Certification, WCC challenged Ms. Banarji’s ability to meet the typicality requirement in Rule 23(a)(3).  In the Ninth Circuit, the typicality requirement is construed permissively and requires only that the representative’s claims be “reasonably coextensive with those of absent class members.”  However, the Court clarified that if unique defenses exist that threaten to divert the focus of the litigation to the detriment of the class as a whole, the typicality requirement is not satisfied.
Judge Roger T. Benitez found that although Ms. Banarji was probably annoyed by the calls, her circumstances are unique to her and perhaps to a small subset of the proposed class.  This is particularly true because Ms. Banarji’s phone number was given to WCC by her father; her father indicated that the phone number was in fact his own; and, given the family relationship, it is possible that Ms. Banarji’s father is a non-subscriber customary user of the phone line, which would give him the authority to consent to receiving robocalls on that line.  Judge Benitez found that the majority of the proposed class might suffer because Ms. Banarji would be engrossed in disputing WCC’s arguments regarding her individual case.
As such, the Court granted WCC’s Motion to Deny Class Certification, holding that Ms. Banarji’s claim is not typical of the proposed class’s claims.

Last week, California Attorney General Kamala D. Harris – who has been mentioned as a potential nominee to fill Justice Antonin Scalia’s recently vacated seat on the U.S. Supreme Court – issued the California Data Breach Report (Report).  The Report provides an analysis of the data breaches reported to the California AG from 2012 to 2015.

The Report details that nearly 50 million records of Californians have been breached, and that the majority of these breaches resulted from security failures.  In fact, the Report explains that nearly all of the vulnerabilities that enabled the breaches were exploited more than a year after a solution to address the vulnerability was publicly available.  According to Ms. Harris, “It is clear that many organizations need to sharpen their security skills, trainings, practices, and procedures to properly protect consumers.”

Malware and hacking, physical breaches, and breaches caused by error have been the three most common types of breaches. Of the three, malware and hacking have been by far the largest source of data breaches, accounting for 90% of all records breached.  Physical breaches, resulting from the theft or loss of unencrypted data on electronic devices, were the next most common, with health care entities and small businesses most heavily impacted.  Breaches caused by error – such as mis-delivery of email and inadvertent exposure of information on the public Internet – ranked third.  Government entities made half of all such errors.

Under California law, “[a] business that owns, licenses, or maintains personal information about a California resident shall implement and maintain reasonable security procedures and practices appropriate to the nature of the information, to protect the personal information from unauthorized access, destruction, use, modification, or disclosure.”  This requirement is important because the Report specifically states that an organization’s failure to implement all 20 of the controls set forth in the Center for Internet Security’s Critical Security Controls (the Controls) constitutes a lack of reasonable security.

The Report goes on to discuss numerous findings and provide an analysis of the breach types, data types, and industry sectors impacted.  The Report concludes with recommendations which include:

  1. Reasonable Security:  The Standard of Care for Personal Information.  Implementation of the Controls mentioned above as a minimum level of information security (available as Appendix A to the Report).
  2. Multi-Factor Authentication.  Organizations should make multi-factor authentication available on consumer-facing online accounts that contain sensitive personal information. This stronger procedure would provide greater protection than just the username-and-password combination for personal accounts such as online shopping accounts, health care websites and patient portals, and web-based email accounts.
  3. Encryption of Data in Transit. Organizations should consistently use strong encryption to protect personal information on laptops and other portable devices, and should consider it for desktop computers.  This is a particular imperative for health care, which appears to be lagging behind other sectors in this regard.
  4. Fraud Alerts.  Organizations should encourage individuals affected by a breach of Social Security numbers or driver’s license numbers to place a fraud alert on their credit files and make this option very prominent in their breach notices. This measure is free, fast, and effective in preventing identity thieves from opening new credit accounts.
  5. Harmonizing State Breach Laws.  State policy makers should collaborate to harmonize state breach laws on some key dimensions. Such an effort could reduce the compliance burden for companies, while preserving innovation, maintaining consumer protections, and retaining jurisdictional expertise.

While the Report and California’s existing law focus on protecting the personal information of California residents, it is important to remember that California has consistently been at the forefront of data security legislation.  In fact, California was the first state to enact a data breach notification law in 2003, and since that time 46 other states have followed suit.  As such, it would not be surprising if other states consider the recommendations in the Report and implement similar requirements.

As NCAA basketball tournament season approaches, employers may be wondering if they can monitor employees at work to see how much time they are spending checking their brackets, or for other purposes. There are many reasons companies monitor employees, including boosting productivity, dissuading cyber-slacking or social “not-working,” protecting trade secrets and confidential business information, preventing theft, avoiding data breaches, avoiding wrongful termination lawsuits, ensuring that employees are not improperly snooping themselves, complying with electronic discovery requirements, and generally dissuading improper behavior.

Excessive, clumsy, or improper employee monitoring, however, can cause significant morale problems and, worse, potentially create legal liability for invasion of privacy under statutory and common law.  With new technology, there are more methods of monitoring than ever before.  Each has different limitations under the law.  Here are the top contenders in the bracket:

  1. Monitoring work email communications. Pros: generally lawful, effective. Notice requirements exist in some states (e.g. CT, DE).
  2. Monitoring internet usage. Cons: Often misleading, can be expensive.
  3. Monitoring social media. Cons: May violate state law regarding social media passwords or common law.
  4. Accessing employee cloud-based internet accounts by accessing and obtaining user name and password from a work computer. Cons: Likely to violate the federal Stored Communications Act.
  5. Tracking employee whereabouts by GPS (either a phone app or a vehicle-based device). Cons: Morale issues, may be invasion of privacy. (An employee in CA recently sued and reached a settlement with her employer after she was terminated for uninstalling a company-required 24-hour tracking app on her phone).
  6. Tracking employees with a Radio Frequency Identification Device (RFID). Cons: Expensive, strange, morale issues, some states (WI, ND, MO) explicitly prohibit employers from implanting chips in employees.
  7. Motion Sensors. Cons: The Daily Telegraph, a London-based newspaper, recently reversed a decision to install motion sensors at desks after employees cried Big Brother. (The employer claimed it was just seeking to monitor how many shared desks were used and not used).
  8. Video. Pros: Extremely effective in loss prevention and investigation of bad acts. Cons: Some notice requirements. Avoid cameras in changing areas, locker rooms, etc.
  9. Audio. Pros: Also effective in obtaining and preserving certain types of evidence. State wire-tap laws apply.
  10. Physical searches. Pros: Sometimes necessary, little or no expense. Cons: May violate common law right of privacy depending on circumstances.
  11. Obtaining health or fitness information. Cons: May violate the Genetic Information Nondiscrimination Act (GINA) and other laws.
  12. Drug testing. Pros: Workplace safety; Cons: expense, tightly regulated in some states.
  13. Polygraphs. Cons: Restricted by federal law and many states.

Although new technologies may be up and coming, the Final Four of monitoring methods are probably email, video, audio, and physical searches, all of which have been around for quite a while.  Always review policies and applicable state and federal law before embarking on a monitoring program and remember to monitor the monitors!