Data Breach in Georgia Affecting Six Million Voters Adds to 2015 National Tally

The Georgia Secretary of State acknowledged that last month his office improperly disclosed Social Security numbers and other private information for more than 6,000,000 registered voters in Georgia due to a “clerical error.” Anyone in Georgia who is registered to vote (approximately 6.2M citizens) may be affected. The Secretary acknowledged that his office shares voter registration data monthly, upon request, with news media and political parties, as required by Georgia law. He indicated that due to a clerical error, twelve recipients of this data received a disk containing personal information, including Social Security numbers and driver’s license information, that should not have been provided. Two class-action lawsuits have been filed alleging significant damages as a result of the breach. Information regarding the breach became public upon the filing of the lawsuits.

Georgia’s identity theft law, enacted in 2005, requires certain private businesses and state and local government agencies to notify affected consumers after a breach is discovered. On November 19, the state of Georgia provided notice to affected persons, stating among other things that the Secretary of State’s office took immediate corrective action, including contacting the recipients of the personal information and asking them to return it. This breach is somewhat similar to a massive data breach reported in South Carolina in 2012 that exposed 3.8M Social Security numbers held by the South Carolina Department of Revenue. The state of South Carolina paid a credit monitoring company approximately $12M to provide credit monitoring for victims of that breach, a service apparently not being made available to affected Georgia voters. South Carolina lawmakers also earmarked an additional $25M in the budget for an extra year of credit protection and for upgrades to the state’s computer security.

According to the Identity Theft Resource Center (ITRC), there have been a total of 669 data breaches to date in 2015, exposing nearly 182M records. The annual total includes 21.5M records exposed in the attack on the U.S. Office of Personnel Management in June and 78.8M healthcare customer records exposed at Anthem in February. Of the data breaches to date in 2015, approximately 38.6% occurred in the business sector, 36% in the medical/healthcare sector, 9.1% in the banking/credit/financial sector, 8.5% in the government/military sector, and 7.8% in the education sector. By comparison, the ITRC tracked 783 breaches in 2014, up approximately 28% from 2013.

Data breaches that require notification under federal and state mandates, which may include even some inadvertent disclosures, continue to happen. It is true that not all such breaches can be prevented, but in addition to taking steps to prevent these incidents, businesses need to be prepared to respond quickly and thoroughly should such an unfortunate incident occur.

The Growing List of States Protecting Social Media Privacy

As we have previously reported, a growing list of jurisdictions has enacted social media privacy laws applicable to employers. The most recent state to join the list is Maine, bringing the total to 22 states with such measures.

Under Maine’s law, an employer may not:

  • Require or coerce an employee or applicant to disclose the password to a personal social media account;
  • Require or coerce an employee or applicant to access a personal social media account in the presence of the employer or its agent;
  • Require or coerce an employee or applicant to disclose any personal social media account information;
  • Require or cause an employee or applicant to add anyone to the list of contacts associated with a personal social media account;
  • Require or cause an employee or applicant to alter or change the settings of a personal social media account to allow a third party to view its content;
  • Discharge, discipline, or otherwise penalize (including the threat of same) an employee for refusing to disclose or provide access to a personal social media account as prohibited above;
  • Fail or refuse to hire an applicant for refusing to disclose or provide access to a personal social media account as prohibited above.

Importantly, Maine’s law, like many of the similar laws already enacted, does not prohibit or restrict an employer from requiring an employee to disclose personal social media account information the employer believes to be relevant to an investigation of employee misconduct or a workplace-related violation of laws, rules, or regulations – so long as the information accessed is used solely for purposes of the investigation or a related proceeding.

The prohibition on employer access to personal social media accounts began in 2012 and in the past 3 years has expanded to 21 additional states.  We expect this trend to continue elsewhere in 2016.

FCC Data Security Enforcement Continues

Demonstrating its continued commitment to data security enforcement, the Federal Communications Commission (FCC) recently announced that Cox Communications Inc., the nation’s third largest cable operator, agreed to pay $595,000 to resolve an investigation into whether the company failed to properly protect its customers’ personal information. The agreement ends the first data security enforcement action brought by the FCC against a cable operator.

The investigation by the FCC Enforcement Bureau determined that Cox’s electronic data systems were breached in 2014 by a hacker who pretended to be from Cox’s information technology department and convinced both a Cox customer service representative and a Cox contractor to enter their account IDs and passwords into a fake, or “phishing,” website. The user access information was then used to obtain customers’ personally identifiable information, which included names, addresses, email addresses, secret questions/answers, and PINs, and in some cases partial Social Security and driver’s license numbers, as well as Customer Proprietary Network Information (CPNI) of the company’s telephone customers.

Under the Communications Act, a cable operator shall not disclose personally identifiable information concerning any subscriber without the prior consent of the subscriber and shall take steps necessary to prevent unauthorized access to such information by a person other than the subscriber or cable operator.   Importantly, during its investigation, the FCC found Cox’s data security systems did not include readily available measures that might have prevented the use of the compromised credentials. Additionally, the company never reported the breach to the FCC’s data breach portal, as required by law.

According to Travis LeBlanc, Chief of the Enforcement Bureau: “Cable companies have a wealth of sensitive information about us, from our credit card numbers to our pay-per-view selections…. This investigation shows the real harm that can be done by a digital identity thief with enough information to change your passwords, lock you out of your own accounts, post your personal data on the web, and harass you through social media.”

The order and consent decree requires the company to identify and notify all affected individuals and to provide them with free credit monitoring services for one year. Further, Cox must improve its privacy and data security practices by: (i) designating a senior corporate manager who is a certified privacy professional; (ii) conducting privacy risk assessments; (iii) implementing a written information security program; (iv) maintaining reasonable oversight of third-party vendors; (v) implementing a more robust data breach response plan; and (vi) providing privacy and security awareness training to employees and third-party vendors.

In the past year, the FCC has taken three enforcement actions for violations of the Communications Act and Commission rules related to the protection of customer personal information, resulting in over $28 million in penalties.

This resolution, and the facts underlying the data incident, demonstrate not only the lengths to which hackers will go to obtain personal information, but also how easily the hacker was able to obtain IDs and passwords. As we have discussed, implementation of a written information security program, including prohibitions on sharing user access credentials (IDs and passwords) and employee training on data security, might well have prevented this incident.

Senate Passes Cybersecurity Law as the Struggle Between Data Security and Privacy Continues

The Cybersecurity Information Sharing Act, or CISA, passed the Senate this week by a vote of 74-21, but not without controversy. CISA would not establish a generally applicable federal standard for safeguarding personal information, nor would it enact a federal breach notification requirement. Rather, if signed into law, CISA would, among other things, create a framework for governmental entities and private entities to share cyber threat information for cybersecurity purposes, in order to help protect against the massive data breaches that have hit the federal government and major U.S. companies. A companion bill has been passed in the House and, if the two are successfully reconciled, the bill will be sent to President Obama, who has indicated support for it.

The controversy surrounding CISA relates primarily to the belief that the bill’s measures seeking greater data security come at too great a cost – privacy. Advocates of CISA believe, in general, that the sharing of cyber threat indicators or defensive measures for cybersecurity purposes and the monitoring of information systems are needed to address attacks on these systems. Privacy advocates and others reject that view, arguing in essence that this move toward greater data security jettisons privacy protections. That is, under the guise of security, companies would be free to monitor and share personal data with the federal government, enabling more expansive data collection and surveillance without regard to privacy protections under other laws.

Section 104 of CISA would permit a private entity, for a cybersecurity purpose, to monitor its own information system, the information system of another entity, including a Federal entity (with that other entity’s written consent), or the information stored on those systems. A cybersecurity purpose means “the purpose of protecting an information system or information that is stored on, processed by, or transiting an information system from a cybersecurity threat or security vulnerability.”

Subsection (c) of Section 104 also would permit an entity to share with, or receive from, any other entity or the Federal Government a “cyber threat indicator” or “defensive measure.” Under the bill, a cyber threat indicator is defined to mean an “action on or through an information system that may result in an unauthorized effort to adversely impact the security, availability, confidentiality, or integrity of an information system or information that is stored on, processed by, or transiting an information system.” However, the entity receiving the cyber threat indicator or defensive measure would be required to comply with otherwise lawful restrictions placed on the sharing or use of that indicator or measure by the sharing entity. In addition, CISA would require that:

An entity monitoring an information system, operating a defensive measure, or providing or receiving a cyber threat indicator or defensive measure … [must] implement and utilize a security control to protect against unauthorized access to or acquisition of such cyber threat indicator or defensive measure.

The bill goes on to provide that when sharing a cyber threat indicator, entities must either:

review such cyber threat indicator to assess whether such cyber threat indicator contains any information that the entity knows at the time of sharing to be personal information or information that identifies a specific person not directly related to a cybersecurity threat and remove such information; or

implement and utilize a technical capability configured to remove any information contained within such indicator that the entity knows at the time of sharing to be personal information or information that identifies a specific person not directly related to a cybersecurity threat.
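
For illustration only, here is a minimal sketch of what the second option – a “technical capability” configured to remove known personal information before sharing – might look like in practice. The field names and patterns below are hypothetical and are not drawn from the bill.

```python
import re

# Hypothetical patterns for personal information an entity "knows" about;
# a real capability would cover far more categories than these two.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # SSN-formatted strings
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),  # email addresses
]

def scrub_indicator(indicator: dict) -> dict:
    """Return a copy of a cyber threat indicator with matching PII redacted."""
    cleaned = {}
    for field, value in indicator.items():
        if isinstance(value, str):
            for pattern in PII_PATTERNS:
                value = pattern.sub("[REDACTED]", value)
        cleaned[field] = value
    return cleaned

# Hypothetical indicator about to be shared with another entity.
indicator = {
    "observed_activity": "Credential phishing email sent from attacker@example.com",
    "notes": "Message asked the recipient to confirm SSN 123-45-6789",
}
print(scrub_indicator(indicator))
```

As the sketch suggests, such a capability can only redact what it is configured to recognize, which is precisely the gap privacy advocates point to.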

Privacy advocates argue that this language leaves open the possibility that entities sharing cyber threat indicators might not “know” if the indicator contains personal information, thus weakening the privacy protection for personal information under this provision. However, Senate Select Committee on Intelligence (SSCI) Chairman Richard Burr (R-NC), sponsor of CISA, points to these provisions and others to refute the privacy concerns raised by opponents of the bill. In a press release, “Debunking Myths about Cybersecurity Information Sharing Act,” Sen. Burr argues, among other things, that under CISA:

The cyber threat information sharing is completely voluntary. Companies have the choice as to whether they want to participate in CISA’s cyber threat information sharing process, but all privacy protections are mandatory.

The debate surely will continue as the House and Senate reconcile their versions of the law. For now, businesses should continue to focus on their own efforts to safeguard personal and other confidential data, and be prepared in the event they experience a data breach.

Changes to California’s Data Breach Notification Requirements

On October 6, 2015, California Governor Jerry Brown signed three new laws that substantially alter and expand the state’s security breach notification requirements. The changes to California Civil Code sections 1798.29 and 1798.82, the Golden State’s laws requiring state agencies and private sector entities to provide notification of certain breaches of security, (i) provide a definition for encryption, (ii) establish new requirements for the content and form of breach notifications, and (iii) add license plate information gathered through automated license plate recognition (ALPR) systems to the definition of personal information subject to the state’s notification requirements. These changes become effective January 1, 2016.

When Is Personal Information Considered “Encrypted”?

Under California’s current law, the notification requirements do not apply if the personal information is “encrypted.” Until now, however, the law had not defined when personal information would be considered “encrypted.” Assembly Bill 964 amends California Civil Code sections 1798.29 and 1798.82 to provide a definition for this previously undefined term. With passage of the amendment, the term “encrypted” is now defined as “rendered unusable, unreadable or indecipherable to an unauthorized person through a security technology or methodology generally accepted in the field of information technology.” This language seems to allow for flexibility in the types of encryption that can be applied, as well as for future changes in encryption technology. For more information on encryption technologies, click here.
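
By way of illustration, here is a minimal sketch of encryption that would appear to fit this definition, assuming the widely used Python “cryptography” package; the sample record is hypothetical.

```python
from cryptography.fernet import Fernet

# Fernet (AES-128 in CBC mode with an HMAC-SHA256 integrity check) is one
# example of a "security technology or methodology generally accepted in the
# field of information technology."
key = Fernet.generate_key()   # the key must be stored separately from the data
f = Fernet(key)

record = b"SSN: 123-45-6789"  # hypothetical personal information
token = f.encrypt(record)     # unusable and unreadable without the key
assert f.decrypt(token) == record
```

The practical point: if stolen media hold only the ciphertext and not the key, the data arguably remains “unusable, unreadable or indecipherable,” and the notification requirements may not be triggered.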

Updates to Content and Form of Breach Notification

Senate Bill 570 amends California Civil Code sections 1798.29 and 1798.82 to clarify the required content of security breach notifications issued by government agencies and businesses, and provides a model security breach notification form. All security breach notifications must now be titled “Notice of Data Breach” and present the required information under the following headings: “What Happened,” “What Information Was Involved,” “What We Are Doing,” “What You Can Do,” and “For More Information.” The notice must be designed to call attention to the nature and significance of the matter, must be clear and conspicuous, and must be in text no smaller than 10-point type.
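
As a rough illustration, a notice skeleton using the required title and headings might be generated as follows; the bracketed placeholder text is hypothetical and would need to reflect the facts of the particular incident.

```python
# Skeleton of a California-style breach notice using the headings SB 570 requires.
NOTICE_TEMPLATE = """\
NOTICE OF DATA BREACH

What Happened
{what_happened}

What Information Was Involved
{what_information}

What We Are Doing
{what_we_are_doing}

What You Can Do
{what_you_can_do}

For More Information
{more_information}
"""

print(NOTICE_TEMPLATE.format(
    what_happened="[Describe the incident in general terms]",
    what_information="[List the categories of personal information involved]",
    what_we_are_doing="[Describe the investigation and protective measures]",
    what_you_can_do="[Describe steps recipients can take to protect themselves]",
    more_information="[Provide a toll-free number, email address, or website]",
))
```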

Use of the model notification form will be deemed compliant with California’s notification requirements, making the form helpful for agencies and businesses trying to understand what the notice needs to say. However, in the case of breaches affecting individuals in multiple states, when simplifying the notification process is critical, use of California’s model notice across multiple states may be problematic. For example, the “What Happened” section should not be included in notices to Massachusetts residents because that state’s law prohibits including a description of the nature of the breach or unauthorized acquisition or use.

Information Obtained from Automated License Plate Recognition Systems is Personal Information

Popular among local law enforcement, automated license plate recognition (ALPR) systems allow license plate information to be captured from videos and stored. Senate Bill 34 added new sections to California’s Civil Code, beginning with section 1798.90.5, requiring certain users of those systems – called ALPR operators – to safeguard ALPR information, including a requirement to implement a usage and privacy policy to ensure that the collection, use, maintenance, sharing and dissemination of ALPR information is consistent with respect for individuals’ privacy and civil liberties. The bill also amends California Civil Code sections 1798.29 and 1798.82 to include information obtained from ALPR systems in the definition of “personal information” when used along with an individual’s name. Thus, if this information is involved in a breach of security, it will trigger a notification requirement. Also, individuals harmed by unauthorized access or use of ALPR information, or by a breach of security of an ALPR system, may bring a private right of action.

These amendments represent significant changes to the security breach notifications provisions of Civil Code sections 1798.29 and 1798.82, as well as additional protections for information obtained from ALPR systems. In particular, they impact how to respond to security breaches, how to protect personal information and the scope of what information is protected. Businesses are encouraged to review their encryption policies, adopt compliant security breach notification forms and, if using an ALPR system, adopt compliant policies with respect to ALPR information and the employees who control those systems.

HIPAA Phase 2 Audits to Start in Early 2016, OCR States In Response to OIG Recommendations

Responding to a Department of Health and Human Services Office of Inspector General (OIG) report recommending stronger oversight of covered entities’ compliance with the HIPAA Privacy Rule, the Office for Civil Rights (OCR) stated that in early 2016 it will launch Phase 2 of its audit program measuring compliance with HIPAA’s privacy, security and breach notification requirements by covered entities and business associates.

After conducting a study to assess OCR’s oversight of covered entities’ compliance with the HIPAA Privacy Rule, OIG issued a report finding that OCR should strengthen its oversight of covered entities and making several recommendations. Specifically, OIG recommended that OCR:

  1. fully implement a permanent audit program;
  2. maintain complete documentation of corrective action;
  3. develop an efficient method in its case-tracking system to search for and track covered entities;
  4. develop a policy requiring OCR staff to check whether covered entities have been previously investigated; and
  5. continue to expand outreach and education efforts to covered entities.

OCR concurred with each of OIG’s recommendations. In its response to the report, OCR stated it is moving forward with a permanent audit program and will launch Phase 2 of that program in early 2016. The program will target common areas of noncompliance and will include business associates as well as covered entities. Phase 2 “will test the efficacy of the combination of desk reviews of policies as well as on-site reviews.” Accordingly, both covered entities and business associates should be reviewing their HIPAA policies and practices and developing a plan for working with OCR in on-site reviews.

OCR also indicated it is working on improving its ability to document and track corrective actions taken by covered entities and business associates in response to an OCR investigation. In addition, OCR revealed that it now has the ability to search for and track covered entities’ compliance history. OCR will now require investigators to check for prior investigations at the outset of new investigations of covered entities and business associates. This may mean a greater likelihood of on-site visits if a covered entity’s history indicates a potential for systemic compliance issues.

Finally, OCR agreed with OIG’s recommendation that it should continue to expand its outreach and education efforts. Information about those efforts can be found in Appendix C to OIG’s report.

As we previously reported, having the right documents in place can go a long way toward helping an organization survive an OCR HIPAA audit. Now that it is clear that these audits are coming early next year, it is important that covered entities and business associates invest the time in identifying and closing any HIPAA compliance gaps before an OCR investigator does this for them.

Wearables, Wellness and Privacy

Bloomberg BNA (subscription) recently reported that this fall the Center for Democracy & Technology (CDT) will be issuing a report on Fitbit Inc.’s privacy practices. Avid runners, walkers or those up on the latest gadgets likely know about Fitbit and its line of wearable fitness devices. Others may know about Fitbit due to the need to measure progress in their employers’ wellness programs, or even whether they qualify for an incentive. When participating in those programs, employees frequently raise questions about the privacy and security of data collected under such programs, a compliance issue for employers. Earlier this month, Fitbit reported that its wellness platform is HIPAA compliant.

Fitbit’s Charge HR (the one I use) tracks some interesting data in addition to the number of steps: heart rate, calories burned, sleep activity, and caller ID. This and other data can be synced with a mobile app, allowing users, among other things, to create a profile with more information about themselves, track progress daily and weekly, and find and communicate with friends using similar devices.

Pretty cool stuff, and reasons why Fitbit is the most popular manufacturer of wearables, with nearly 25 percent of the market, as noted by Bloomberg BNA. But, of course, Fitbit is not the only player in the market, and the same issues have to be considered with the use of wearables regardless of the manufacturer.

According to Bloomberg BNA’s article, one of the concerns raised by CDT about Fitbit and other wearables is that the consumer data collected by the devices may not be protected by federal health privacy laws. However, CDT’s deputy director of the Consumer Privacy Project stated to Bloomberg BNA that she has “a real sense that privacy matters” to Fitbit. This is a good sign, but the laws that apply to the use of these kinds of devices depend on how they are used.

When it comes to employer-sponsored wellness programs and health plans, a range of laws may apply raising questions about what data can be collected, how it can be used and disclosed, and what security safeguards should be in place. At the federal level, the Health Insurance Portability and Accountability Act (HIPAA), the Americans with Disabilities Act (ADA), and the Genetic Information Nondiscrimination Act (GINA) should be on every employer’s list. State laws, such as California’s Confidentiality of Medical Information Act, also have to be taken into account when using these devices in an employment context.

Recently issued EEOC proposed regulations concerning wellness programs and the ADA address medical information confidentiality. If finalized in their current form, the regulations would, among other safeguards, require employers to provide a notice informing employees about:

  • what medical information will be obtained,
  • who will receive the medical information,
  • how the medical information will be used,
  • the restrictions on its disclosure, and
  • the methods that will be used to prevent improper disclosure.

Preparing these notices for programs using wearables will require knowing more about the capabilities of the devices and how data is accessed, managed, disclosed and safeguarded.

But is all information collected from a wearable “medical information”? Probably not. The number of steps a person takes on a given day, in and of itself, seems unlikely to be medical information. However, data such as heart rate and other biometrics might be considered medical information subject to the confidentiality rule. Big data analytics and IoT may begin to play a greater role here, enabling more detailed pictures to be developed about employees and their activities and health through the many devices they use.

Increasingly, wellness programs seek to incentivize the household, or at least employees and their spouses. Collecting data from the wearables of both employee and spouse may raise issues under GINA, which prohibits employers from providing incentives to obtain genetic information from employees. Genetic information includes the manifestation of disease in family members (yes, spouses are considered family members under GINA). The EEOC is currently working on proposed regulations under GINA that we hope will provide helpful insight into this and other issues related to GINA.

HIPAA too may apply to wearables and their collection of health-related data when related to the operation of a group health plan. Employers will need to consider the implications of this popular set of privacy and security standards including whether (i) changes are needed in the plan’s Notice of Privacy Practices, (ii) business associate agreements are needed with certain vendors, and (iii) the plan’s risk assessment and policies and procedures adequately address the security of PHI in connection with these devices.

Working through plans for the design and implementation of a typical wellness program certainly must involve privacy and security; more so for programs that incorporate wearables. Fitbits and other devices likely raise employees’ interest and desire to get involved, and can ease administration of the program, such as by tracking achievement of program goals. But they raise additional privacy and security issues in an area where the law continues to develop. So, employers need to consider this carefully with their vendors and counselors, and keep a watchful eye for the additional regulation likely to be coming.

Until then, I need to get a few more steps in…

HIPAA Audits Maybe, But Audit Preparedness Definitely!

According to a Bloomberg article, the second phase of HIPAA audits by the Office for Civil Rights (OCR), originally set to commence in 2014, may be coming soon. This update came at a HIPAA conference co-hosted by OCR, during which OCR Director Jocelyn Samuels said the agency was in the process of confirming contact information for the entities that would be audited. The reason for the delay – budgetary limitations and gaps in personnel.

Covered entities and business associates have been hearing about a second phase of HIPAA audits and a permanent OCR audit program since the OCR pilot program back in 2011 and 2012. But inaction by the agency should not delay an organization’s preparedness. Perhaps more likely than an OCR audit, a covered entity or business associate may experience a data breach affecting protected health information (PHI). Most recently, Excellus Healthcare experienced a breach affecting 10.5 million individuals. In the case of a breach, a resulting OCR investigation/compliance review and findings of inadequate compliance with the privacy or security rules could result in far more dire consequences for the organization than what might follow an audit.

Reports about the upcoming audit program indicate some key areas of focus by the OCR. These also are areas that OCR has raised numerous times in settlements with covered entities following data breach investigations.

  • Has a risk assessment been carried out and documented?
  • Are written policies and procedures in place that address the privacy and security standards, and vulnerabilities identified in the assessment?
    • Strong “practices” are not enough – they need to be in writing.
  • Is an incident response plan in place for responding to breaches of unsecured PHI?
  • Are adequate safeguards in place for mobile devices and storage media?
    • Your doctors, nurses and staff have their own devices – do you have a BYOD policy that incorporates HIPAA issues, not just data security?
  • Is a training program in place, with documented training for new workforce members and periodically for all workforce members?
  • Is a compliant Notice of Privacy Practices provided to patients?
    • Have you checked your website lately? Many covered healthcare providers only provide hardcopies of these Notices in the office without realizing that they may need to have the Notice prominently available on the practice’s website.
  • Do you have appropriate agreements in place with business associates?

It is anticipated that most of the audits will be “desk audits.” This means that an OCR investigator will not be coming to visit you in person, but will be asking for documents. The investigator will want to see that the assessment has taken place, that the policies have been adopted, that the training has been conducted, that the notices have been delivered, etc. Operational compliance (that is, are you doing what compliant policies say you should be doing) may not always be 100%, but having the right documents in place can go a long way toward helping you survive an OCR audit, whether in connection with the long-awaited second phase of audits, or following a data breach.

DoD Issues Interim Rule For Contractors on Incident Reporting and Cloud Computing Services

Government contractors have a wide range of unique challenges (find out more about these here), not the least of which is data security. A good example is the interim rule the Department of Defense (DoD) issued last month that implements sections of the National Defense Authorization Act for Fiscal Years 2013 and 2015. In short, these provisions expand the incident reporting requirements for contractors and increase the security requirements for cloud service providers.

The Secretary of Defense determined that “urgent and compelling” reasons exist to issue the interim rule without prior opportunity for public comment. There is an urgent need to protect covered defense information and to increase awareness of the full scope of cyber incidents being committed against defense contractors. The use of cloud computing has greatly increased, according to the Secretary, and has increased the vulnerability of DoD information. The recent high-profile breaches of Federal information also influenced this determination. It is easy to see how similar considerations will influence other federal and state agencies to tighten their data security requirements on their contractors and subcontractors.

The hope here is that the rule will increase the security of DoD information on contractor systems, help to mitigate risk, and gather information for the development of future improvements in cybersecurity. Note that the DoD will consider public comments on the interim rule before issuing the final rule. Comments must be submitted on or before October 26, 2015 to be considered.

Incident Reporting Highlights

  • Contractors and subcontractors must report cyber incidents that result in an actual or potentially adverse effect on a covered contractor information system or covered defense information residing on that system, or on a contractor’s ability to provide operationally critical support.
  • A “cyber incident” means actions taken through the use of computer networks that result in a compromise or an actual or potentially adverse effect on an information system and/or the information residing therein. A “compromise” is the disclosure of information to unauthorized persons, or a violation of the security policy of a system, in which unauthorized intentional or unintentional disclosure, modification, destruction, or loss of an object, or the copying of information to unauthorized media may have occurred.
  • Rapid reporting is required – this means within 72 hours of discovery of a cyber incident.
  • The DoD recognizes that the reporting may include the contractor’s proprietary information, and will protect against the unauthorized use or release of that information.
  • The reporting of a cyber incident will not, by itself, be interpreted as evidence that the contractor or subcontractor has failed to adequately safeguard covered defense information.

Cloud Computing Highlights

  • Contracts for cloud computing services may be awarded only to providers that have been granted provisional authorization by the Defense Information Systems Agency, at the appropriate level.
  • Cloud computing service providers must maintain government data within the 50 states, the District of Columbia, or outlying areas of the United States, unless physically located on DoD premises. Government data can be maintained outside the U.S. upon written notification from the contracting officer.
  • Government data means any information, document, media, or machine-readable material, regardless of physical form or characteristics, that is created or obtained by the government in the course of official government business.
  • Purchase requests for cloud computing service must, among other things, describe government data and the requirement for the contractor to coordinate with the responsible government official to respond to any “spillage” occurring in connection with the services. Spillage happens when a security incident results in the transfer of classified or controlled unclassified information onto an information system not authorized for the appropriate security level.

Defense contractors and their subcontractors will need to review the interim rule carefully and make adjustments. Of course, the focus here is not solely on personally identifiable information, but the same principles apply. Maintaining a well-thought-out and practiced incident response plan is critical.

Cancer Care Group to Pay $750,000 to Settle HIPAA Breach, as KPMG Finds 81 Percent of Hospitals and Health Insurance Companies Had a Breach in the Past Two Years

On September 2, the Office for Civil Rights (OCR) reported that it agreed to settle potential violations of the HIPAA privacy and security regulations with Cancer Care Group, Inc. The dollar amount of the settlement, $750,000, is significant, and the agreement to adopt a robust, multi-year corrective action plan under the watchful eye of the government is likely to be an unwanted strain on the business.

With 17 radiation oncologists, Cancer Care is by no means a mom-and-pop outfit, but it is also not a national provider. Small to mid-sized healthcare providers and their business associates need to take note. What started as a seemingly small theft issue – a laptop bag stolen from an employee’s car – has led to nearly a million dollars in settlement and other costs, and years of government monitoring of the practice’s privacy and security compliance.

Thinking your healthcare or related business will not experience a breach may not be a wise approach. According to a KPMG report highlighted by ConsumerAffairs, 81 percent of hospitals and health insurance companies have had a data breach in the past two years. The real question, however, is whether your business will be able to stand up to the OCR compliance review that comes along with the OCR’s investigation of a breach.

What happened: On August 29, 2012, OCR received notification from Cancer Care regarding a breach of unsecured electronic protected health information (ePHI) after a laptop bag was stolen from an employee’s car. The bag contained the employee’s computer and unencrypted backup media, which held the names, addresses, dates of birth, Social Security numbers, insurance information and clinical information of approximately 55,000 current and former Cancer Care patients. This is a fairly typical scenario for many businesses, including healthcare providers, given the myriad devices employees use every day in their jobs.

The OCR investigation: OCR claims Cancer Care was in “widespread non-compliance with the HIPAA Security Rule.” According to OCR’s press release, the provider “had not conducted an enterprise-wide risk analysis…did not have in place a written policy specific to the removal of hardware and electronic media containing ePHI into and out of its facilities, even though this was common practice within the organization.” So you see, it was not so much the theft of the laptop that drew the agency’s ire, but the alleged lack of safeguards and compliance that could have (even if in fact it would not have) prevented the breach.

OCR’s Corrective Action Plan (CAP): You can read the CAP here. Under the CAP, you’ll find that Cancer Care needs to get OCR’s approval before it can proceed with key compliance steps. For example, it needs to provide its risk assessment to OCR within 90 days of the effective date of the settlement agreement, and await OCR’s approval. A similar process applies for other components of the HIPAA security rules, including the development of a risk management plan and training program. Cancer Care must also provide an annual report to OCR for at least three years concerning updates or changes to its risk management plan, among a number of other things.

Take Away: No health care provider or other business wants to have a breach. But if it does, it will be less likely to face significant enforcement action by OCR if it has a compliance program in place – perform and document a risk assessment; address the risks of mobile devices; design and implement a quality training program. These are just a few of the steps a health care provider, health plan, business associate or other organization with HIPAA privacy and security obligations should be taking to mitigate these compliance risks.