As organizations aim to return to some semblance of normalcy and help ensure a healthy and safe workplace, many have implemented COVID-19 screening programs that check for symptoms and ask about an employee’s recent travel and potential contact with the virus. Moreover, many states and localities across the nation are mandating or recommending the implementation of COVID-19 screening programs in the workplace and beyond. In many cases, organizations have leveraged various technologies, such as social distancing bands, apps, and thermal scanners, to streamline their screening programs.

Despite the benefits of COVID-19 screening programs, organizations should proceed carefully to examine not only whether a particular solution will have the desired effect, but whether it can be implemented in a compliant manner with minimal legal risk, particularly regarding the privacy and security implications. Just last week, Amazon was hit with a proposed class action lawsuit in Illinois state court claiming the company’s COVID-19 screening program violated Illinois’s Biometric Information Privacy Act (BIPA). According to the complaint, Amazon employees were required to undergo facial geometry scans and temperature scans before entering company warehouses, without the prior consent from employees that the law requires when collecting biometric identifiers, such as a facial geometry scan.

The BIPA sets forth a comprehensive set of rules for companies doing business in Illinois that collect biometric identifiers or information of state residents. The BIPA has several key features:

  • Informed consent required prior to collection
  • A limited right to disclose biometric information
  • A written policy requirement addressing retention and data destruction guidelines
  • A prohibition on profiting from biometric data
  • A private right of action for individuals harmed by BIPA violations

Statutory damages can reach $1,000 for each negligent violation and $5,000 for each intentional or reckless violation.

The complaint alleges that Amazon employees “lost the right to control” how their biometric data was collected, used and stored, exposing them to “ongoing, serious, and irreversible privacy risks — simply by going into work”.  In addition to claims of failure to notify employees and obtain express consent regarding their biometric data collection practices, the complaint also alleges that Amazon failed to develop and follow a publicly available retention schedule and guidelines for permanently destroying workers’ biometric data.

While this case is an important reminder of BIPA implications, implementing a COVID-19 screening program, or for that matter any type of social distancing or contact tracing technology to help prevent or limit the spread of coronavirus, can have privacy and security implications that extend well beyond the BIPA. Depending on the type of data being collected and who is collecting it, such practices may trigger compliance obligations under several federal laws, such as the Americans with Disabilities Act (ADA), the Genetic Information Nondiscrimination Act (GINA), and the Health Insurance Portability and Accountability Act (HIPAA). In addition to the BIPA, other state laws should be considered, if applicable, such as the California Consumer Privacy Act (CCPA) and state laws that require reasonable safeguards to protect personal information and notification in the event of a data breach. International laws, including the General Data Protection Regulation (GDPR), also can affect screening programs depending on their scope. Beyond statutory or regulatory mandates, organizations will also need to consider existing contracts or services agreements concerning the collection, sharing, storage, or return of data, particularly for service providers supporting the screening program. Finally, whether mandated by law or contract, organizations should still consider best practices to help ensure the privacy and security of the data they are responsible for.

COVID-19 screening programs, along with the extensive technology at our disposal and in development, are certainly helping organizations address the COVID-19 pandemic, ensure a safe and healthy workplace and workforce, and prevent future pandemics. Nevertheless, organizations must consider the legal risks, challenges, and requirements of any such technology prior to implementation.

Back in August, after much anticipation and several rounds of review and modification, the California Consumer Privacy Act (CCPA) regulations finally became effective. This was long awaited by businesses and their service providers looking for compliance guidance and clarity on key issues related to facilitation of consumer rights. This week, the California Department of Justice (“DOJ”) announced a third set of proposed modifications to the CCPA regulations.

As a quick recap of past developments related to the CCPA regulations, the DOJ first published proposed CCPA regulations on October 11, 2019. In February 2020, and again in March, the DOJ gave notice of modifications to the proposed regulations based on comments received during the relevant public comment periods. The final version of the CCPA regulations that became effective in August was substantively unchanged from the previous March version.

Below are highlights from the third set of proposed modifications made to the CCPA regulations, released this week:

  • Addition of examples of how businesses that collect personal information in the course of interacting with consumers offline can provide the notice of right to opt-out of the sale of personal information through an offline method.
  • Guidance that a business’s methods for submitting requests to opt-out should be easy to execute and require minimal steps, with illustrative examples of methods designed with the purpose or substantial effect of subverting or impairing a consumer’s choice to opt-out.
  • Clarification on the proof that a business may require an authorized agent to provide, as well as what the business may require a consumer to do to verify their request.
  • Clarification that businesses with actual knowledge that they sell the personal information of minors are required to include in their privacy policies a description of their method for verifying that the person authorizing the sale of a child’s data is actually that child’s parent or guardian.

The DOJ’s notice regarding the proposed modifications and a comparative version of the new text are available here.  The DOJ will accept written comments from the public regarding the proposed modifications between Tuesday, October 13, 2020 and Wednesday, October 28, 2020. Written comments may be submitted to the DOJ via email to PrivacyRegulations@doj.ca.gov.

Since the CCPA’s effective date back in January, there has been an influx of developments as the legislature and regulators help to clarify ambiguities and provide greater specificity on key compliance issues facing covered businesses and their service providers. Just last week we reported on the CCPA amendment AB 1281, which extended exemptions for “B2B” and employee personal information. We will continue to provide updates on the CCPA and other related developments as they unfold.

New York and New Jersey have released “COVID Alert NY” and “COVID Alert NJ,” apps designed to alert users when they have been exposed to someone who tested positive for COVID-19. These apps follow those released in Pennsylvania and Delaware and are soon to be joined by Connecticut’s. The states hope to enhance their contact tracing efforts, but what about privacy?

According to New Jersey Governor Murphy,

The app is free and secure, and your identity, personally identifying information, and location will never be collected. The more phones that have the app, the better we can fight this pandemic.

Larry Schwartz, a former high-ranking aide to Governor Cuomo, explains that privacy is achieved “not through location services tied to smartphones but through the device’s Bluetooth proximity detection.” More specifically, the apps use the Exposure Notification System technology developed by Google and Apple. Because the apps use Bluetooth instead of GPS, they do not need to track users’ locations, and users can turn the feature off at any time.

According to state officials, the COVID Alert apps will notify users if they have been in “close contact” (within six feet for at least 10 minutes) with someone who has tested positive for COVID-19. In order for the apps to work between users in close contact, a few things have to happen.

First, both users must have downloaded the app on their mobile devices and opted in to receive “Exposure Notifications.” For COVID Alert NY and NJ, the apps are free and available to anyone 18 or older who lives, works, or attends college in New York or New Jersey, and can be downloaded in multiple languages from the Google Play Store or Apple App Store. As with all apps, users should read the app’s privacy statement; here is New Jersey’s privacy statement.

Second, one of the users would need to have tested positive for COVID-19 and cooperated with the local health department by agreeing to anonymously enter a code into the user’s app.

Third, when two users are in close contact, as described above, their devices will exchange codes via Bluetooth. Using Bluetooth Low Energy technology, a device can detect when another phone with the same app is within six feet. If a received code matches the list of codes associated with positive COVID-19 app users, the user will get an “Exposure Alert” together with recommendations on next steps to stay safe and prevent community spread, such as self-quarantining and getting tested.
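To make the matching step concrete, here is a minimal, purely illustrative sketch of the concept in Python. It is not the actual Google/Apple Exposure Notification implementation; the data structures, names, and thresholds below are assumptions for illustration only.

```python
# Illustrative sketch of the exposure-matching concept described above.
# NOT the actual Google/Apple Exposure Notification implementation;
# all names, structures, and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Encounter:
    code: str          # anonymous rotating code received over Bluetooth
    duration_min: int  # how long the other device stayed within range

# Codes this phone heard during recent Bluetooth proximity events.
observed_encounters = [
    Encounter(code="a1b2c3", duration_min=12),
    Encounter(code="d4e5f6", duration_min=3),
]

# Codes published for users who tested positive and, working with the
# local health department, voluntarily entered their verification code.
positive_codes = {"a1b2c3", "zz9yy8"}

CLOSE_CONTACT_MINUTES = 10  # "within six feet for at least 10 minutes"

def exposure_alerts(encounters, positives):
    """Return encounters that match a positive code and meet the
    close-contact duration threshold."""
    return [
        e for e in encounters
        if e.code in positives and e.duration_min >= CLOSE_CONTACT_MINUTES
    ]

for match in exposure_alerts(observed_encounters, positive_codes):
    print(f"Exposure Alert: close contact of {match.duration_min} minutes")
```

Note that in this model the matching happens entirely on the user’s device against a downloaded list of anonymous codes, which is why no identity or location data needs to leave the phone.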

With reports of data breaches and intrusive government surveillance of citizens, it is no wonder New York and New Jersey state officials are touting COVID Alert’s attention to privacy. However, app users are permitted to do a “COVID Check-In” and enter any symptoms they are having. As I write this post, there were 15,561 check-ins today, with 97 percent feeling good. When checking in, users are reminded that the app does not reveal the user’s identity, but that the information, which could include race, gender, and ethnicity, can be useful for public health action. Users also are reminded that a record is kept of symptoms entered into the app for future reference.

According to reports, the app cost $700,000 to develop, reportedly paid for by the Bloomberg Foundation. It remains to be seen whether the app will serve its intended purpose and keep user data private and secure.

By signing AB 1281 into law on September 29, 2020, California Governor Gavin Newsom amended the California Consumer Privacy Act (“CCPA”) to extend until January 1, 2022, not only the current exemption of employee personal information from most of the CCPA’s protections, but also the so-called “B2B” exemption. Welcomed by many “B2B” (business-to-business) organizations, this exemption, originally enacted under AB 1355, removed significant amounts of personal information from the CCPA’s reach. Note, however, that this exemption could be further extended until January 1, 2023, if the California Privacy Rights Act (CPRA) is approved by voters on November 3, 2020.

The “B2B” exemption applies to the following:

Personal information reflecting a written or verbal communication or a transaction between the business and the consumer, where the consumer is a natural person who is acting as an employee, owner, director, officer, or contractor of a company, partnership, sole proprietorship, nonprofit, or government agency and whose communications or transaction with the business occur solely within the context of the business conducting due diligence regarding, or providing or receiving a product or service to or from such company, partnership, sole proprietorship, nonprofit or government agency

In other words, personal information obtained by a business from a consumer is generally exempt under this provision when that consumer is acting as a representative of another organization and engages with the business in communications or transactions that relate solely to providing or receiving products or services. However, similar to the employee personal information exemption, certain personal information in this context remains subject to the CCPA’s private right of action if that personal information is involved in a data breach and reasonable safeguards were not in place.

CCPA covered businesses have a temporary reprieve on employment and “B2B” personal information, and will have to wait until election day to see if they will get another year.

On September 29, California Governor Gavin Newsom signed into law AB 1281, an amendment to the California Consumer Privacy Act (“CCPA”) that extends the current exemption of employee personal information from most of the CCPA’s protections until January 1, 2022. The exemption was slated to sunset on December 31, 2020. It is important to highlight that under the current exemption, while employees are temporarily excluded from most of the CCPA’s protections, two areas of compliance remain: (i) providing a notice at collection, and (ii) maintaining reasonable safeguards for a subset of personal information, backed by a private right of action for individuals affected by a data breach caused by a business’s failure to do so.

Notably, the operation of the extension is contingent upon voters not approving ballot Proposition 24, the California Privacy Rights Act (“CPRA”), in November. The CPRA would amend the CCPA to include more expansive and stringent compliance obligations and, inter alia, would extend the employment personal information exemption until January 1, 2023.

As a reminder, during this challenging time it is important for employers, regardless of jurisdiction, to remain vigilant about the types of personal information collected from employees and how that information is used. Pre-COVID-19, for example, employers were not thinking of performing temperature checks on employees or collecting other personal information in connection with COVID-19 screenings, and as a result they may need to update their privacy notices to capture this category of information and the purposes for which it is used.

A full discussion on AB 1281 is available here.

During the same session, Governor Newsom vetoed an additional privacy bill, AB 1138, which would have required parental or guardian consent for the creation of a social media or application account for children under 13. Under the federal Children’s Online Privacy Protection Act (COPPA), operators of Internet websites or online services must obtain parental or guardian consent before collecting personal information from a child known to be under 13, and states have the authority to enforce COPPA. In his veto statement, Governor Newsom highlighted that “Given its overlap with federal law, this bill would not meaningfully expand protections for children, and it may result in unnecessary confusion.” However, he concluded that his Administration is “open to exploring ways to build upon current law to expand safeguards for children online.”

California continues to be a leader in privacy and cybersecurity legislation. We will continue to update on CCPA and other related developments as they unfold.

The House of Representatives recently passed the Internet of Things (IoT) Cybersecurity Improvement Act of 2020 (the Act).  The Act has been moved to the Senate for consideration. The legislation sets minimum security standards for all IoT devices purchased by government agencies.

IoT refers to the myriad of physical devices that are connected to the internet, collecting and sharing data.  They are used by both consumers and corporations.

Common examples range from consumer products such as fitness trackers and home thermostats to devices used by business and government to measure air quality and monitor the operation of military components.

Despite the tasks that can be accomplished by IoT devices, they remain vulnerable to cyberattack. Currently, there is no national standard addressing cybersecurity for IoT devices, although there have been several attempts in recent years to develop a national IoT strategy. For example, in late 2017, a coalition of tech industry leaders released a report calling for the creation and implementation of a national strategy to invest in, innovate, and accelerate development and deployment of IoT, and stressed the need to enact legislation that would, inter alia, require IoT security measures in a “comprehensive manner.” Further, as far back as 2015, the FTC issued “concrete steps” businesses can take to enhance the privacy and security of IoT for consumers.

According to a statement issued by Rep. Robin Kelly (D-IL), sponsor of the Act in the House, “Securing the Internet of Things is a key vulnerability Congress must address. While IoT devices improve and enhance nearly every aspect of our society, economy and everyday lives, these devices must be secure in order to protect Americans’ personal data.”  Senator Mark Warner (D-VA), who introduced the Senate version of the legislation back in 2017, and again in 2019, stated that, “manufacturers today just don’t have the appropriate market incentives to properly secure the devices they make and sell – that’s why this legislation is so important.”  Rep. Kelly’s statement noted that many IoT devices are shipped with factory-set passwords that are frequently unable to be updated or patched. IoT devices also can represent a weak point in a network’s security, leaving the rest of the network vulnerable to attack.

The Act requires the National Institute of Standards and Technology (NIST) to publish standards and guidelines on federal government agencies’ use of IoT devices.  The Act states that the Office of Management and Budget is to review government policies to ensure they are in line with NIST guidelines. Federal agencies would be prohibited from procuring IoT devices or renewing contracts for such devices if it is determined that they do not comply with the security requirements.

New technologies and devices continuously emerge, promising a myriad of societal, lifestyle, and workforce benefits, including increased productivity, enhancements to talent recruiting and management, improved monitoring and tracking of human and other assets, and better wellness tools. While these advancements are undoubtedly valuable, the privacy and security risks should be considered and addressed prior to implementation or use, even without national IoT security legislation in place.

The passing of U.S. Supreme Court Justice Ruth Bader Ginsburg will likely bring with it many shifts in the Court on key issues, among them matters regarding the Telephone Consumer Protection Act (TCPA), most imminently what qualifies as an autodialer. The TCPA has been ever evolving in recent years as courts and legislatures attempt to keep pace with changes in technology.

When the TCPA was enacted in 1991, most American consumers were using landline phones, and Congress could not begin to contemplate the evolution of the mobile phone. The TCPA defines an “Automatic Telephone Dialing System” (ATDS) as “equipment which has the capacity—(A) to store or produce telephone numbers to be called, using a random or sequential number generator; and (B) to dial such numbers.” 47 U.S.C. § 227(a)(1). In 2015, the Federal Communications Commission (FCC) issued its Declaratory Ruling & Order (2015 Order), which sought to clarify the TCPA for the mobile era, including the definition of an ATDS and which devices qualify. The 2015 Order only complicated matters further, providing an expansive interpretation of what constitutes an ATDS and sparking a surge of TCPA lawsuits in recent years.

This past July, the Supreme Court granted a petition for review of a Ninth Circuit ruling on the issue of whether the definition of “ATDS” in the TCPA encompasses any device that can “store” and “automatically dial” telephone numbers, even if the device does not “us[e] a random or sequential number generator.” The Supreme Court’s decision should help resolve the circuit split and provide greater clarity and certainty for parties facing TCPA class action litigation.
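To make the technical distinction at the heart of the circuit split concrete, the following sketch is purely illustrative (hypothetical code, not any party’s actual dialing equipment), contrasting a random or sequential number generator with a system that dials automatically from a stored list:

```python
# Purely illustrative contrast of the dialer models at issue in the
# ATDS circuit split. Hypothetical code, not any party's actual system.
import random

def sequential_dialer(start: int, count: int):
    """Produces numbers with a sequential number generator,
    squarely within the TCPA's literal ATDS definition."""
    return [f"{start + i:010d}" for i in range(count)]

def random_dialer(count: int):
    """Produces numbers with a random number generator,
    also squarely within the literal definition."""
    return [f"{random.randint(0, 9_999_999_999):010d}" for _ in range(count)]

def stored_list_dialer(stored_numbers):
    """Dials automatically from a predetermined, stored list. Whether
    this kind of equipment is an ATDS is the question on which the
    circuits split and which the Supreme Court agreed to resolve."""
    return list(stored_numbers)
```

Most modern calling platforms resemble the third function, dialing only from customer lists, which is why the answer to this definitional question carries such broad consequences.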

President Trump’s recent nomination of Seventh Circuit Judge Amy Coney Barrett could be particularly impactful on the issue of defining an ATDS under the TCPA. In February of this year, Judge Barrett authored an opinion in which the Seventh Circuit narrowly held that the TCPA’s definition of an ATDS includes only equipment capable of storing or producing numbers using a “random or sequential” number generator, excluding most “smartphone age” dialers. The Seventh Circuit expressly rejected the Ninth Circuit’s more expansive interpretation from a 2018 ruling (currently under review by the Supreme Court), which concluded that the TCPA covers any dialer that “automatically” calls from a stored list of numbers. These rulings are significant because most technologies in use today dial only from predetermined lists of numbers.

Under the facts of the Seventh Circuit case, the plaintiffs alleged that they had received over a dozen unsolicited calls from the defendant over a one-year period. While the defendants acknowledged that they had indeed placed the calls, they argued that this was not a TCPA violation, as their calling system required too much “human intervention” to qualify as an ATDS. Judge Barrett highlighted in the Seventh Circuit ruling that accepting the plaintiffs’ arguments against the defendant’s dialing system would have “far-reaching consequences…it would create liability for every text message sent from an iPhone. That is a sweeping restriction on private consumer conduct that is inconsistent with the statute’s narrower focus.”

Given Justice Ginsburg’s history as a proponent of protecting a consumer’s right to bring a class action both within the TCPA context and beyond, she very well may have supported a broader reading of the definition of ATDS. Whether Judge Barrett ultimately becomes Justice Ginsburg’s replacement remains to be seen, but anyone interested in the Supreme Court’s review of an ATDS under the TCPA should be following this development.

Earlier this month, our Immigration Group colleagues reported that the Department of Homeland Security (DHS) would release a new regulation to expand the collection of biometric data in the enforcement and administration of immigration laws. However, as reported by Roll Call, a DHS Inspector General report raised significant concerns about whether the Department is able to adequately protect sensitive biometric information, particularly with regard to its use of subcontractors. The expanded use of biometrics outlined in the Department’s proposed regulation, just as the increased use of biometric information such as fingerprints or facial recognition by private organizations, heightens the risk to such data.

The amount of biometric information maintained by DHS is already massive. The DHS Office of Biometric Identity Management maintains the Automated Biometric Identification System, a biometric data repository covering more than 250 million people that can process more than 300,000 biometric transactions per day. U.S. Customs and Border Protection (CBP) is mandated to deploy a biometric entry/exit system to record arrivals and departures to and from the United States, with the long-term goal of biometrically verifying the identity of all travelers exiting the United States and ensuring that each traveler has physically departed the country at air, land, and sea departure locations.

In 2018, CBP began a pilot effort known as the Vehicle Face System (VFS) in part to test the ability to capture volunteer passenger facial images as they drove by at speeds under 20 mph and the ability to biometrically match captured images against a gallery of recent travelers. DHS hired a subcontractor to assist with the development of the technology.

According to the Inspector General’s report, DHS has a range of policies and procedures to protect biometric information, which it considers sensitive personally identifiable information (SPII). Among those policies, DHS’ Handbook for Safeguarding Sensitive PII, Privacy Policy Directive 047-01-007, Revision 3, December 2017, requires contractors and consultants to protect SPII to prevent identity theft or other adverse consequences, such as privacy incidents, compromise, or misuse of the data.

Despite these policies, the DHS subcontractor engaged to support the pilot directly violated DHS security and privacy protocols when it downloaded SPII, including traveler images, from an unencrypted device and stored it on its own network. The subcontractor obtained access to this data between August 2018 and January 2019 without CBP’s authorization or knowledge. Later in 2019, the subcontractor’s network was subjected to a malicious ransomware attack, resulting in the compromise of 184,000 facial images of cross-border travelers collected through the pilot program, at least 19 of which were posted on the dark web.

As one of our 10 Steps for Tackling Data Privacy and Security Laws, “Vendors – trust but verify” is critical. For DHS, its failure to do so may damage the public’s trust, resulting in travelers’ reluctance to permit DHS to capture and use their biometrics at U.S. ports of entry. Non-governmental organizations that experience a similar situation with one of their vendors face an analogous loss of trust, as well as adverse impacts on business, along with compliance enforcement and litigation risks.

Among the recommendations CBP made following the breach was to ensure implementation of USB device restrictions and to apply enhanced encryption methods. CBP also sent a memo requiring all IT contractors to sign statements guaranteeing compliance with contract terms related to IT and data security. Like DHS, more organizations are developing written policies and procedures following risk assessments and other best practices. However, it is not enough to prepare and adopt policies; implementation is key.

A growing body of law in the United States requires not only the safeguarding of personal information, including biometric information, by organizations that own it, but also by the third-party service providers that process it on behalf of the owners. Carefully and consistently managing vendors and their access, use, disclosure, and safeguarding of personal information is a critical part of any written information security program.

A proposal by Indiana Attorney General Curtis Hill would add a significant step to the incident response process for breaches of security affecting Indiana residents. On Wednesday, during a U.S. Chamber of Commerce virtual event, he announced a proposed rule designed to better protect Hoosiers from cyberattacks. The proposed rule is expected to take effect by the end of the year.

In short, there are two components to the proposed regulations:

  • A requirement for data base owners to create, implement, and report a corrective action plan (CAP) to the Attorney General within thirty days of the date the owner reports a breach to the Attorney General under the state’s existing breach notification law.
  • A “safe harbor” for what constitutes “reasonable measures” to safeguard personal information in Indiana.

If the regulations are adopted, covered entities will need to revisit their incident response plans to ensure they have steps in place to timely submit a CAP to the Attorney General’s office. They might also consider modifying their data security plans to take advantage of the safe harbor.

Currently, Indiana law imposes general requirements on data base owners to “implement and maintain reasonable procedures, including taking any appropriate corrective action, to protect and safeguard from unlawful use or disclosure any personal information of Indiana residents collected or maintained by the data base owner.” Data base owners include persons that own or license computerized data that include personal information. As in several other states, these general obligations have not been well defined. AG Hill’s proposed rule, if adopted, would provide some clarity by creating several duties for data base owners.

First, the general requirement to take “any appropriate corrective action” would, in the context of a data breach, mean the following:

  • Continuously monitoring and remediating potential vulnerabilities in a timely fashion.
  • Taking reasonable steps to mitigate and prevent the continued unlawful use and disclosure of personal information following any breach of security of data.
  • Preparing a written CAP following any breach of security of data which does the following:
    • Outlines the nature and all known or potential causes of the breach with reasonable specificity and citations to applicable technical data.
    • Identifies the precise date and time of the initial breach, and any subsequent breaches, if feasible.
    • Confirms that corrective measures were implemented at the earliest reasonable opportunity.
    • Identifies the specific categories of personal information subject to unlawful use or disclosure, including the approximate number of individuals affected.
    • Identifies what steps have already been taken to mitigate and prevent the continued unlawful use and disclosure of personal information.
    • Identifies a specific corrective plan to mitigate and prevent the continued unlawful use and disclosure of personal information.
  • Certifying the development and implementation of the CAP to the Attorney General under penalty of perjury within thirty (30) days of providing notice of the breach to the Attorney General under existing law. In addition, the Attorney General would be authorized to conduct random and unannounced audits of the CAP.

In short, simply complying with the disclosure and notification requirements under Indiana’s existing breach notification law (IC 24-4.9-3) would not, by itself, constitute appropriate corrective action following a breach.

“We need a way to separate the businesses that are taking important steps to secure data from those who are not,” Attorney General Hill said. “This rule would provide businesses a playbook on how to protect data, and would protect the businesses that follow the playbook. It’s a win for both consumers and businesses.”

Second, the proposed rule outlines a “safe harbor” for what constitutes “reasonable measures” to protect personal information. More specifically, the rule identifies certain data security frameworks that, if adopted, would be presumed reasonable. These include:

  • a cybersecurity program that complies with the National Institute of Standards and Technology (NIST) cybersecurity framework and follows the most recent version of specified standards, such as NIST Special Publication 800-171,
  • for certain regulated covered entities, compliance with the following:
    • The federal USA Patriot Act.
    • Executive Order 13224.
    • The federal Driver’s Privacy Protection Act.
    • The federal Fair Credit Reporting Act.
    • The federal Health Insurance Portability and Accountability Act
  • compliance with the Payment Card Industry Data Security Standard (PCI DSS) in place at the time of the breach of security of data.

Because data security is not a one-time process, maintaining the safe harbor under the NIST framework requires the covered entity to implement any new version of the applicable standard. Any data security plan also would need to monitor vulnerabilities tracked by the NIST National Vulnerability Database and, for each critical vulnerability, commence remediation planning within twenty-four (24) hours after the vulnerability has been rated as such and apply the remediation within one (1) week thereafter. Additionally, covered entities must conduct risk assessments annually and revise their data security plans accordingly.
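As a rough illustration only, a covered entity might automate this monitoring against the NVD’s public REST API. The sketch below is a minimal example assuming NVD’s publicly documented API 2.0 endpoint and parameters; the remediation-ticket hook is hypothetical, and nothing in the proposed rule prescribes this particular workflow.

```python
# Minimal sketch: poll the NIST National Vulnerability Database for
# critical CVEs published in the last 24 hours. Endpoint and parameters
# assume NVD's public API 2.0; the ticketing hook is hypothetical.
import datetime
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_recent_critical_cves(hours_back: int = 24):
    """Return CVEs rated CRITICAL (CVSS v3) published in the window."""
    now = datetime.datetime.now(datetime.timezone.utc)
    start = now - datetime.timedelta(hours=hours_back)
    params = {
        "cvssV3Severity": "CRITICAL",
        "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
        "pubEndDate": now.strftime("%Y-%m-%dT%H:%M:%S.000"),
    }
    resp = requests.get(NVD_API, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("vulnerabilities", [])

def open_remediation_ticket(cve_id: str) -> None:
    """Hypothetical hook into a ticketing system, so remediation
    planning can begin within the rule's 24-hour window."""
    print(f"Remediation planning started for {cve_id}")

if __name__ == "__main__":
    for item in fetch_recent_critical_cves():
        open_remediation_ticket(item["cve"]["id"])
```

A production version would, of course, need scheduling, filtering to the entity’s actual technology stack, and evidence retention to document compliance with the one-week remediation deadline.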

The safe harbor further provides that data base owners that can bear the burden of demonstrating their data security plan is reasonably designed will not be subject to a civil action from the Office of the Attorney General arising from the breach of security of data.

It is worth noting that the frameworks listed might not apply to all of the data maintained by a covered entity. For example, the privacy and security regulations under HIPAA would not apply to employee data or other activities of the covered entity that do not involve “protected health information” but do involve personal information of Indiana residents. The regulations are unclear on this point, and covered entities must still consider reasonable measures for that data for the safe harbor to apply.

Over the past few years, and particularly during the COVID-19 pandemic, the Department of Health and Human Services Office for Civil Rights (OCR) has made countless efforts to enhance its Health Insurance Portability and Accountability Act (HIPAA) guidance and other related resources on its website. Last week, OCR launched a new feature on its website, HHS.gov, entitled Health Apps, which updates and renames OCR’s previous Health App Developer Portal and is available here.

The new site features the OCR’s helpful guidance on “when and how” HIPAA regulations may be applicable to mobile health applications, acutely relevant during the COVID-19 pandemic as many aspects of the healthcare industry shift to telehealth.

Here are the key features of OCR’s new Health Apps page:

  • Mobile Health Apps Interactive Tool
    • The Federal Trade Commission (FTC), in conjunction with OCR, the HHS Office of National Coordinator for Health Information Technology (ONC), and the Food and Drug Administration (FDA), created a web-based tool to help developers of health-related mobile apps understand what federal laws and regulations might apply to them.
  • Health App Use Scenarios & HIPAA
    • Provides various use scenarios for mHealth applications, and explains when an app developer may be acting as a business associate under the HIPAA Rules.
  • FAQs on the HIPAA Right of Access, Apps & APIs
    • Provides helpful insight on how the HIPAA Rules apply to covered entities and their business associates with respect to the right of access, apps, and application programming interface (APIs).
  • FAQs on HIPAA & Health Information Technology
    • Provides helpful insight on the relationship between HIPAA and Health IT.
  • Guidance on HIPAA & Cloud Computing
    • Assistance for HIPAA covered entities and business associates, including cloud service providers, on how to effectively utilize cloud computing while maintaining HIPAA compliance.

As telehealth increasingly becomes the norm, and as the US continues to implement and consider various forms of contact tracing apps, patient privacy and the maintenance of HIPAA privacy and security obligations have never been more important. The increased use of mobile health applications and other related tools to assist healthcare providers with telehealth capabilities also comes with an increased risk of data breaches and improper disclosures of protected health information (PHI) to unauthorized individuals. The features of OCR’s new Health Apps page are a great starting point for HIPAA covered entities and business associates that utilize mobile health apps and want to ensure compliance with their HIPAA obligations.

Below are some of our additional resources on recent OCR HIPAA-related initiatives: