With advances in technology and business marketing come changes in the law and new litigation. Many businesses are familiar with the federal Telephone Consumer Protection Act (TCPA) but may be less familiar with Florida’s version, the Florida Telephone Solicitation Act (FTSA). A recent wave of class-action lawsuits stems from 2021 amendments to the FTSA, largely targeting businesses that use phone calls and text messages to advertise their products and services. The following examines nuances of the FTSA and why the measures businesses may put in place to comply with the TCPA may not pass muster in Florida.

Difference Between the TCPA and FTSA

The critical difference between the TCPA (as currently interpreted by the Supreme Court) and the FTSA is set forth in Fla. Stat. §501.059(8)(a). Specifically, subsection (8)(a) provides: “A person may not make or knowingly allow a telephonic sales call to be made if such call involves an automated system for the selection or dialing of telephone numbers or the playing of a recorded message when a connection is completed to a number called without the prior express written consent of the called party.”

Under both the TCPA and the FTSA, violations require the use of automatic equipment. However, while the Supreme Court has clarified that to qualify as an “automatic telephone dialing system” under the TCPA, a device must have the capacity either to store or to produce a telephone number using a random or sequential number generator, the FTSA does not currently include a similarly limited definition. Rather, the FTSA’s lack of a definition has opened the door for plaintiffs to argue that any automated system that dials numbers or selects the order in which numbers are dialed fits within the statute. In short, if a business’s automatic equipment dials from a list, it will likely not implicate the TCPA but still may create risk under the FTSA. This is a critical point: in today’s technological environment, it is far more likely for entities to use a list for outreach than a random or sequential number generator.

In addition, the FTSA’s 2021 amendments added a requirement to obtain prior express written consent for such telephonic sales calls. The amendments also specify the elements that prior express written consent must contain.

The FTSA’s Critical Components

As noted, the FTSA prohibits all telemarketing sales calls, text messages, and direct-to-voicemail (also known as “ringless voicemail”) messages using an “automated system for selection or dialing of telephone numbers or playing of a recorded message” without prior express written consent. In other words, it is against the law for a company to use automated telephone dialing systems or pre-recorded messages in any telemarketing sales call, text message, or direct-to-voicemail message without the prior express written consent of the individual receiving the call.

Consumers Covered

The law itself covers only Florida residents. The FTSA contains a rebuttable presumption that a call made to any Florida area code is made to a Florida resident or a person in Florida at the time of the call. The FTSA has been enforced against businesses located and incorporated outside Florida. While there may be legal challenges on this issue, it is imperative that out-of-state businesses ensure their communications to Florida residents comply with the FTSA.

Private Right of Action

Under the FTSA’s private right of action, a plaintiff may recover $500 in statutory damages per violation for automated calls and do-not-call violations. For willful or knowing violations, the FTSA provides for treble damages of up to $1,500 per violation, plus attorneys’ fees.

Similar to the TCPA, the FTSA’s private right of action is not limited to automated calls but includes other violations such as calls to persons registered on Florida’s do-not-call list or making calls that fail to transmit the caller’s originating telephone number.

Based on the unique structure and provisions of the FTSA, in the past year, businesses have faced a voluminous number of FTSA claims, normally pled on a class basis. This means that an individual who brings an action claiming that he or she did not provide prior written consent to receive an automated text message or phone call brings the action individually and on behalf of all Florida residents who also received texts or calls from the same business dating back to the 2021 amendments to the FTSA. As such, potential damages for a thousand-person putative class can quickly climb to six or seven figures. Thus, compliance is key.
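To put rough numbers on that exposure, here is a minimal back-of-the-envelope sketch (hypothetical figures, with the simplifying assumption of one violating message per class member):

```typescript
// Hypothetical FTSA class-exposure estimate. The dollar amounts track
// the statutory figures discussed above; everything else is illustrative.
const STATUTORY_DAMAGES = 500; // per violation
const TREBLE_DAMAGES = 1_500;  // per willful or knowing violation

function classExposure(classSize: number, willful: boolean): number {
  const perViolation = willful ? TREBLE_DAMAGES : STATUTORY_DAMAGES;
  return classSize * perViolation; // assumes one violation per member
}

console.log(classExposure(1_000, false)); // 500000  -> six figures
console.log(classExposure(1_000, true));  // 1500000 -> seven figures
```

And that assumes a single message per recipient; multiple texts to the same class member multiply the exposure accordingly.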

Currently, defendants are challenging the amendments to the FTSA on a myriad of grounds: that the Act is unconstitutional, that plaintiffs lack standing or actual harm, that the cases should not be treated as class actions, and others. While these legal challenges continue, companies across the nation that conduct business and marketing in Florida should ensure their consents and practices comply with this law.

If you have questions regarding the FTSA or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

Websites play a vital role for organizations. They facilitate communication with consumers, constituents, patients, employees, donors, and the general public. They project an organization’s image and promote goodwill, provide information about products and services, and allow for their purchase. Websites also inform investors about performance, enable job seekers to view and apply for open positions, and accept questions and comments from visitors to the site or app, among many other activities and functionalities. Because of this vital role, websites have become an increasing subject of regulation, making them a growing compliance concern.

Currently, many businesses are working to become compliant with the California Consumer Privacy Act (“CCPA”) which, if applicable, requires the conspicuous posting of a detailed privacy policy on a business’s website. But the CCPA is neither the first nor will it be the last compliance challenge for organizations that operate websites and other online services. A growing compliance burden has led to a wide range of operational and content requirements for websites. The push for CCPA compliance and the response to the flood of ADA accessibility litigation may cause more organizations to revisit their websites and, in the process, uncover a range of other issues that have crept in over the years.

What are some of these requirements?

AI – Artificial Intelligence. Organizations are increasingly leveraging automated decision-making tools to enhance their businesses in a range of areas, including employment. Needless to say, artificial intelligence (AI) and similar technologies, which power these tools, are being targeted for regulation. For example, the New York City Council passed a measure that subjects the use of automated decision-making tools to several requirements. One of those requirements is a “bias audit.” Employers that intend to utilize such a tool must first conduct a bias audit and must publish a summary of the results of that audit on their websites. We cover more about NYC Local Law 144 here.
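The core arithmetic such an audit summarizes is commonly described as comparing selection rates across demographic categories. Here is a minimal sketch of that impact-ratio math; the group names and counts are hypothetical:

```typescript
// A minimal sketch of the selection-rate / impact-ratio arithmetic a
// bias audit typically summarizes. Categories and numbers are invented.
interface CategoryStats {
  applicants: number; // candidates assessed by the tool
  selected: number;   // candidates advanced or selected
}

function impactRatios(groups: Record<string, CategoryStats>): Record<string, number> {
  const rates: Record<string, number> = {};
  for (const [name, g] of Object.entries(groups)) {
    rates[name] = g.selected / g.applicants; // selection rate per category
  }
  const maxRate = Math.max(...Object.values(rates));
  const ratios: Record<string, number> = {};
  for (const [name, rate] of Object.entries(rates)) {
    ratios[name] = rate / maxRate; // ratio vs. highest-rate category
  }
  return ratios;
}

console.log(impactRatios({
  groupA: { applicants: 200, selected: 60 }, // rate 0.30 -> ratio 1.00
  groupB: { applicants: 180, selected: 36 }, // rate 0.20 -> ratio ~0.67
}));
```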

ADA Accessibility. When people think about accommodating persons with disabilities, they often are drawn to situations where a person’s physical movement in a public place is impeded by a disability – stairs to get into a library or narrow doorways to use a bathroom. Indeed, Title III of the Americans with Disabilities Act grants disabled persons the right to full and equal enjoyment of the goods, services, facilities, privileges, advantages, or accommodations of any place of public accommodation. Although websites were not around when the ADA was enacted, they are now, and courts are applying ADA protections to those sites. The question is whether a website or application is accessible.

Although not yet adopted by the Department of Justice, which enforces Title III of the ADA, guidelines established by the Web Accessibility Initiative appear to be the more likely place courts will look to assess the accessibility of a website to which Title III applies. State and local governments have similar obligations under Title II of the ADA, and those entities might find guidance here.

HIPAA…and tracking technologies, pixels. Anyone making a first visit to a doctor’s office likely was greeted with a HIPAA “notice of privacy practices” and asked to sign an acknowledgement of receipt. Most covered health care providers have implemented this requirement, but may not be aware of the website requirement. HIPAA regulation 45 CFR 164.520(c)(3)(i) requires that covered entities maintaining a website with information about the entity’s customer services or benefits must prominently post the notice of privacy practices on the site and make the notice available electronically through the site.

Beyond the notice posting requirement, websites of HIPAA covered entities and business associates have operational issues to consider. In December 2022, the U.S. Department of Health and Human Services (HHS) Office for Civil Rights (OCR) issued a bulletin with guidance concerning the use of online tracking technologies by covered entities and business associates under HIPAA. The OCR Bulletin follows a significant uptick in litigation concerning these technologies in industries including but not limited to healthcare. For healthcare entities, the allegations relate to the sharing of patient data obtained from patient portals and websites. We do a deeper dive into this issue here.

COPPA. The Children’s Online Privacy Protection Act (COPPA) was enacted to give parents more control concerning the information websites collect about their children under 13. Regulated by the Federal Trade Commission (FTC), COPPA requires websites and online services covered by COPPA to post privacy policies, provide parents with direct notice of their information practices, and get verifiable consent from a parent or guardian before collecting personal information from children. COPPA applies to websites and online services directed to children under the age of 13 that collect personal information, and to sites and online services geared toward general audiences when they have “actual knowledge” they are collecting information from children under 13. Find out more about compliance here.

FTCA and more on tracking technologies. Speaking of the FTC, that agency also enforces the federal consumer protection laws, including section 5 of the Federal Trade Commission Act (FTCA), which prohibits unfair and deceptive trade practices affecting commerce. When companies tell consumers they will safeguard their personal information, including on their websites, the FTC requires that they live up to these promises. Businesses should review their website disclosures to ensure they are not describing privacy and security protections that are not actually in place.

Further to the issue of website tracking technologies noted above under HIPAA, the FTC took enforcement action against digital healthcare companies for sharing user information via third-party tracking pixels, which enable the collection of user data. However, the FTC’s new focus highlights that issues with pixel tracking are not only a concern for covered entities and business associates under HIPAA.

ACA – Transparency in Coverage. Pursuant to provisions in the Consolidated Appropriations Act, 2021, the Departments of Labor, Health and Human Services, and the Treasury issued regulations to implement the Transparency in Coverage Final Rules.  The Final Rules require certain health plans and health insurance issuers to post information about the cost to participants, beneficiaries, and enrollees for in-network and out-of-network healthcare services through machine-readable files posted on a public website.  The Final Rules for this requirement are effective for plan years beginning on or after January 1, 2022 (an additional requirement for disclosing information about pharmacy benefits and drug costs is delayed pending further guidance).

Comprehensive State Privacy Laws, including the CCPA. As mentioned above, a CCPA-covered business that maintains a website must post a privacy policy on its website through a conspicuous link on the home page using the word “privacy,” or on the download or landing page of a mobile application. That is not all. The website must also provide certain mechanisms for consumers (including employees and applicants) to contact the business about their CCPA rights, such as the right to require deletion of their personal information and the right to opt out of the sale of personal information. The latter must be provided through an interactive webform accessible via a clear and conspicuous link titled “Do Not Sell My Personal Information” or “Do Not Sell My Info.” Several of these requirements have been enhanced beginning in 2023 under the California Privacy Rights Act.

Since we originally published this post, five other states have enacted a similar data privacy framework – Colorado, Connecticut, Iowa, Utah, and Virginia. For organizations subject to those laws, additional work may be needed on their privacy policies to comply.

CalOPPA. Even if an organization is not subject to the CCPA, it still may be subject to the California Online Privacy Protection Act (CalOPPA). CalOPPA requires certain commercial operators of online services, including websites and mobile and social apps, which collect personally identifiable information from Californians to conspicuously post a privacy policy. Privacy policies should address how companies collect, use, and share personal information. Companies can face fines of up to $2,500 each time a non-compliant app is downloaded.

Delaware and Nevada. In 2016, Delaware became the second state to have an online privacy protection act, requiring similar disclosures to those under CalOPPA. Nevada enacted website privacy legislation of its own. First, like DelOPPA and CalOPPA, NRS 603A.340 requires “operators” to make a privacy notice reasonably accessible to consumers through its Internet website or online service. That notice must, among other things, identify the categories of covered information the operator collects through the site or online service about consumers who use or visit the site or service and the categories of third parties with whom the operator may share such covered information. In general, an operator is a person who: (i) owns or operates an Internet website or online service for commercial purposes; (ii) collects and maintains covered information from consumers who reside in this State and use or visit the Internet website or online service; and (iii) engages in any activity that constitutes sufficient nexus with this State, such as purposefully directing its activities toward Nevada. Effective October 1, 2019, Nevada added to its website regulation by requiring operators to designate a request address on their websites through which a consumer may submit a verified request to opt out of the sale of their personal information.

California’s Fair Chance Act. This is California’s version of the “ban the box” laws enacted in many states, which generally prohibit employers from asking job applicants about criminal convictions before making a conditional job offer. In California, the law imposes similar restrictions on employers with five or more employees. Why is this a website requirement?

Recently, the state’s Department of Fair Employment and Housing (DFEH) announced new efforts to identify and correct violations of the statute by using technology to conduct mass searches of online job advertisements for potentially prohibited statements. The DFEH deems blanket statements in job advertisements indicating that the employer will not consider anyone with a criminal history to be violative of the statute. In its press release, the DFEH states that in one day of review it found over 500 job advertisements with statements that violate the statute. The DFEH has released a new Fair Chance Toolkit that includes sample forms and guides, and employers should also consider reviewing the descriptions of job opportunities on their websites.

California Transparency in Supply Chains Act. California seeks to curb slavery and human trafficking by making consumers and businesses more aware that the goods and products they buy could be supporting the commission of these crimes. To do so, the Transparency in Supply Chains Act requires large retailers and manufacturers to provide consumers with information regarding their efforts to eradicate slavery and human trafficking from their supply chains. This information must be conspicuously provided on the company’s website (or provided in writing if it does not have a website). To be subject to the law, a company must (a) identify itself as a retail seller or manufacturer in its tax returns; (b) satisfy the legal requirements for “doing business” in California; and (c) have annual worldwide gross receipts exceeding $100,000,000. To assist with compliance, the state has published a Resource Guide and Frequently Asked Questions.

GDPR. In 2018, the European Union’s General Data Protection Regulation (GDPR) became effective and reached companies and organizations globally. In general, organizations subject to the GDPR that collect personal data on their websites must post a privacy policy on their websites setting forth the organization’s privacy practices.

Not-For-Profits, Donors, and Ratings. A donor’s decision to contribute to an organization is significantly affected by that organization’s reputation. To assist donors, several third-party rating sites, such as Charity Navigator, the Wise Giving Alliance, and CharityWatch, do much of the legwork for donors. They collect large amounts of data about these organizations, such as financial position, use of donated funds, corporate governance, transparency, and other practices. They obtain most of that data from the organizations’ Forms 990 and websites, where many organizations publish privacy policies.

Rating sites such as Charity Navigator base their ratings on comprehensive methodologies. A significant component of Charity Navigator’s rating, for example, relates to accountability and transparency, made up of 17 categories. A review of an organization’s website informs five of those 17 categories, namely (i) board members listed, (ii) key staff listed, (iii) audited financials published, (iv) Form 990 published, and (v) privacy policy content. Addressing some of these issues on an organization’s website could help boost its ratings and drive more contributions.

This is by no means an exhaustive list of the regulatory requirements that may apply to your website or online service. Organizations should regularly revisit their websites not just to add new functionality or fix broken links. They should have a process for ensuring that the sites or services meet the applicable regulatory requirements.

As we round the corner into the second quarter of 2023, the following enforcement dates for new or amended state data protection laws are quickly approaching.

  • New York City Local Law 144, Automated Employment Decision Tools: April 15, 2023.
  • California Consumer Privacy Act Regulations: July 1, 2023.
  • Colorado Privacy Act (CPA): July 1, 2023.
  • Connecticut Act Concerning Personal Data Privacy and Online Monitoring (CTDPA): July 1, 2023.
  • Virginia SB 1040 – Employer use of employee’s social security number: July 1, 2023.

Depending on the law, preparation may require reviewing data collection and use; updating notices, internal policies, and procedures; and conducting employee training.

If you have questions about data protection laws, cybersecurity, or related issues, contact a Jackson Lewis attorney to discuss.

Last week, a New York Times article discussed ChatGPT and AI’s “democratization of disinformation,” along with their potentially disruptive effects on upcoming political contests. Asking a chatbot powered by generative AI to produce a fundraising email is not the main concern, according to the article. Leveraging that technology to create and disseminate disinformation and deepfakes is. Some of the tactics described in the article, intended to further political goals, are unsettling in politics and well beyond it, including the workplace.

“Now any amateur with a laptop can manufacture the kinds of convincing sounds and images that were once the domain of the most sophisticated digital players. This democratization of disinformation is blurring the boundaries between fact and fake…”

Voice-cloning tools could be used, for example, to create convincing audio clips of political figures. One clip might convey a message that is consistent with the campaign’s platform, albeit never uttered by the candidate. Another clip might be produced to position the candidate in a bad light by suggesting the candidate was involved in illicit behavior or conveyed ideas damaging to the campaign, such as using racially-charged language. Either way, such clips would be misleading to the electorate. The same would be true of AI-generated images or videos.

“And as synthetic media gets more believable, the question becomes: What happens when people can no longer trust their own eyes and ears?”

It’s not hard to see how these same technologies, which are increasingly accessible to almost anyone and relatively easy to use, can create significant disruption and legal risk in workplaces across the country. Instead of creating a false narrative about a political figure, a worker disappointed in his annual review might generate and covertly disseminate a compromising “video” of his supervisor. The failure to investigate a convincing deepfake video could have substantial and unintended consequences. Of course, the creation of this kind of misinformation can also be directed at executives and the company as a whole.

Damaging disinformation and deepfakes are not the only risks posed by generative AI technologies. To better understand the kinds of risks an organization might face, assessing how workers are using ChatGPT and other similar generative AI technologies is a good first step. If a group of workers are like the millions of other people using ChatGPT, activities might include performing research, preparing draft communications such as the fundraising email in the NYT article discussed above, coding, and other tasks. Workers in different industries with different responsibilities likely will be approaching the technology with different needs and identifying a range of creative use cases.

Greater awareness about the uses of generative AI in an organization can help with policy development, but there are some policies that might make sense for most if not all applications of this technology.

Other workplace policies generally apply. A good example is harassment and nondiscrimination policies. As with an employee’s activity on social media, an employee’s use of ChatGPT is not shielded from existing policies on discrimination or harassment of others. Existing policies should apply.

Follow the application’s terms and understand its limitations. Using online resources for company business in violation of the terms of use of those resources could create legal exposure for organizations. Also, employees should be aware of the capabilities and limitations of the tools they are using. For instance, while ChatGPT may seem omniscient, it is not, and it may not be up to date – OpenAI notes “ChatGPT’s training data cuts off in 2021.” Employees can avoid a little embarrassment for the organization (and themselves) by knowing this kind of information.

Avoid impermissible sharing of data. ChatGPT is just that, a chat or conversation with OpenAI that employees at OpenAI can view:

Who can view my conversations?

As part of our commitment to safe and responsible AI, we review conversations to improve our systems and to ensure the content complies with our policies and safety requirements.

Employees should avoid sharing personal information as well as confidential information about the company or its customers without understanding the obligations that may apply. For example, there may be contractual obligations to customers of the organization prohibiting the sharing of their confidential information with third parties. Similar obligations could be established through website privacy policies or statements through which an organization has represented how it would share certain categories of information.

Establish a review process to avoid improper uses. Information generated through AI-powered tools and platforms may not be what it seems. It may be inaccurate, incomplete, biased, or it may infringe on another’s intellectual property rights. The organization may want to conduct a review of certain content obtained through the tool or platform to avoid subpar service to customers or an infringement lawsuit.

There is a lot to think about when considering the impacts of ChatGPT and other generative AI technologies. This includes carefully wading through political blather during the imminent election season. It also includes thinking about how to minimize risk related to these technologies in the workplace. Part of that can be accomplished through policy, but there are other steps to consider, such as employee training, monitoring utilization, etc.

On March 28, 2023, Iowa’s Governor signed the state’s new consumer data protection statute. Iowa joins California, Colorado, Connecticut, Utah, and Virginia in the ever-growing patchwork of consumer privacy laws across the country.

The new law takes effect on January 1, 2025.

Iowa’s consumer privacy law covers businesses that control or process personal data of at least 100,000 consumers in the state or derive 50% of their revenue from selling the data of more than 25,000 consumers. A consumer under Iowa’s statute is defined as a natural person who is a resident of the state and acting in an individual or household context. Individuals acting in a commercial or employment context are excluded.

Like other states’ comprehensive consumer privacy laws, the statute provides consumers with the right to access personal data being processed, to delete personal data, and to opt out of the sale of their personal data.

 For additional information on Iowa’s new privacy statute and other data privacy laws and regulations, please reach out to a member of our Privacy, Data, and Cybersecurity practice group.

The Federal Trade Commission (FTC) recently took enforcement action against digital healthcare companies for sharing user information via third-party tracking pixels, which enable the collection of user data. At the start of the year, the U.S. Health and Human Services Office of Civil Rights issued its own bulletin with guidance regarding tracking pixel technology for covered entities and business associates subject to the Health Insurance Portability and Accountability Act (HIPAA). However, the FTC’s new focus highlights that issues with pixel tracking are not only a concern for covered entities and business associates under HIPAA.

The following definition of pixel tracking from the FTC is helpful:

Tracking pixels have evolved from tiny, pixel-sized images on web pages for tracking purposes to include a broad range of HTML and JavaScript embedded in web sites (and email). Tracking pixels can be hidden from sight and can track and send all sorts of personal data such as how a user interacts with a web page including specific items a user has purchased or information users have typed within a form while on the site. Businesses often want to use them to track consumer behavior (pageviews, clicks, interactions with ads) and target ads to users who may be more likely to engage or purchase something based on that prior online behavior.
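To make that mechanism concrete, here is a minimal, hypothetical sketch (TypeScript, running in a browser) of how a pixel typically transmits data: the page requests a tiny image whose URL carries the visitor’s activity as query parameters. The tracker endpoint and fields below are invented for illustration:

```typescript
// Hypothetical tracking pixel: the "image" request is really a data
// transmission to a third party. Endpoint and fields are invented.
function firePixel(event: string, data: Record<string, string>): void {
  const params = new URLSearchParams({ event, ...data });
  const img = new Image(1, 1); // invisible 1x1 image
  // Setting src makes the browser send the request -- and the data --
  // to the tracker. No cookies are involved, which is why disabling
  // cookies does not stop pixel tracking.
  img.src = `https://tracker.example.com/pixel.gif?${params.toString()}`;
}

firePixel("form_submit", { page: "/appointments", search: "cardiology" });
```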

In its recent article about pixel tracking, the FTC discusses concerns about the practice:

  • Ubiquity and persistence. The FTC cited significant research indicating that thousands of the most visited websites have pixels potentially leaking personal information to third parties. And, unlike cookies, which can be disabled, “[p]ixel tracking can still occur even if cookies are disabled.”
  • Lack of clarity. The technology permits any kind of data to be shared and in some cases the providers of the technology are not sure what data is being shared. This can leave consumers in the dark about the categories of their personal information shared with third parties as a result of their activity on a website.
  • Steps to remove personal information may be ineffective. The agency notes that some attempts to appropriately remove personal information may be inadequate. As an example, while some pixel technologies attempt to “hash” personal information to scramble personally identifiable information, that scrambling can be reversed, as the sketch following this list illustrates.
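On that last point, a short sketch shows why hashing alone is weak de-identification: a third party holding a list of candidate identifiers can simply recompute the hashes and match them. This is a Node.js illustration with hypothetical email addresses:

```typescript
// Why hashed identifiers can be "reversed": hash functions are
// deterministic, so candidate values can be hashed and matched.
import { createHash } from "node:crypto";

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// What a pixel might transmit instead of the raw email address:
const transmitted = sha256("jane.doe@example.com");

// A recipient with a candidate list recovers the identity by matching:
const candidates = ["john.smith@example.com", "jane.doe@example.com"];
const match = candidates.find((email) => sha256(email) === transmitted);
console.log(match); // "jane.doe@example.com" -- identity recovered
```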

The concerns raised by the FTC are more general than just HIPAA and go to consumer privacy and data protection. For example, the FTC observed:

Companies using tracking pixels that impermissibly disclose an individual’s personal information (which may include health information) to third parties may be violating the FTC Act, the FTC’s Health Breach Notification Rule, the HIPAA Privacy, Security, and Breach Notification Rules, other state or federal statutes involving the disclosure of personal information, and your privacy promises to consumers.

As such, even companies outside of healthcare need to consider their use of pixel technology to ensure compliance with state and federal laws on the protection of consumer data. And, in particular, businesses need to consider what promises they are making to consumers, such as in their website privacy policies and terms of use.  

If you have questions about compliance with consumer privacy and data protection or related issues, contact a Jackson Lewis attorney to discuss.

On March 7, 2023, the Consumer Financial Protection Bureau (CFPB), the federal government agency charged with protecting consumers in the financial sector, and the National Labor Relations Board (NLRB), the federal government agency tasked with protecting private sector employees’ rights to engage in union organizing and other concerted activity, announced an information sharing agreement in order to better protect both consumers and workers.

The announcement indicated that shared areas of concern for the two agencies include employer surveillance and employer-driven debt. The CFPB hopes that sharing information with the NLRB will support the agency’s efforts to end “debt traps” in employment. Last year the CFPB began seeking information about risks consumers face from employers, such as workers who take on debt due to employer-mandated training and equipment.

In the announcement, the NLRB General Counsel echoed concerns previously addressed by the NLRB in a memorandum last year regarding employers’ use of electronic monitoring and automated management in the workplace and the potential chilling effect on organizing efforts.

The information-sharing agreement does not create a legally binding obligation between the CFPB and the NLRB and does not waive any existing statutory or regulatory requirements governing the disclosure of nonpublic information. Both agencies will still be required to protect the confidentiality of nonpublic information and personally identifiable information.

Businesses should take note of this new agreement and partnership as it could mean an increase in investigations and charges triggered by information shared between the agencies. The announcement specifically identified the “gig economy” as a focus of concern among the agencies.

If you have questions about CFPB and NLRB efforts or related issues, contact a Jackson Lewis attorney to discuss.

On March 15, 2023, the Iowa legislature unanimously passed Senate File 262, the Consumer Privacy Act, which relates to consumer data and privacy protection. Once signed by Iowa’s governor, the statute will become operative on January 1, 2025, and  Iowa will join California, Colorado, Connecticut, Utah, and Virginia in passing a comprehensive consumer privacy statute.

Covered Businesses

Covered businesses that must comply with the requirements of this new consumer privacy law are those entities that control or process personal data of at least 100,000 consumers in the state or derive 50% of their revenue from selling the data of more than 25,000 consumers.
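Expressed as a simple either/or test (for illustration only, using hypothetical field names and the thresholds as described above):

```typescript
// Illustrative applicability check based on the thresholds described
// above. Field names are hypothetical; this is not legal advice logic.
interface BusinessProfile {
  iowaConsumersProcessed: number;      // consumers whose data is controlled/processed
  iowaConsumersWhoseDataIsSold: number;
  revenueShareFromDataSales: number;   // fraction of gross revenue, 0..1
}

function likelyCovered(b: BusinessProfile): boolean {
  const volumeTest = b.iowaConsumersProcessed >= 100_000;
  const revenueTest =
    b.revenueShareFromDataSales >= 0.5 &&
    b.iowaConsumersWhoseDataIsSold > 25_000;
  return volumeTest || revenueTest;
}

console.log(likelyCovered({
  iowaConsumersProcessed: 40_000,
  iowaConsumersWhoseDataIsSold: 30_000,
  revenueShareFromDataSales: 0.6,
})); // true -- meets the revenue-based test
```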

Consumer Defined

Under the statute, a consumer is defined as a natural person who is a resident of Iowa and acting only in an individual or household context. The definition of consumer excludes individuals acting in a commercial or an employment context.

Personal Data

The Act applies to Personal Data, which means information linked or reasonably linkable to an identified individual or an identifiable individual.

Consumer Data Rights

 The statute provides consumers with the following rights:

  • To confirm that covered businesses are processing the consumer’s personal data and access that personal data.
  • To delete personal data provided by the consumer.
  • To port the personal data.
  • To obtain a copy of the consumer’s personal data with certain limitations.
  • To opt out of processing for the sale of personal data or targeted advertising.

Covered Business Obligations

Covered businesses under the statute must comply with requests by consumers to exercise their rights as follows:

  • Respond to consumer requests without undue delay, but in all cases within 90 days of receipt of the request. The response period may be extended by 45 days when reasonably necessary, based on the complexity of the request and the number of consumer requests.
  • If the covered business declines to take action, it must inform the consumer.
  • Information provided in response to a consumer request must be provided to the consumer free of charge, up to twice annually per consumer.

In addition to complying with consumer requests, covered businesses must:

  • Adopt reasonable administrative, technical, and physical data security practices to protect the confidentiality, integrity, and accessibility of personal data.
  • Protect sensitive data, a broad category under the statute that includes racial information, biometric data, and even geolocation, by not processing such data without the consumer having been presented clear notice and an opportunity to opt out of such processing.
  • Avoid processing data in a way that violates state or federal laws prohibiting unlawful discrimination against a consumer. Moreover, a covered business may not discriminate against a consumer for exercising rights under the statute, including by denying goods or services or changing prices or rates.
  • Contractually obligate processors to adhere to the business’s instructions, where the business is a controller, and implement appropriate technical and organizational measures to assist the controller in meeting its obligations under the Act.  
  • Develop a privacy notice and a secure and reliable means for consumers to submit requests to exercise their rights.

Enforcement

The statute does not include a private right of action; the state’s attorney general has exclusive authority to enforce its provisions.

For additional information on Iowa’s new privacy statute and other data privacy laws and regulations, please reach out to a member of our Privacy, Data, and Cybersecurity practice group.

While the California Privacy Protection Agency (CPPA) only recently approved revised amended regulations pertaining to the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA), it is already on to its next rulemaking.

On February 10, 2023, the CPPA issued an invitation for preliminary comments on proposed rulemaking pertaining to cybersecurity audits, risk assessments, and automated decision-making. The invitation includes some specific questions the CPPA would like to receive comments on, but comments are not limited to those areas of inquiry.

The comment period will be open until March 27, 2023. Comments can be submitted:

Electronic: Comments may be submitted electronically to regulations@cppa.ca.gov. Please include “PR 02-2023” in the subject line.

Mail: California Privacy Protection Agency Attn: Kevin Sabo 2101 Arena Blvd Sacramento, CA 95834

The questions posed by the CPPA appear to be attempting to harmonize the efforts of the CPPA with laws other than the CCPA and CPRA that apply to covered businesses. There are also specific questions regarding the European Data Protection Board’s Guidelines on Data Protection Impact Assessment, as well as Colorado’s Privacy Act, suggesting that the CPPA is looking more widely than mere consistency with California law.

If you have questions on the CPPA rulemaking or related issues, contact a Jackson Lewis attorney to discuss.

While programs such as artificial intelligence bots that can write poetry or develop art are capturing people’s interest, administrative agencies across the country are concerned about how similar technology, including algorithms and automated decision-making, may affect employees and consumers alike. Agencies from the Equal Employment Opportunity Commission (EEOC) to the New York City Department of Consumer and Worker Protection are issuing guidance and regulations about AI and related technologies.

The latest administrative body to join the fray is the Colorado Division of Insurance. At the start of February, the Division issued a draft of proposed regulations pertaining to algorithm and predictive model governance. The purpose of the regulation is to establish requirements for a life insurance company’s internal governance and risk management necessary to ensure that the company’s use of consumer data and information, as well as algorithms and predictive models, does not result in unfair discrimination. This is a similar concern voiced in much of the guidance and regulations around the country.

The Division of Insurance’s proposed regulations include a governance and risk management framework, documentation mandates, and reporting requirements. The regulations would require life insurers that use external consumer data and information sources (ECDIS), as well as algorithms and predictive models using ECDIS, to establish a governance and risk management framework that ensures the ECDIS is credible and its use does not result in unfair discrimination.

That framework includes components that are similar to what we are seeing in other efforts to regulate AI and related technologies. These include:

  • Documenting governing principles aimed at transparency and accountability.
  • Board of directors and senior management’s responsibility and accountability for strategy and use of ECDIS and the algorithms and predictive models using ECDIS.
  • Establishing written policies and processes for design, development, testing, deployment, use, and ongoing monitoring.
  • Maintaining a process for addressing consumer complaints and inquiries, one that provides sufficiently clear information to enable consumers to take meaningful action in response to adverse decisions.

Additionally, as with other regulations in this area, insurers will be required to document their use of ECDIS and algorithms and predictive models using ECDIS, and report to the Division of Insurance progress toward compliance with the applicable regulatory requirements.

The type of regulation proposed by the Division of Insurance is going to proliferate as algorithms and automated decision-making tools become more and more common. As such, businesses exploring these technologies should consider putting similar measures and principles in place – e.g., governance, documentation, accountability, notice, and responsibility – during the design, development, testing, deployment, use, and monitoring phases.