Earlier this year, North Dakota’s Governor signed HB 1127,  which introduces new compliance obligations for financial corporations operating in North Dakota. This new law will take effect on August 1, 2025.

The law applies to certain “financial corporations.” Under the law, financial corporation means all entities regulated by the Department of Financial Institutions, excluding credit unions, as well as banks and similar institutions organized under North Dakota or U.S. law. Entities covered by the law include collection agencies, money brokers, money transmitters, mortgage loan originators, and trust companies.

Covered financial corporations must implement a WISP. HB 1127 requires the implementation of comprehensive, written information security programs tailored to each organization’s size, complexity, and the sensitivity of customer information they handle. The law mandates specific program elements, including risk assessments, designated security personnel, implementation of technical safeguards, regular testing, incident response planning, and prompt notification of security events to authorities, discussed further below.

The law defines “information security program” as “the administrative, technical, or physical safeguards a financial corporation uses to access, collect, distribute, process, protect, store, use, transmit, dispose of, or otherwise handle customer information.” 

HB 1127 also outlines several elements required for the programs, which include, among other things:

  • Designated Security Leadership: The information security program must denote a qualified individual responsible for implementing, overseeing, and enforcing the program.
  • Risk Assessment: foundational to the information security program is the written risk assessment, which identifies reasonably foreseeable internal and external risks to the security, confidentiality, and integrity of customer information.
  • Safeguards: The corporation must design and implement safeguards to control and mitigate the risks identified through the risk assessment. This should include a periodic review of the corporation’s data retention policy.
  • Testing and Monitoring: the above safeguards’ key controls, systems, and procedures must be regularly tested or otherwise monitored.
  • Incident Response Planning: The corporation must establish a written incident response plan designed to promptly respond to and recover from any security event materially affecting the confidentiality, integrity, or availability of customer information the corporation controls.
  • Notification Requirements: the corporation must notify the state’s Commissioner of Financial Institutions of a “notification event” – defined as “the acquisition of unencrypted customer information without the authorization of the individual to which the information pertains.” For notification events implicating five hundred or more consumers, the corporation must notify the Commissioner as soon as possible, but no later than forty-five days after the discovery of the event.
  • Oversee Service Providers: The corporation must take reasonable steps to select and retain service providers capable of maintaining the safeguards of customer information. Moreover, the corporation must periodically assess the service providers based on the risk they present.
  • Annual Report to Board: Must designate a qualified individual to report in writing at least annually to the corporation’s board of directors or similar on the overall status of the information security program and material matters related to the program, including risk assessment.

If you have questions about compliance with these new requirements or related issues, contact a Jackson Lewis attorney to discuss.

The U.S. Senate voted early Tuesday to remove a proposed moratorium from the federal budget bill. This outcome marks a pivotal moment in the ongoing debate over artificial intelligence regulation in the United States.

The AI moratorium, initially proposed as part of the One Big Beautiful Bill Act, proposed a 10-year moratorium on the enforcement of AI-related legislation by states or other entities. Specifically, it was designed to restrict the regulation of AI models, systems, or automated decision systems involved in interstate commerce.

The provision faced strong bipartisan opposition, with critics warning that it would leave consumers vulnerable to AI-related harm and undermine state-level consumer protection laws.

After extensive negotiations and public outcry, an amendment to strip the provision from the budget bill was introduced. The Senate voted overwhelmingly in favor of the amendment, thus ending the moratorium effort.

Support for the removal was partially based on states having the ability to enforce their own AI regulations, particularly in areas such as robocalls, deepfakes, and autonomous vehicles. Currently, several state and local jurisdictions have AI protections planned or already in place.

These developments also reflect a growing recognition of the complexities and potential risks associated with AI technologies. Both the federal government and states will likely be grappling with balancing regulation with innovation when it comes to AI for the foreseeable future.

Jackson Lewis will continue to monitor legislative developments related to AI and related technologies.

The Senate recently voting 99-1 to remove a 10-year moratorium on state regulation of AI says something about the impact of AI, but also its challenges.

A new MIT study, presented at the ACM Conference on Fairness, Accountability and Transparency, demonstrates that large language models (LLMs) used in healthcare can be surprisingly “brittle.” As discussed in Medical Economies, the researchers evaluated more than 6,700 clinical scenarios and introduced nine types of minor stylistic variations—typos, extra spaces, informal tone, slang, dramatic language, or even removing gender markers. Across all variants, these small changes altered AI treatment recommendations in clinically significant ways, with a 7‑9% increase in the AI advising patients to self‑manage rather than seek care—even when medical content remained identical.

“These models are often trained and tested on medical exam questions but then used in tasks that are pretty far from that, like evaluating the severity of a clinical case. There is still so much about LLMs that we don’t know,” Abinitha Gourabathina, Lead author of the study

Notably, these misinterpretations disproportionately impacted women and other vulnerable groups—even when gender cues were stripped from the input. In contrast, according to the study’s findings, human clinicians remained unaffected by such stylistic changes in deciding whether care was needed.

Why This Matters For Healthcare and Beyond

Healthcare Providers

  • Patient safety risk: If patients’ informal language or typos unintentionally trigger incorrect triage outcomes, patients may be more likely to engage in self-care when in-person care is recommended, and serious conditions may go unflagged.
  • Health equity concerns: The disproportionately poorer outcomes linked to female or vulnerable‐group messages highlight amplified bias through sensitivity to style, not substance.

Professional Services

  • Faulty legal, compliance, or regulatory advice: The same kind of “brittleness” that the MIT study suggests could compromise patient care, may also lead to varied and inaccurate and/or incomplete legal or compliance recommendations.

Human Resources

  • Compliance and bias risks: There are many use cases for LLMs in the employment arena which also rely on prompts from employees. And, there are lots of concerns about bias. Without sufficient training, auditing, and governance, they too may fall prey to the same kind of limitations witnessed in the MIT study.  

Governance, Testing & Accountability

Healthcare surely is not the only industry grappling with the new challenges LLMs present. Establishing a governance framework is critical, and organizations might consider the following measures as part of that framework:

  1. Pre‑deployment auditing: LLMs should undergo rigorous testing across demographic subgroups and linguistic variants—not just idealized text—but also informal, error‐ridden inputs.
  2. Prompt perturbation testing: Simulate typos, tone shifts, missing or added markers—even swapping gender pronouns—to check for output stability.
  3. Human validation oversight: At minimum, have subject matter experts (SMEs) review AI outputs, especially where high‐stakes decisioning is involved.
  4. Developer scrutiny and certification: Organizations deploying LLMs should understand and assess to the extent necessary and appropriate efforts made by their developers to address these and other issues before adoption and implementation.

The MIT research reveals that small changes in input—like a typo or a slangy phrase—can meaningfully skew outputs in potentially high‑stakes contexts. For users of LLMs, including healthcare providers, lawyers, and HR professionals, that “brittleness” may heighten risks to safety, accuracy, fairness, and compliance. Strong governance is needed. And with the moratorium on state regulation of AI removed from the Big Beautiful Bill, should its removal hold, organizations are like to see more attention given to governance, risk, and compliance requirements as legislation develops.  

Explained in more detail below, under the recent vacatur of most of the HIPAA Privacy Rule to Support Reproductive Health Care Privacy (the “Reproductive Health Rule”):

  • The broad prohibitions on disclosing protected health information (“PHI”) relating to reproductive health for law enforcement or investigatory purposes are vacated nationally.
  • The attestation requirement that was included as part of the Reproductive Health Rule no longer applies to requests for such information.

Note, however, the more recent U.S. Supreme Court decision in Trump v. CASA, Inc. (S. Ct. 2025) may limit the scope of the Texas District Court’s decision.

What This Means for HIPAA-Covered Entities

  • Providers and plans should:
    • Revisit their policies, notices, and related materials to determine what  changes should be made. They also should be looking at the activities of business associates acting on their behalf (and consider any recent changes to business associate agreements).
    • Audit previous attestation workflows, disclosures, and programming.
    • Communicate any implemented changes.
    • Re-train staff who handle PHI or receive subpoenas to ensure workflows align with and policy revisions.
  • HIPAA’s core Privacy Rule remains unchanged, so protected uses/disclosures (e.g., treatment, payment, health oversight or law enforcement under the usual standards) still apply.
  • Compliance with substance use disorder (“SUD”)-related Notice of Privacy Practices updates, per the CARES Act, must continue.

State Law Dynamics

  • Remember that more stringent protections for health information under state law that conflicts with HIPAA’s Privacy Rule are not preempted by HIPAA.
  • For example, many states have enacted their own enhanced protections, like limits on geofencing or disclosure of reproductive health data.
  • As a result, providers and plans must make sure that their policies comply not only with HIPAA’s Privacy Rule, but with applicable state law requirements.

2024 Reproductive Health Rule

In 2024, HIPAA covered entities, including healthcare providers and health plans, began taking steps to comply with the Department of Health and Human Services 2024 Reproductive Health Rule under the HIPAA Privacy Rule. See our prior articles on these regulations:  New HIPAA Final Rule Imposes Added Protections for Reproductive Health Care Privacy | Workplace Privacy, Data Management & Security Report and HIPAA Final Rule For Reproductive Health Care Privacy with December 23, 2024, Compliance Deadline | Benefits Law Advisor

Fast forward to June 18, 2025, when U.S. District Judge Matthew Kacsmaryk (N.D. Tex.), issued a nationwide injunction in Purl v. HHS, vacating most of the Reproductive Health Rule. Key holdings include:

  • HHS exceeded its statutory authority, overstepped via the “major‑questions doctrine,” and unlawfully redefined terms like “person” and “public health.”
  • The agency impermissibly intruded on state authority to enforce laws, including child abuse reporting.

The essence of the ruling is that it blocks most of the protections in the Reproductive Health Rule, including the attestation requirement. However, unrelated SUD-related Notice of Privacy Practices provisions are not affected and remain in force.

Although the court determined that vacatur is the default remedy for unlawful agency actions challenged under the Administrative Procedures Act, it also noted that the Supreme Court would need to address vacatur and universal injunctions. On June 27, 2025, the Supreme Court did just that in Trump v. CASA, Inc., when it held that universal injunctions likely exceed the equitable authority that Congress gave to federal courts. In Casa, the Court granted the government’s applications for a partial stay of the injunctions at issue, but only to the extent that the injunctions were broader than necessary to provide complete relief to each plaintiff with standing to sue. Stay tuned for further developments on how Casa will impact the nation-wide vacatur discussed in this post.

Update as of July 1, 2025.

The federal budget bill titled One Big Beautiful Bill aims to unharness artificial intelligence (AI) development in the U.S. The current draft of the bill, which has passed the House, proposes a 10-year moratorium on the enforcement of AI-related legislation by states or other entities. Specifically, it restricts the regulation of AI models, systems, or automated decision systems involved in interstate commerce.

Supporters believe the moratorium will encourage innovation and prevent a fragmented landscape of state AI laws, which are already emerging. However, opponents express concerns about the potential impact on existing state laws that regulate issues such as deepfakes and discrimination in automated hiring processes.

The ultimate outcome of this provision of the federal budget bill remains uncertain. However, if the budget bill passes with the moratorium, then it will take effect upon enactment.

Meanwhile, states continue to propose their own legislation to regulate AI in the workplace and other areas. Jackson Lewis will continue to monitor this and other related legislative developments.

For businesses subject to the California Consumer Privacy Act (CCPA), a compliance step often overlooked is the requirement to annually update the businesses online privacy policy. Under Cal. Civ. Code § 1798.130(a)(5), CCPA-covered businesses must among other things update their online privacy policies at least once every 12 months. Note that CCPA regulations establish content requirements for online privacy policies, one of which is that the policy must include “the date the privacy policy was last updated.” See 11 CCR § 7011(e)(4).

As businesses continue to grow, evolve, adopt new technologies, or otherwise make online and offline changes in their business, practices, and/or operations, CCPA required privacy policies may no longer accurately or completely reflect the collection and processing of personal information. Consider, for example, the adoption of emerging technologies, such as so-called “artificial intelligence” tools. These tools may be collecting, inferring, or processing personal information in ways that were not contemplated when preparing the organization’s last privacy policy update.

The business also may have service providers that collect and process personal information on behalf of the business in ways that are different than they did when they began providing services to the business.

Simply put: If your business (or its service providers) has adopted any new technologies or otherwise changed how it collects or processes personal information, your privacy policy may need an update.

Practical Action Items for Businesses

Here are some steps businesses can take to comply with the annual privacy policy review and update requirement under the CCPA:

  • Inventory Personal Information
    Reassess what categories of personal information your organization collects, processes, sells, and shares. Consider whether new categories—such as biometric, geolocation, or video —have been added.
  • Review Data Use Practices
    Confirm whether your uses of personal information have changed since the last policy update. This includes whether you are profiling, targeting, or automating decisions based on the data.
  • Assess adoption of new technologies, such as AI and New Tech Tools
    Has your business adopted any new technologies or systems, such as AI applications? Examples may include:
    • AI notetakers, transcription, or summarization tools for use in meetings (e.g., Otter, Fireflies)
    • AI used for chatbots, personalized recommendations, or hiring assessments
  • Evaluate Third Parties and Service Providers
    Are you sharing or selling information to new third parties? Has your use of service providers changed, or have service providers changed their practices around the collection or processing of personal information?
  • Review Your Consumer Rights Mechanisms
    Are the methods for consumers to submit access, deletion, correction, or opt-out requests clearly stated and functioning properly?

These are only a few of the potential recent developments that may drive changes in an existing privacy policy. There may be additional considerations for businesses in certain industries and departments within those businesses that should be considered as well. Here are a few examples:

Retail Businesses

  • Loyalty programs collecting purchase history and predictive analytics data.
  • More advanced in-store cameras and mobile apps collecting biometric or geolocation information.
  • AI-driven customer service bots that gather interaction data.

Law Firms

  • Use of AI notetakers or transcription tools during client calls.
  • Remote collaboration tools that collect device or location data.
  • Marketing platforms that profile client interests based on website use.

HR Departments (Across All Industries)

  • AI tools used for resume screening and candidate profiling.
  • Digital onboarding platforms collecting sensitive identity data.
  • Employee productivity and monitoring software that tracks usage, productivity, or location.

The online privacy policy is not just a static compliance document—it’s a dynamic reflection of your organization’s data privacy practices. As technologies evolve and regulations expand, taking time once a year to reassess and update your privacy disclosures is not only a legal obligation in California but a strategic risk management step. And, while we have focused on the CCPA in this article, inaccurate or incomplete online privacy policies can elevate compliance and litigation risks under other laws, including the Federal Trade Commission Act and state protections against deceptive and unfair business practices.

On June 20, 2025, Texas Governor Greg Abbott signed SB 2610 into law, joining a growing number of states that aim to incentivize sound cybersecurity practices through legislative safe harbors. Modeled on laws in states like Ohio and Utah, the new Texas statute provides that certain businesses that “demonstrate[] that at the time of the breach the entity implemented and maintained a cybersecurity program” meeting the requirements in the new law may be shielded from exemplary (punitive) damages in the event of a data breach lawsuit.

This development comes amid a clear uptick in data breach class action litigation across the country. Notably, plaintiffs’ attorneys are no longer just targeting large organizations following breaches that expose millions of records. Recent cases have been filed against small and midsize businesses, even when the breach affected relatively few individuals.

What the Texas Law Does

SB 2610 erects a shield from liability to protect certain businesses (those under 250 employees) from exemplary damages in a tort action resulting from a data breach. That shield applies only if the business demonstrates that at the time of the breach the entity implemented and maintained a cybersecurity program that meets certain requirements, which may include compliance with a recognized framework (e.g., NIST, ISO/IEC 27001). This is not immunity from all liability—it applies only to punitive damages—but it can be a significant limitation on financial exposure.

This is a carrot, not a stick. The law does not impose new cybersecurity obligations or penalties. Instead, it encourages proactive investment in cybersecurity by offering meaningful protection when incidents occur despite those efforts.

Why the Size of the Entity Isn’t the Whole Story

A unique aspect of the Texas law is that it scales cybersecurity expectations in part based on business size. Specifically, for businesses with fewer than 20 employees, a “reasonable” cybersecurity program may mean something different than it does for one between 100 and 250 employees. But here’s the problem: Many businesses with small employee counts handle large volumes of sensitive data.

Consider:

  • A 10 employee law firm managing thousands of client files, including Social Security numbers and health records;
  • A small dental practice storing patient health histories and billing information;
  • A title or insurance agency processing mortgage, escrow, or policy documents for hundreds of customers each month.

These entities may employ fewer than 20 people but process exponentially more personal information than a 250-employee manufacturing plant. In this context, determining what qualifies as “reasonable” cybersecurity must focus on data risk, not just employee headcount.

Takeaways for Small and Midsize Organizations

  • Don’t assume you’re too small to be a target: Plaintiffs’ firms are increasingly focused on any breach with clear damages and weak safeguards—regardless of business size.
  • Adopt a framework: Implementing a recognized cybersecurity framework not only enhances your defense posture but could also help limit damages in litigation.
  • Document, document, document: The presumption under SB 2610 is available only if the business can demonstrate it created and followed a written cybersecurity program at the time of the breach.
  • Review annually: As threat landscapes evolve, your security program must adapt. Static programs are unlikely to satisfy the “reasonable conformity” standard over time.

Final Thought

Texas’s new law reinforces a growing national trend: states are rewarding—not just punishing—cybersecurity efforts. But the law also raises the bar for smaller businesses that may have historically viewed cybersecurity as a lower priority. If your organization handles personal data, no matter how many employees you have, it’s time to treat cybersecurity as a critical business function—and an essential legal shield.

Montana recently amended its privacy law through Senate Bill 297, effective October 1, 2025, strengthening consumer protections and requiring businesses to revisit their privacy policies that apply to citizens of Montana. Importantly, it lowered the threshold for applicability to persons and businesses who control or process the personal data of 25,000 or more consumers (previously 50,000), unless the controller uses that data solely for completing payments. For those who derive more than 25% of gross revenue from the sale of personal data, the threshold is now 15,000 or more consumers (previously 25,000).

With the amendments, nonprofits are no longer exempt unless they are set up to detect and prevent insurance fraud. Insurers are now similarly exempt.

When a consumer requests confirmation that a controller is processing their data, the controller can no longer disclose but must identify possession of: (1) social security numbers, (2) ID numbers, (3) financial account numbers, (4) health insurance or medical identification numbers, (5) passwords, security questions, or answers, or (6) biometric data.

Privacy notices must now include: (1) personal data categories, (2) controller’s purpose in possessing personal data, (3) categories controller sells or shares with third parties, (4) categories of third parties, (5) contact information for the controller, (6) explanation of rights and how to exercise them, and (7) the date privacy notice was last updated. Privacy notices must be accessible to and usable to people with disabilities and available in each language in which the controller provides a product or service. Any material changes to the controller’s privacy notice or practices require notices to affected consumers and the opportunity to withdraw consent. Notices need not be Montana-specific, but controllers must conspicuously post them on websites, in mobile applications, or through whatever medium the controller interacts with customers.

The amendments further clarified information the attorney general must publicly provide, including an online mechanism for consumers to file complaints. Further, the attorney general may now issue civil investigative demands and need not issue any notice of violation or provide a 60-day period for the controller to correct the violation.

Artificial Intelligence (AI) is transforming businesses—automating tasks, powering analytics, and reshaping customer interactions. But like any powerful tool, AI is a double-edged sword. While some adopt AI for protection, attackers are using it to scale and intensify cybercrime. Here’s a high-level discussion at emerging AI-powered cyber risks in 2025—and steps organizations can take to defend.

AI-Generated Phishing & Social Engineering

Cybercriminals now use generative AI to craft near-perfect phishing messages—complete with accurate tone, logos, and language—making them hard to distinguish from real communications . Voice cloning tools enable “deepfake” calls from executives, while deepfake video can simulate someone giving fraudulent instructions.

Thanks to AI, according to Tech Advisors, phishing attacks are skyrocketing—phishing surged 202% in late 2024, and over 80% of phishing emails now incorporate AI, with nearly 80% of recipients opening them. These messages are bypassing filters and fooling employees.

Adaptive AI-Malware & Autonomous Attacks

It is not just the threat actors but the AI itself that drives the attack. According to Cyber Defense Magazine reporting:  

Compared to the traditional process of cyber-attacks, the attacks driven by AI have the capability to automatically learn, adapt, and develop strategies with a minimum number of human interventions. These attacks proactively utilize the algorithms of machine learning, natural language processing, and deep learning models. They leverage these algorithms in the process of determining and analyzing issues or vulnerabilities, avoiding security and detection systems, and developing phishing campaigns that are believable.

As a result, attacks that once took days now unfold in minutes, and detection technology struggles to keep up, permitting faster, smarter strikes to slip through traditional defenses.

Attacks Against AI Models Themselves

Cyberattacks are not limited to business email compromises designed to effect fraudulent transfers or to demand a ransom payment in order to suppress sensitive and compromising personal information. Attackers are going after AI systems themselves. Techniques include:

  • Data poisoning – adding harmful or misleading data into AI training sets, leading to flawed outputs or missed threats.
  • Prompt injection – embedding malicious instructions in user inputs to hijack AI behavior.
  • Model theft/inversion – extracting proprietary data or reconstructing sensitive training datasets.

Compromised AI can lead to skipped fraud alerts, leaked sensitive data, or disclosure of confidential corporate information. Guidance from NIST, Adversarial Machine Learning A Taxonomy and Terminology of Attacks and Mitigations, digs into these quite a bit more, and outlines helpful mitigation measures.

Deepfakes & Identity Fraud

Deepfake audio and video are being used to mimic executives or trusted contacts, instructing staff to transfer funds, disclose passwords, or bypass security protocols.

Deepfakes have exploded—some reports indicate a 3,000% increase in deepfake fraud activity. These attacks can erode trust, fuel financial crime, and disrupt decision-making.

Supply Chain & Third-Party Attacks

AI accelerates supply chain attacks, enabling automated scanning and compromise of vendor infrastructures. Attackers can breach a small provider and rapidly move across interconnected systems. These ripple-effect attacks can disrupt entire industries and critical infrastructure, far beyond the initial target. We have seen these effects with more traditional supply chain cyberattacks. AI will only amplify these attacks.  

Enhancing Cyber Resilience, Including Against AI Risks

Here’s some suggestions for stepping up defenses and mitigating risk:

  1. Enhance Phishing Training for AI-level deception
    Employees should recognize not just misspellings, but hyper-realistic phishing, voice calls, and video
 impersonations. Simulations should evolve to reflect current AI tactics.
  2. Inventory, vet, and govern AI systems
    Know which AI platforms you use—especially third-party tools. Vet them for data protection, model integrity, and update protocols. Keep a detailed registry and check vendor security practices. Relying on a vendor’s SOC report simply may not be sufficient, particularly is not read carefully and in context.
  3. Validate AI inputs and monitor outputs
    Check training data for poisoning. Test and stress AI models to spot vulnerabilities. Use filters and anomaly detection to flag suspicious inputs or outputs.
  4. Use AI to defend against AI
    Deploy AI-driven defensive tools—like behavior-based detection, anomaly hunting, and automated response platforms—so you react in real time.
  5. Adopt zero trust and multi-factor authentication (MFA)
    Require authentication for every access, limit internal privileges, and verify every step—even when actions appear internal.
  6. Plan for AI-targeted incidents
    Update your incident response plan with scenarios like model poisoning, deepfake impersonation, or AI-driven malware. Include legal, communications, and other relevant stakeholders in your response teams.
  7. Share intelligence and collaborate
    Tap into threat intelligence communities, “Information Sharing and Analysis Centers” or “ISACs”, to share and receive knowledge of emerging AI threats.

Organizations that can adapt to a rapidly changing threat landscape will be better position to defend against these emerging attack vectors and mitigate harm.

A recent breach involving Indian fintech company Kirana Pro serves as a reminder to organizations worldwide: even the most sophisticated cybersecurity technology cannot make up for poor administrative data security hygiene.

According to a June 7 article in India Today, KiranaPro suffered a massive data wipe affecting critical business information and customer data. The company’s CEO believes the incident was likely the result of a disgruntled former employee, though he has not ruled out the possibility of an external hack, according to reporting. TechCrunch explained:

The company confirmed it did not remove the employee’s access to its data and GitHub account following his departure. “Employee offboarding was not being handled properly because there was no full-time HR,” KiranaPro’s chief technology officer, Saurav Kumar, confirmed to TechCrunch.

Unfortunately, this is not a uniquely Indian problem. Globally, organizations invest heavily in technical safeguards—firewalls, multi-factor authentication, encryption, endpoint detection, and more. These tools are essential, but not sufficient.

The Silent Risk of Inactive Accounts

One of the most common (and preventable) vectors for insider incidents or credential abuse is failure to promptly deactivate system access when an employee departs. Whether termination is amicable or not, if a former employee retains credentials to email, cloud storage, or enterprise software, the organization is vulnerable. These accounts may be exploited intentionally (as suspected in the KiranaPro case) or unintentionally if credentials are stolen or phished later.

Some organizations assume their IT department is handling these terminations automatically. Others rely on inconsistent handoffs between HR, legal, and IT teams. Either way, failure to follow a formal offboarding checklist—and verify deactivation—may be a systemic weakness, not a fluke.

It’s Not Just About Tech—It’s About Governance

This breach illustrates the point that information security is as much about governance and process as it is about technology. Managing who has access to what systems, when, and why is a core component of security frameworks such as NIST, ISO 27001, and the CIS Controls. In fact, user access management—including timely revocation of access upon employee separation—is a foundational expectation in every major cybersecurity risk assessment.

Organizations should implement the following best practices:

  1. Establish a formal offboarding procedure. Involve HR, IT, and Legal to ensure immediate deactivation of all accounts upon separation.
  2. Automate user provisioning and deprovisioning where possible, using identity and access management (IAM) tools.
  3. Maintain a system of record for all access rights. Periodically audit active accounts and reconcile them against current employees and vendors.
  4. Train supervisors and HR personnel to notify IT or security teams immediately upon termination or resignation. There also may be cases where monitoring an employee’s system activity in anticipation of termination may be prudent.

The Takeaway

Wherever your company does business and regardless of industry, the fundamentals are the same: a lapse in basic access control can cause as much damage as a ransomware attack. The KiranaPro incident is a timely cautionary tale. Organizations must view cybersecurity not only as a technical discipline but as an enterprise-wide responsibility.