As Data Privacy Day 2026 approaches, organizations face an inflection point in privacy, artificial intelligence, and cybersecurity compliance. The pace of technological adoption, particularly of AI tools, continues to outstrip legal, governance, and risk frameworks. At the same time, regulators, plaintiffs, and businesses are increasingly focused on how data is collected, used, monitored, and safeguarded.

We’re pleased to announce the publication of a comprehensive resource on the Jackson Lewis website:

Navigating the California Consumer Privacy Act: 30+ Essential FAQs for Covered Businesses, Including Clarifying Regulations Effective 1.1.26.

With California’s updated CCPA regulations now in effect as of January 1, 2026, businesses face expanded compliance requirements in several critical areas.

As artificial intelligence (AI) becomes more widely used in hiring and employment decisions, Illinois has taken a significant step to regulate how employers must inform workers about AI’s use. Effective January 1, 2026, House Bill 3773 amended the Illinois Human Rights Act (IHRA) to require, among other things, employer notice when AI influences or facilitates

As we explored in Part 1 of this series, AI-enabled smart glasses are rapidly evolving from niche wearables into powerful tools with broad workplace appeal — but their innovative capabilities bring equally significant legal and privacy concerns. Modern smart glasses blend high-resolution cameras, always-on microphones, and real-time AI assistants into a hands-free wearable that can

As artificial intelligence (AI), particularly generative AI, becomes increasingly woven into our professional and personal lives—from personalized travel itineraries to reviewing resumes to summarizing investigation notes and reports—questions about who or what controls our data and how it’s used are ever present. AI systems survive and thrive on information and that intersection of AI and

According to Cybersecurity Dive, artificial intelligence is no longer experimental technology: more than 70% of S&P 500 companies now identify AI as a material risk in their public disclosures, per a recent report from The Conference Board. In 2023, that figure was just 12%.

The article reports that major companies are no longer

On July 23, 2025, the White House released America’s AI Action Plan, a comprehensive national strategy designed to strengthen the United States’ position in artificial intelligence through investment in innovation, infrastructure, and international diplomacy and security. The plan, issued in response to Executive Order 14179, reflects a pro-innovation approach to AI policy—one that aims

Artificial Intelligence (AI) is transforming businesses—automating tasks, powering analytics, and reshaping customer interactions. But like any powerful tool, AI is a double-edged sword. While some organizations adopt AI for protection, attackers are using it to scale and intensify cybercrime. Here’s a high-level look at emerging AI-powered cyber risks in 2025—and steps organizations can take to defend against them.

In today’s hybrid and remote work environment, organizations are increasingly turning to digital employee management platforms that promise productivity insights, compliance enforcement, and even behavioral analytics. These tools—offered by a growing number of vendors—can monitor everything from application usage and website visits to keystrokes, idle time, and screen recordings. Some go further, offering video capture

As the integration of technology in the workplace accelerates, so do the challenges related to privacy, cybersecurity, and the ethical use of artificial intelligence (AI). Human resource professionals and in-house counsel must navigate a rapidly evolving landscape of legal and regulatory requirements. This National Privacy Day, it’s crucial to spotlight emerging issues in workplace technology