Artificial Intelligence (AI) is transforming businesses—automating tasks, powering analytics, and reshaping customer interactions. But like any powerful tool, AI is a double-edged sword. While some organizations adopt AI for protection, attackers are using it to scale and intensify cybercrime. Here is a high-level look at emerging AI-powered cyber risks in 2025, and the steps organizations can take to defend themselves.

In today’s hybrid and remote work environment, organizations are increasingly turning to digital employee management platforms that promise productivity insights, compliance enforcement, and even behavioral analytics. These tools—offered by a growing number of vendors—can monitor everything from application usage and website visits to keystrokes, idle time, and screen recordings. Some go further, offering video capture

As the integration of technology in the workplace accelerates, so do the challenges related to privacy, cybersecurity, and the ethical use of artificial intelligence (AI). Human resource professionals and in-house counsel must navigate a rapidly evolving landscape of legal and regulatory requirements. This National Privacy Day, it’s crucial to spotlight emerging issues in workplace technology

If you are looking for a high-level summary of California laws regulating artificial intelligence (AI), check out the two legal advisories issued by California Attorney General Rob Bonta. The first advisory informs consumers and entities of their rights and obligations under the state’s consumer protection, civil rights, competition, and data privacy laws. The

This month, the New Jersey Attorney General’s office (NJAG) added to nationwide efforts to regulate artificial intelligence technologies, or at least to clarify how existing law applies to them, in this case the NJ Law Against Discrimination, N.J.S.A. § 10:5-1 et seq. (LAD). In short, the NJAG’s guidance states:

the LAD applies to algorithmic discrimination

If there is one thing artificial intelligence (AI) systems need, it is data, and lots of it, as training AI is essential for achieving success in a given use case. A recent investigation by Australia’s privacy regulator into the country’s largest medical imaging provider, I-MED Radiology Network, illustrates concerns about the use of medical data to

One of our recent posts discussed the uptick in AI risks reported in SEC filings, as analyzed by Arize AI. There, we highlighted the importance of strong governance for mitigating some of these risks, but we did not address the specific risks identified in those SEC filings. We discuss them briefly here, as they are risks

While the craze over generative AI and ChatGPT, and the fear of professionals landing on breadlines in the imminent future, may have subsided a bit, many concerns remain about how best to use and manage AI. Of course, these concerns are not specific to Fortune 500 companies.

A recent story in CIODive reports