
Worried about DeepSeek and your Privacy?
Yesterday, DeepSeek’s open-source R1 model overtook ChatGPT in downloads on the Apple App Store. As R1 became the top-rated free application, DeepSeek had to limit registrations due to numerous large-scale cyberattacks. DeepSeek’s Privacy Policy states: “We store the information we collect in secure servers located in the People’s Republic of China.” Many people have raised concerns about DeepSeek’s privacy practices.

Interim Final Rule on AI Diffusion
On Monday morning, U.S. tech stocks tumbled after an announcement from China: DeepSeek, a relatively unknown AI startup, unveiled an open-source ChatGPT-like model called R1. According to DeepSeek, the company spent less than $6 million on computing power for its base model, hundreds of millions of dollars less than U.S.-based companies such as OpenAI and Google reportedly spend. For years, the United States has implemented measures to restrict the supply of advanced AI chips to other countries, particularly China. Newly announced regulations also seek to restrict the export of model weights. On January 13, 2025, the Bureau of Industry and Security (“BIS”) under the Biden-Harris Administration released an Interim Final Rule on Artificial Intelligence Diffusion. The rule: (i) revises controls on advanced AI chips (a specialized type of integrated circuit optimized for the high computational demands of AI algorithms) and creates a new licensing framework, and (ii) implements a licensing regime for exporting “model weights” (the parameters that encode an AI system’s core).

New Changes to the Children’s Online Privacy Protection Act (COPPA)
The FTC finalized its changes to the Children’s Online Privacy Protection Rule, consistent with the Children’s Online Privacy Protection Act. Online operators that direct their sites or services to children will have to make certain important changes by the end of Q1.

How to Avoid a $100 Million Mistake
The most reliable way to create an AI model that produces correct, trustworthy outputs (thus fostering customers’ use and trust) is simply to build a custom model from scratch. Unfortunately, that’s a bit like saying that the best way to buy the optimal wine for your restaurant is to fly to Italy or France and start your own vineyard: an option that exists in theory but is out of budget for most. In response, many Chief Information Officers (CIOs) of leading companies are scrambling to find cost-effective ways to “upskill” broad foundational AI models so that AI works in their business context. We discuss different methods of leveraging foundational AI models.

Top 5 AI Legal Risks for 2025 (And What To Do About Them)
We have identified the top five AI legal risks for 2025 and how to mitigate them, guidance clients should use when developing an AI-Contracting Playbook. By proactively understanding and addressing ownership of input data (Risk #1) and outputs (Risk #2), bias (Risk #3), accountability for harms (Risk #4), and the uncertainty of navigating a patchwork of applicable laws (Risk #5), you can better position your organization to unlock the benefits of this digital transformation.

Bias in AI Systems
AI models carry a significant risk of producing biased outputs. Bias in AI models generally manifests as either cognitive bias or computational bias. Each type of bias carries different harms, and it is important to understand what (if any) steps your AI vendor takes to mitigate these risks.

Are your employees licensing your company’s confidential information without realizing it?
With the increased prevalence and adoption of AI tools, including as add-ons to many existing programs that enterprises use, companies face serious problems with their confidential information being collected and accessed. And no, in some cases, paying extra for an enterprise-level license will not save you.

Who’s Afraid of the Big Bad Gen-AI Model?
Developers use artists’ works to train their generative AI models. Those models can then create new visual works, even in the style of a specific artist. So, is this copyright infringement, fair use, or something else entirely? Here, AMBART LLC examines the claims and motions to dismiss in Andersen v. Stability AI and explains not just the law but also how these AI models are trained.

SEC Imposes Almost $7 Million in Penalties on Four Tech Companies with Half-Truth Cybersecurity Disclosures
The SEC recently charged four companies (Unisys Corp., Avaya Holdings Corp., Check Point Software Technologies Ltd., and Mimecast Limited) with making misleading cybersecurity disclosures, in enforcement actions totaling $6.985 million in penalties. The critical issue here is not that these companies suffered cyberattacks (breaches today are almost impossible to prevent) but that their cybersecurity disclosures downplayed the impact of the attacks. Each company learned in 2020 or 2021 that the threat actor behind the SolarWinds Orion cyberattack had accessed its systems without authorization, yet each minimized the impact of the breach and/or the number of files and credentials that had been accessed.

Click-to-Cancel or Be Cancelled? It’s no longer a question.
FTC's New Click-to-Cancel Rule for Negative Option Marketing: Understand how the FTC's new Click-to-Cancel rule impacts negative option marketing, automatic renewals, and B2B transactions. Find out key compliance requirements for your business.

SEC Takes a Stand Against Artificial Intelligence (“AI”) Washing: Two Investment Advisers That Exaggerated AI to Attract Investors Settled for Making Deceptive Claims
With their promise of revolutionizing investment strategies and decision-making processes, it’s no wonder that AI-powered models are attractive to both companies and investors. In fact, certain investment advisers realized that they could benefit from AI without even possessing the technology; simply pretending that their investment strategies were driven by a proprietary deep-learning model was enough to create buzz and attract new clients.

The European Union’s AI Act and What It Means for U.S. Businesses
The EU's Artificial Intelligence Act, which came into force on August 1, 2024, is a comprehensive law designed to regulate AI technologies within the EU, focusing on ensuring AI systems are safe, transparent, and respect fundamental rights. It categorizes AI systems based on risk levels—unacceptable, high-risk, and low-risk—prohibiting those applications it considers unacceptable and imposing strict requirements on higher-risk categories. The Act also emphasizes AI literacy, the promotion of trustworthy AI, and the need for clear responsibilities across the AI lifecycle, from development to deployment.

The FCC's Proposed New AI Disclosure Rules: What They Mean for Political Advertising
As Generative AI tools grow increasingly capable of creating realistic images and voices, the Federal Communications Commission (FCC) proposed new rules to enhance transparency in political advertising, particularly regarding AI-generated content. Instead of outright banning AI-generated political ads, however, the rules would require disclosures from cable TV, radio, and some online platforms.
(Image Accessibility Description: Feature photo shows a person wearing a suit with a red tie and a mask, resembling Donald Trump's face, as the person points toward the camera)