This Privacy Regulation Roundup summarizes the latest major global privacy regulatory developments, announcements, and changes. This report is updated monthly. For each relevant regulatory activity, you can find actionable Info-Tech analyst insights and links to useful Info-Tech research that can assist you with becoming compliant.
Anthropic to Pay $1.5 Billion for Noncompliant AI Development
Type: Judicial Decision
Announced: September 2025
Affected Region: USA
Summary: AI company Anthropic has agreed to establish a $1.5 billion settlement fund to compensate authors whose copyrighted books were used without permission to train its AI models. The agreement stems from a lawsuit filed by three authors who alleged that Anthropic knowingly ingested millions of pirated books from illegal sources such as Library Genesis and Pirate Library Mirror, in addition to scanning physical books it had purchased.
While the company successfully defended part of the case by arguing that book scanning and transformative model output qualified as fair use, the court found it had violated copyright law by knowingly using pirated content. The proposed settlement values each pirated book at $3,000, and Anthropic also agreed to destroy the infringing materials.
Legal analysts view the settlement as a landmark moment. It is the largest copyright recovery to date in the US and a likely precedent for similar lawsuits facing other AI firms. AI companies may increasingly create "slush funds" to manage legal exposure from copyright claims and misuse of training data.
Analyst Perspective: The proposed settlement is both a watershed moment and a necessary course correction. While it's encouraging to see Anthropic acknowledge the issue and move toward compensation, what's more telling is the precedent this sets: AI development can't continue on a foundation of unlicensed data use, especially when that use is deliberate. The industry needs to shift from "train now, settle later" to responsible data stewardship from the outset.
At the same time, the settlement structure suggests a path forward that balances redress with operational pragmatism. For companies operating at the frontier of AI, this is a governance challenge as much as it is a legal one.
It's time for AI firms to treat intellectual property with the same seriousness they apply to data privacy or security. Compensation frameworks and licensing agreements should not be afterthoughts or courtroom outcomes. They need to be part of the architecture of any responsible AI program.
This case should not be viewed as a one-off liability event, but as the beginning of a more mature phase in AI development, one where innovation, ethics, and legality must align by design, not just by settlement.
Analyst: John Donovan, Principal 91制片厂 Director – Infrastructure and Operations
More Reading:
- Source Material:
- Related Info-Tech 91制片厂:
Balancing Privacy in AI for Cybersecurity
Affected Region: All Regions
Summary: AI in organizational cybersecurity applications has become a core component of security programs and initiatives rather than a supporting tool. The more sophisticated AI becomes, the more data it needs to improve its functionality, including constant processing of real-time data such as metadata and internal communications. This growing role of AI in cybersecurity has allowed organizations to shift to a more proactive approach to better secure their crown jewels. It also, however, increases the scale and potential sensitivity of data collection and processing, which raises concerns for employee privacy.
From behavior telemetry to biometrics, sensitive data is constantly being collected for AI usage, which raises the question of whether too much data is being captured and which privacy issues should be considered. Key privacy concerns stem from potential algorithmic biases: biased data can produce flawed output, which raises further ethical implications if marginalized groups are unjustly favored or targeted. Other key challenges include potential noncompliance with data protection laws such as the GDPR and CCPA due to the lack of reasoning provided with the output data.
With the growing usage of AI technologies and the emphasis on privacy protections, industry professionals advocate for innovation that bakes privacy in at conception. From federated learning, which allows models to be trained locally on each device, to differential privacy, which introduces controlled noise into data sets, privacy-enhancing technologies support privacy-preserving AI development. These techniques demonstrate the effort put in place to ensure the privacy and ethics of AI are considered within the cybersecurity discipline.
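To make the differential privacy idea concrete, here is a minimal sketch of the classic Laplace mechanism applied to a counting query. The function name and parameters are illustrative, not from any specific library: a count has sensitivity 1, so adding Laplace noise with scale 1/ε yields an ε-differentially private answer.

```python
import math
import random

def dp_count(values, predicate, epsilon):
    """Differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1, so Laplace noise with
    scale 1/epsilon provides epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    # Sample Laplace(0, 1/epsilon) by inverse CDF from Uniform(-0.5, 0.5).
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# A weak epsilon (small value) adds more noise and stronger privacy;
# a large epsilon returns an answer close to the true count.
noisy = dp_count(range(100), lambda v: v % 2 == 0, epsilon=1000.0)
```

The same trade-off drives real deployments: the noise scale is tuned so that aggregate security analytics stay useful while any single employee's records are statistically masked.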
Analyst Perspective: The growing digital ecosystem has shown the importance of balancing security and privacy to support innovation. The emergence of AI technologies, specifically within the domain of cybersecurity, shows how intelligent systems can strengthen an organization's security posture. However, just as proper cybersecurity guardrails should be in place, appropriate privacy measures should be implemented to adhere to data protection laws and address the ethical implications of AI adoption in cybersecurity. To that end, explainable AI is a growing field committed to ensuring decisions made by AI are interpretable for users and auditors when assessing why a certain decision was made.
Further, measures such as developing a risk map of AI security use cases will help organizations determine the risks to data confidentiality and integrity and govern the use of their solutions to maximize benefits and minimize risks. This supports AI compliance efforts by ensuring your AI use cases, including cybersecurity, align with global standards, and it enables leadership to make informed decisions on their compliance investments. Implementing these measures will not only strengthen the security and privacy of your AI systems but also continuously support organizational goals for innovation.
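An AI risk map can start as a simple inventory. The sketch below is a hypothetical structure (the use cases, data categories, and ratings are invented for illustration) showing how cataloging each AI security use case against confidentiality and integrity risk makes the priority-review list fall out mechanically:

```python
from dataclasses import dataclass

@dataclass
class AIUseCaseRisk:
    """One row of a hypothetical AI risk map for security use cases."""
    use_case: str
    data_categories: list       # e.g. behavior telemetry, biometrics
    confidentiality_risk: str   # "low" / "medium" / "high"
    integrity_risk: str

risk_map = [
    AIUseCaseRisk("anomaly detection on employee telemetry",
                  ["metadata", "internal communications"], "high", "medium"),
    AIUseCaseRisk("malware classification",
                  ["file samples"], "low", "high"),
]

# Surface the use cases that warrant priority privacy review.
high_risk = [r.use_case for r in risk_map
             if r.confidentiality_risk == "high"]
```

In practice the same inventory can be extended with columns for legal basis, retention period, and the applicable standard (e.g. GDPR article or internal policy), turning a governance exercise into an auditable artifact.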
Analyst: Ahmad Jowhar, 91制片厂 Analyst – Security & Privacy
More Reading:
- Source Material:
- Related Info-Tech 91制片厂:
Ontario Privacy Commissioner Issues First AMP
Type: Binding Order
Announced: August 2025
Affected Region: Canada
Summary: On 28 August 2025, the Office of the Information and Privacy Commissioner of Ontario (IPC) issued an administrative monetary penalty (AMP) to a doctor and their clinic for the first time. AMPs were introduced in January 2024 under the Personal Health Information Protection Act (PHIPA) and can fine individuals up to $50,000 and organizations up to $500,000. Aiming to deter individuals and organizations from benefiting economically from disregarding the law, the overall goals of AMPs are to encourage compliance with PHIPA and to penalize entities based on the severity of the violations, the extent of harm, and the compliance history of the offender(s).
The IPC fined the doctor $5,000 and their clinic an additional $7,500 for violating patient privacy. The doctor was found to have used their access to health records at a hospital to extract newborn and parent information, which was then used to solicit business for the clinic where the doctor also worked. Two families filed complaints with the hospital, resulting in the revocation of the doctor's privileges and containment of the breach, and later leading to the IPC's investigation and fines.
In this case, the hospital was also investigated and found to have taken "reasonable steps to protect personal health information (PHI) in its custody and control in these circumstances." This was achieved through the implementation of appropriate controls to protect PHI, including privacy policies, procedures, training, and confidentiality agreements.
Analyst Perspective: Some privacy professionals argue that taking individuals and organizations that violate PHIPA to court is deterrent enough to change their practices. However, I believe that to truly affect the practices of all entities entrusted with personal data such as PHI, we must make an example of those found to be noncompliant. In my opinion, AMPs are the most effective mechanism for delivering a financial and reputational impact to the violating entities. They also serve as a reminder of the consequences, and a deterrent, to other individuals and organizations responsible for the confidentiality of PHI.
With respect to the AMP handed out to the doctor and clinic in this decision, my initial thought was that the amounts were relatively low. However, after considering the scope of the breach, the fact that it was quickly contained, and the overall negative financial and reputational impacts likely felt by the doctor and clinic, I find myself in agreement with this AMP.
Analyst: Mike Brown, Advisory Director – Security & Privacy
More Reading:
- Source Material: ,
- Related Info-Tech 91制片厂:
The Forgotten Privacy Gap: Lessons From the Salesloft Breach
Type: Article
Published: September 2025
Affected Region: USA
Summary: Google鈥檚 Threat Intelligence Group (GTIG), in collaboration with Mandiant, has identified a large-scale data theft campaign involving compromised OAuth tokens linked to Salesloft Drift integrations. OAuth tokens grant access to enterprise systems and the personal and customer data housed inside them. The attack, which occurred between August 8 and 18, 2025, affected as many as 700 Salesloft customers, with data exfiltration observed from Salesforce, AWS, Snowflake, and other platforms.
The attackers leveraged OAuth tokens stored in Drift integrations to infiltrate Salesforce environments, access sensitive objects like Cases, Accounts, and Opportunities, and pivot to harvest additional credentials and cloud secrets. From a privacy perspective, this data can be repurposed for identity theft, fraud, surveillance, and more.
Google responded by revoking tokens, disabling integrations, and notifying admins. Salesloft has removed Drift from AppExchange and worked with Salesforce to deactivate impacted tokens. While Salesforce has downplayed the scale, citing only a "small number" of affected customers, Google recommends all organizations with Drift integrations treat their tokens as compromised.
Further complicating the issue is the stealthy behavior of the attackers, who deleted logs to evade detection, making forensic analysis more difficult. Experts are calling this incident a wake-up call to the risks of API drift, poor token hygiene, and overly permissive third-party integrations in enterprise environments.
Analyst Perspective: This incident is a reminder that privacy cannot be decoupled from identity and access management. The industry has known for years that over-trusted integrations and stale tokens are open doors, and this attack exploited both. Every OAuth token represents a consent proxy, silently granting ongoing access to personal data. Effective privacy governance therefore requires tightening token lifecycles, auditing integration scopes, and embedding privacy principles into API design.
Google did the right thing by escalating the threat beyond Salesforce's initial messaging. Downplaying these events to "a small number of customers" ignores how token sprawl and integration creep can silently compromise entire environments.
This incident underscores what we should already know: Integration risk is identity risk, and token management is core security, not optional hygiene. Vendors can talk about platform integrity all day, but if their APIs and authorization flows are wide open to abuse, their products are part of the problem.
Analyst: John Donovan, Principal 91制片厂 Director – Infrastructure and Operations
More Reading:
- Source Material:
- Related Info-Tech 91制片厂:
Digital Sovereignty: From Risk to Resilience
Type: Article
Published: August 2025
Affected Region: All Regions
Summary: When a country runs mission-critical defense systems on China-based cloud platforms, those systems fall under Chinese law. That is not a hypothetical IT issue. Data sovereignty ensures data is stored, processed, and accessed under the laws of the originating nation, while AI sovereignty secures access to computing power, data sets, and models without overreliance on foreign providers.
Digital sovereignty, a broader concept, encompasses not just data and AI but the entire digital ecosystem. It can be broken into three interrelated dimensions:
- Safeguarding the autonomy and security of a country鈥檚 digital infrastructure
- Maintaining a nation鈥檚 economic independence in the digital domain
- Enforcing a country鈥檚 laws and values in the digital space
Algorithms are moderating speech, determining eligibility for services, and shaping what information citizens see. Without oversight, sovereignty could be exercised through AI without democratic accountability. Opaque foreign access to data is already eroding trust in public, corporate, and social layers.
However, governments do not control digital sovereignty alone. The private sector holds immense power over data and technology, and thus over the levers of sovereignty including defense and intelligence. Therefore, achieving meaningful digital, data, and AI sovereignty will require concerted action from both public sector leaders and private sector executives.
Analyst Perspective: When governments embed sovereignty clauses into contracts and local providers step up with solutions, the country gains agency. When systems are locked into foreign architectures, sovereignty slips away. Leaders and executives who perceive sovereignty as a competitive differentiator are building resilience.
AI raises the stakes, and without clear guardrails accountability can drift into 鈥渂lack-box鈥 algorithmic systems. As such, baking a trust layer into emerging technology development is more important than ever. A risk lens is a useful starting point:
- Treat sovereignty as a category alongside cybersecurity and privacy.
- Embed trust and accountability mechanisms.
- Architect solutions for compliance, security, and resilience.
- Plan for geopolitical risks.
- Frame sovereignty as a part of your value proposition.
Both public and private sector leaders must treat sovereignty as a design principle. It needs to be woven into strategy, operations, and culture.
Analyst: Safayat Moahamad, 91制片厂 Director – Security & Privacy
More Reading:
- Source Material: ,
- Related Info-Tech 91制片厂:
If you have a question or would like to receive these monthly briefings via email, submit a request here.