
Privacy Regulation Roundup

Author(s): Safayat Moahamad, John Donovan, Horia Rosian

This Privacy Regulation Roundup summarizes the latest major global privacy regulatory developments, announcements, and changes. This report is updated monthly. For each relevant regulatory activity, you can find actionable Info-Tech analyst insights and links to useful Info-Tech research that can assist you with becoming compliant.

Privacy and AI in Advertising: Start With Ethics and Trust

Type: Article

Published: May 2025

Affected Region: USA

Summary: Artificial intelligence is transforming advertising through enhanced personalization, targeting, and efficiency. The IAPP's 2025 AI Governance Report notes that 69% of marketers already use AI, with nearly one in five dedicating over 40% of their budget to AI-driven campaigns. However, this rapid integration raises critical issues relating to data privacy, algorithmic bias, hallucinations, and consumer manipulation.

To address these risks, industry leaders like Salesforce and PwC are embedding guardrails such as hallucination detection, human-in-the-loop oversight, and ethical AI training. Regulatory bodies and frameworks, such as the Federal Trade Commission and the NIST AI Risk Management Framework, are also guiding responsible use. Technical solutions like Google's Dataset Search and IBM's AI Fairness 360 support fairness and transparency. Governance practices also help organizations uphold consumer trust.
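As a concrete illustration of the kind of fairness check that toolkits such as IBM's AI Fairness 360 automate at scale, here is a minimal, self-contained sketch of the disparate impact ratio, a common metric for comparing favorable-outcome rates across demographic groups. The targeting data and group labels below are invented for the example:

```python
# Minimal fairness check: disparate impact ratio between two groups.
# The data is invented for illustration; toolkits like IBM's
# AI Fairness 360 compute this metric (and many others) at scale.

def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    outcomes: list of 1 (favorable, e.g. ad shown) / 0 per person
    groups:   parallel list of group labels
    privileged: label of the privileged group
    A common rule of thumb flags ratios below 0.8 for review.
    """
    def rate(keep):
        selected = [o for o, g in zip(outcomes, groups) if keep(g)]
        return sum(selected) / len(selected)

    priv_rate = rate(lambda g: g == privileged)
    unpriv_rate = rate(lambda g: g != privileged)
    return unpriv_rate / priv_rate

# Hypothetical targeting decisions for 10 users in two groups.
outcomes = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(outcomes, groups, privileged="A")
print(f"disparate impact ratio: {ratio:.2f}")  # prints 0.25 here
```

A ratio this far below the conventional 0.8 threshold is exactly the sort of signal that should trigger the human review and auditing the article describes.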

Companies are advised to limit AI use in high-risk decisions (e.g. employment or credit) and ensure clarity around when and how AI is used. Transparency, human oversight, and proactive auditing are emphasized as essential to aligning AI use with societal values.

From large multinationals to self-regulated bodies, organizations are embedding risk-based policies, human checkpoints, and transparent communication strategies to address not just AI鈥檚 technical challenges, but also the public鈥檚 growing concern over when and how they鈥檙e interacting with intelligent systems.

Analyst Perspective: As AI becomes deeply embedded in advertising, the message is clear: the race to innovate must be tempered with responsibility. What's striking about this article is the broad recognition that technical sophistication alone isn't enough; ethical deployment must be systemic, deliberate, and human-centered.

For me, the key takeaway is the shift from "can we do this with AI?" to "should we?"

It's no longer acceptable to treat bias or hallucinations as edge cases; they are operational risks that demand continuous auditing and multidisciplinary governance. That means bringing marketers, data scientists, legal, and compliance together throughout the AI lifecycle.
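One simple form such a human checkpoint can take is a gate that routes AI-generated ad copy to a reviewer whenever it touches a high-risk category or the model's confidence is low. The sketch below is illustrative only; the topic list, threshold, and function name are all invented for the example:

```python
# Sketch of a human-in-the-loop checkpoint for AI-generated ad copy.
# Policy terms and the confidence threshold are invented for illustration.

SENSITIVE_TOPICS = {"credit", "employment", "housing", "health"}

def needs_human_review(ad_text, model_confidence, threshold=0.9):
    """Return True when a human should approve the ad before it runs.

    Flags low-confidence generations and any copy touching categories
    that should be kept out of fully automated decisions.
    """
    words = {w.strip(".,!?").lower() for w in ad_text.split()}
    touches_sensitive = bool(words & SENSITIVE_TOPICS)
    return touches_sensitive or model_confidence < threshold

print(needs_human_review("Boost your credit score today!", 0.97))  # True
print(needs_human_review("Summer sale on sneakers", 0.95))         # False
```

In practice the sensitive-category check would come from a classifier and a governance policy rather than a keyword list, but the control flow, with automation halting until a human signs off, is the point.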

An important aspect here is trust. Consumers are asking more questions about how their data is being used, and vague answers will not be sufficient. Transparency and meaningful consent must be designed into the user journey, not retrofitted.

I see a call to maturity. The tools and frameworks are available, from TensorFlow's Fairness Indicators to privacy impact assessments. What is needed now is leadership. Responsible AI isn't just a compliance checkbox; it's a brand asset and a reputational differentiator.

Analyst: John Donovan, Principal 91制片厂 Director – Infrastructure and Operations


Neurotechnology and the EU AI Act

Type: Article

Published: May 2025

Affected Region: EU

Summary: In the context of neurotechnologies, the EU Artificial Intelligence Act (AI Act) applies to AI systems used or marketed in the EU, irrespective of the provider's location. It includes exceptions for scientific research, military, and personal uses. AI systems are defined as machine-based systems that operate autonomously and adaptively, generating outputs such as predictions, content, or decisions.

The Act prohibits AI systems that use subliminal techniques to distort human behavior and subvert free choice, particularly in neurotech like brain-computer interfaces. The use of Emotion Recognition Systems (ERS) in workplaces or educational institutions is banned, except for medical or safety reasons. ERS in other environments is classified as high-risk.

The Act prohibits systems that categorize individuals based on biometric data to infer sensitive information like race, political opinions, or sexual orientation.
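The ERS tiering summarized above reduces to a small decision rule: prohibited in workplaces and education (with a medical/safety carve-out), high-risk elsewhere. The sketch below encodes this article's restatement of the rules, not the legal text itself; the context labels and function name are simplified inventions for illustration:

```python
# Sketch of the summarized EU AI Act tiering for Emotion Recognition
# Systems (ERS). This encodes the article's restatement, not the legal
# text; context labels are simplified for illustration.

def classify_ers(context, purpose=None):
    """Classify an ERS deployment under the summarized rules.

    - Workplace or education: prohibited, unless the purpose is
      medical or safety-related.
    - Any other context: treated as high-risk.
    """
    if context in {"workplace", "education"}:
        if purpose in {"medical", "safety"}:
            return "permitted (medical/safety exception)"
        return "prohibited"
    return "high-risk"

print(classify_ers("workplace"))                     # prohibited
print(classify_ers("education", purpose="medical"))  # exception applies
print(classify_ers("retail"))                        # high-risk
```

Real compliance work would of course turn on the Act's definitions and guidance rather than a lookup like this, but the sketch shows how sharply the context of deployment, not the technology itself, drives the classification.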

Analyst Perspective: The EU AI Act represents a significant regulatory step in addressing the ethical and privacy concerns associated with neurotechnologies. The Act aims to balance the rapid advancements in AI and neurotech with the need to protect individuals' fundamental privacy rights. This is crucial as neurotech has the potential to deeply influence human behavior and decision-making.

By prohibiting subliminal techniques and high-risk ERS, the Act addresses ethical concerns about manipulating human behavior without consent. This is particularly important in contexts like neuro-marketing and workplace monitoring. The broad definitions and high-risk classifications may pose challenges for providers and deployers in determining compliance. Clear guidelines and continuous dialogue between regulators and industry stakeholders will be essential.

As neurotechnologies evolve, the regulatory framework will need to adapt. Ongoing research and real-world applications will likely influence future amendments to the Act. As it stands today, the EU AI Act is a proactive measure to ensure that the development and deployment of neurotech is aligned with ethical standards and respect for human rights.

In a rapidly evolving technological landscape, the EU AI Act serves as a reminder that ethical considerations must keep pace with innovation to ensure a future where technology enhances, rather than undermines, human dignity.

Analyst: Horia Rosian, Director – Cybersecurity & Privacy, Workshops



How China Regulates AI to Accelerate Innovation

Type: Article

Announced: May 2025

Affected Region: China

Summary: China is stepping up its artificial intelligence governance with a three-month campaign led by the Cyberspace Administration of China (CAC). The initiative targets key compliance failures in AI development and deployment, such as unregistered algorithms, the use of unauthorized data for training, lack of content oversight, and inadequate labeling. In a second phase, the campaign will focus on harmful AI-generated content, including disinformation, impersonation, and risks to minors.

In parallel, China's Ministry of Education has introduced new AI education guidelines for primary and secondary schools. These guidelines aim to build foundational AI knowledge, ethical understanding, and cybersecurity awareness in younger generations.

Meanwhile, Hong Kong's Privacy Commissioner for Personal Data (PCPD) is promoting AI compliance through active oversight and guidance. A recent compliance check on 60 organizations found growing AI adoption, especially in customer service, with over half of these organizations handling personal data via AI. The PCPD is encouraging organizations to establish robust AI governance strategies, conduct risk assessments, and align AI use with Hong Kong's privacy laws.

These developments signal a maturing approach to AI compliance in the APAC region, one that combines regulation, education, and corporate accountability to manage AI risks and foster responsible innovation.

Analyst Perspective: China's AI governance approach represents a compelling paradox, challenging conventional wisdom that regulation inherently stifles innovation. Despite enforcing some of the world's strictest AI rules, China continues to lead in AI development, patent filings, and rapid deployment of foundational models. This may be an indication that regulation is not a brake but a blueprint for accelerating innovation.

Unlike Western regimes, where AI laws are often delayed, China's regulators provide specific rules, swift enforcement, and clear boundaries. Mandatory pre-launch security reviews, criminal penalties, and app suspension authority create a culture of accountability, as well as one of regulatory certainty. This enables developers and enterprises to innovate confidently within well-defined guardrails.

The inclusion of AI education in schools and privacy-focused oversight further reflects a comprehensive, tiered governance strategy. From grassroots education to enterprise compliance audits, China is embedding responsible AI use across society, not just within industry.

This approach from China suggests that well-structured, targeted, and enforceable regulation can act as a catalyst for innovation. Ultimately, the efficacy of AI legislation may depend less on how light or heavy-handed it is, and more on how predictable, focused, and swiftly executed it is.

Analyst: Safayat Moahamad, 91制片厂 Director – Security & Privacy




If you have a question or would like to receive these monthly briefings via email, submit a request here.

