This Privacy Regulation Roundup summarizes the latest major global privacy regulatory developments, announcements, and changes. This report is updated monthly. For each relevant regulatory activity, you can find actionable Info-Tech analyst insights and links to useful Info-Tech research that can assist you with becoming compliant.
India’s Privacy Law Introduces Fiduciary Duties
Type: Legislation
Enacted: November 2025
Affected Region: India
Summary: After eight years of drafts, debates, and delays, India's Digital Personal Data Protection Act (DPDPA) is finally operational. The Ministry of Electronics and Information Technology notified the Digital Personal Data Protection Rules 2025 in November, bringing the DPDPA into effect immediately for key provisions and setting an 18-month phased rollout for the rest.
The Act itself was passed on August 11, 2023, but remained dormant for more than two years. Minister Ashwini Vaishnaw has already signaled intent to amend the law and compress the compliance timeline. The final rules track closely to the draft released earlier this year, with a handful of targeted additions:
- Explicit listing of processing purposes in notices
- One-year minimum retention obligation (primarily to service government access requests)
- Narrowly scoped exemptions for children's data in health and safety scenarios
- A hard 90-day deadline for grievance responses
- Immediate operationalization of the Data Protection Board selection process
The framework remains lean and seemingly principle-led rather than prescriptive. It rejects the global habit of splitting personal data into "normal" and "sensitive" categories and instead applies a single standard to all digital personal data while layering heavier obligations on significant data fiduciaries based on volume, risk, and scale. The terminology is intentional: "data principal" centers the individual, and "data fiduciary" imports trust-law duties that elevate organizational accountability beyond typical controller standards.
Consent managers are baked in as RegTech infrastructure for verifiable consent, withdrawal, and grievance routing. Territorial scope mirrors the GDPR, but extraterritorial reach is limited to offering goods or services to data principals inside India. Data transfers are permitted by default unless the central government blocklists a destination or imposes additional safeguards. The children's threshold is set at under 18, with verifiable parental consent required.
Analyst Perspective: The leanness is a feature, not a bug, for companies that have already built GDPR-grade programs. For them, mapping to DPDPA will feel straightforward. The real watch-points are:
- How aggressively the central government uses its retained powers on transfer restrictions.
- Whether the Data Protection Board's independence holds up when complaints hit government entities or politically sensitive cases.
Most clients I speak with are relieved. The DPDPA imposes no sensitivity tiering, no mandatory data protection officer (DPO) for everyone, and no adequacy dance for transfers yet. Start your gap assessment now, lock in consent-manager partners, and run the significant data fiduciary self-assessment. The 18-month clock is already ticking!
Analyst: Carlos Rivera, Principal Advisory Director – Security & Privacy
More Reading:
- Source Material:
- Related Info-Tech 91ÖÆÆ¬³§:
Modernizing GDPR Compliance to Support an Agentic AI Era
Type: Article
Enacted: October 2025
Affected Region: EU
Summary: The rise of agentic AI offers organizations clear benefits: autonomous, adaptive technology that streamlines processes and operations. It also carries significant implications for adhering to regulatory requirements such as the EU's GDPR. Although GDPR's main components of purpose limitation, data minimization, transparency, storage limitation, and accountability apply to agentic AI, it is the operationalization of those principles that has raised concerns.
The operating model for those principles, such as data flow mapping and human-in-the-loop decision-making at key points, can break down as more advanced technologies emerge. AI agents may change their scope mid-run or call an API that has not been assessed for privacy and security impact, which static controls struggle to catch. Furthermore, AI agents' ability to autonomously perform additional tasks without human oversight heightens regulatory risks.
Given these implications, efforts should be made to shift static control-based compliance to a dynamic governance structure that addresses these issues at runtime. Initiatives could include:
- Embedding controls at the AI agent level.
- Implementing goal-change gates to review expansions of scope.
- Generating a searchable record of the plan prepared by the agent, the tool calls executed, and the data categories observed.
This would support compliance with GDPR Article 15 by providing real-time information on the personal data being processed and the purposes of processing. Having controls that adapt to the agentic AI era will help companies remain compliant and demonstrate continuous trust to end users.
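The runtime controls described above can be sketched as a thin governance wrapper around an agent. This is a minimal illustration, not a standard API: the class and method names are assumptions, and a real deployment would integrate with whatever agent framework and logging stack an organization already runs.

```python
import json
import time


class AgentGovernor:
    """Illustrative runtime governance wrapper for an AI agent:
    gates mid-run scope changes, blocks unassessed tools, and keeps a
    searchable record of tool calls and data categories observed."""

    def __init__(self, approved_goal, approved_tools):
        self.approved_goal = approved_goal
        self.approved_tools = set(approved_tools)
        self.audit_log = []  # searchable processing record

    def check_goal(self, proposed_goal):
        # Goal-change gate: any mid-run scope expansion needs human review.
        if proposed_goal != self.approved_goal:
            self._record("goal_change_blocked", {"proposed": proposed_goal})
            raise PermissionError("Scope change requires human review")

    def call_tool(self, tool_name, data_categories):
        # Block tools never assessed for privacy and security impact.
        if tool_name not in self.approved_tools:
            self._record("tool_blocked", {"tool": tool_name})
            raise PermissionError(f"Unassessed tool: {tool_name}")
        self._record("tool_call", {"tool": tool_name,
                                   "data_categories": data_categories})

    def _record(self, event, details):
        self.audit_log.append({"ts": time.time(), "event": event, **details})

    def export_log(self):
        # Machine-readable record to help answer access requests.
        return json.dumps(self.audit_log)
```

A data subject access request could then be served by filtering the exported log for the relevant data categories, rather than reconstructing the agent's behavior after the fact.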
Analyst Perspective: The use of emerging technologies has privacy implications that organizations must address to ensure due diligence and compliance. The interplay between the GDPR and the EU AI Act calls for continuous governance. Testing AI agents against synthetic and edge-case scenarios with a predeployed set of controls can surface risks such as overcollection of data and support proactive risk assessment. Real-time policy enforcement, including sensitive-category detectors and kill switches for agents that stray out of bounds, can prevent unauthorized tool use and mitigate cross-border data transfer risks. Together, these measures enable adaptive enforcement of GDPR compliance when using AI agents, which often introduce third-party integrations and autonomous decision-making, and embed compliance considerations into the agent's lifecycle, building resilience and trust.
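As a hedged illustration of the sensitive-category detector and kill-switch idea, the hook below scans agent output before it leaves the boundary and halts the run on a match. The patterns and names are assumptions for the sketch; production systems would use vetted classifiers rather than regexes alone.

```python
import re

# Illustrative detectors for identifier and special-category data.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "health_keyword": re.compile(
        r"\b(diagnosis|prescription|medical record)\b", re.IGNORECASE),
}


class AgentKilled(Exception):
    """Raised to halt an agent run that strayed out of bounds."""


def enforce_policy(agent_output: str) -> str:
    """Kill switch: stop the agent if its output contains data
    categories it was not authorized to process."""
    for category, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(agent_output):
            raise AgentKilled(f"Sensitive category detected: {category}")
    return agent_output
```

The same hook could sit in front of outbound tool calls to catch cross-border transfers before they happen, not just after.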
Analyst: Ahmad Jowhar, Senior 91ÖÆÆ¬³§ Analyst – Security & Privacy
More Reading:
- Source Material:
- Related Info-Tech 91ÖÆÆ¬³§:
Why the EU’s Digital Rulebook Matters Far Beyond Europe
Type: Enforcement
Announced: December 2025
Affected Region: EU
Summary: In the EU, failing to provide required advertising transparency and restricting researcher access to public data can constitute violations of the Digital Services Act (DSA). A European Commission (EC) investigation concluded that the social media platform X (formerly Twitter) had done both, and had also misled users with a deceptive blue-check verification program. The EC issued a €120 million fine, the first of its kind under the DSA, for these transparency failures. X now has 90 days to present remedial measures or risk additional penalties.
Senior US officials have criticized the EU for targeting American tech firms and for censorship, suggesting that potential tariff reductions could be on the table if the EU scales back its digital rulebook. By contrast, analysts have emphasized that the EU deliberately avoided content moderation issues in this first case, focusing instead on transparency obligations where X's noncompliance was apparent.
In a similar investigation, TikTok avoided a fine after the EC accepted its binding commitments to improve ad transparency. This negotiated outcome demonstrates that cooperation can avert harsher enforcement. Although the platform still faces scrutiny in other areas, the coordinated timing of the two announcements suggests that the EC intends to drive meaningful compliance with the DSA rather than impose exemplary fines.
Analyst Perspective: For the EU, the priorities are protecting users, transparency, and accountability, whereas many in the US view such regulation as overreach stifling innovation and free expression. Nonetheless, the fine imposed on X addresses deceptive design and dark patterns. These aren’t disputes over content moderation or suppression but core compliance obligations under the DSA. How this balance plays out could define the future of global digital governance.
The penalty shows that the EU intends to move forward with enforcement of its digital rulebook. Additionally, pursuing a high-profile platform like X for enforcement over design, advertising, and data access practices indicates the EC's resolve despite geopolitical tensions.
Similar to TikTok, X will likely look to secure binding agreements and demonstrate compliance over time. This may encourage other platforms to voluntarily adopt similar transparency and compliance standards. However, amid escalating geopolitical tensions, the fine may add to the broader debate over digital sovereignty, innovation, and free speech.
Analyst: Safayat Moahamad, 91ÖÆÆ¬³§ Director – Security & Privacy
More Reading:
- Source Material:
- Related Info-Tech 91ÖÆÆ¬³§:
If you have a question or would like to receive these monthly briefings via email, submit a request here.