AI Trust & Governance Conclave 2026
Date & Time
Saturday, 23 May 2026, 09:30 AM - 03:30 PM | GMT+05:30
Location
K-tech MeitY Nasscom CoE IoT & AI, 27th Main Rd, Bengaluru Urban, Karnataka, India - 560102
Your registration is subject to approval by the host.
About this event
AI Trust & Governance Conclave
Who Governs When AI Decides?
India’s First Leadership Conclave on Responsible AI, Data Trust and Governance
Artificial Intelligence has entered a decisive new phase. AI systems are no longer confined to analytics or content generation; they are increasingly acting autonomously: planning, adapting, and influencing consequential decisions across enterprises, governance, and public life. As AI evolves from tool to agent, the central governance question becomes: Who bears responsibility when AI systems act?
India’s AI ecosystem is rapidly transitioning from experimentation to real-world deployment across public services, enterprises, and digital infrastructure. National initiatives, such as the India AI Impact Summit, reflect strong momentum toward innovation and scale. Yet as AI begins to influence high-stakes decisions, new governance challenges emerge: managing system-level risks such as model unreliability, hallucinations, bias, opacity, and gaps in lifecycle accountability.
As AI governance scholar Francisco Lara notes, “AI governance is the science and art of turning principles into procedures and institutional structures that secure its beneficial uptake.” This perspective is particularly timely. Principles alone are no longer sufficient; governance must translate into operational mechanisms capable of managing autonomy, uncertainty, and real-world action.
A key emerging trend in this context is the rise of data trust frameworks and trust-centric stewardship models. Beyond traditional data governance, data trust emphasizes not just compliance but the institutional management of data quality, access, ethical use, and shared accountability, often through formal legal and technical structures that enable secure, transparent data sharing and stewardship across stakeholders. These mechanisms aim to make data itself a trustworthy foundation for AI systems, ensuring that data used for training, inference, and autonomous action is reliable, ethically sourced, and auditable. In doing so, they address a previously underappreciated risk in AI deployment.
The ethical and legal stakes of this shift are profound. Philosopher Shannon Vallor observes, “AI is more than a technology; it is a philosophy… the law must govern AI.” Yet existing legal instruments, including data protection regimes, struggle to address inference harms, autonomous action, and cascading system effects. India’s Digital Personal Data Protection Act, 2023 provides a critical foundation for institutional responsibility, but it does not directly govern how intelligent systems act, adapt, or interact at scale.
Operationalizing trust in data and AI governance, through mechanisms such as data stewardship frameworks, transparent data contracts, and cross-organizational trust agreements, can help bridge this gap by aligning data sharing with ethical, legal, and societal expectations. These approaches are rapidly gaining traction as organizations confront the “trust gap” between data use and accountability in agentic AI systems.
This convening pivots the conversation from AI adoption to AI governance, focusing on the operational risk management, institutional responsibility, and regulatory readiness required for trustworthy deployment. India’s AI moment is no longer about whether we adopt AI, but whether we are prepared to govern it: aligning innovation with accountability, enterprise confidence, and global credibility.