Report of the Committee to develop a Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) in the Financial Sector
The Reserve Bank of India (RBI) has released the report of the Committee to Develop a Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) in the Financial Sector.
Committee to develop a Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) in the Financial Sector
In the financial sector, Artificial Intelligence (AI) has the potential to unlock new forms of customer engagement, enable alternate approaches to credit assessment, risk monitoring, and fraud detection, and offer new supervisory tools. At the same time, increased adoption of AI could introduce new risks such as bias and lack of explainability, and amplify existing challenges around data protection and cybersecurity, among others.
To encourage the responsible and ethical adoption of AI in the financial sector, the Committee to Develop a Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) in the Financial Sector (Chairperson: Dr. Pushpak Bhattacharyya) was constituted by RBI on December 26, 2024. The committee submitted its report on August 13, 2025.
7 sutras for AI adoption in the financial sector
The Committee has formulated 7 Sutras that represent the core principles to guide AI adoption in the financial sector.
- Trust is the Foundation – Trust is non-negotiable and should remain uncompromised
- People First – AI should augment human decision-making but defer to human judgment and citizen interest
- Innovation over Restraint – Foster responsible innovation with purpose
- Fairness and Equity – AI outcomes should be fair and non-discriminatory
- Accountability – Accountability rests with the entities deploying AI
- Understandable by Design – Ensure explainability for trust
- Safety, Resilience, and Sustainability – AI systems should be secure, resilient and energy efficient
6 strategic pillars for AI adoption in the financial sector
Using the Sutras as guidance, the Committee has recommended an approach that fosters innovation and mitigates risks, treating these two seemingly competing objectives as complementary forces that must be pursued in tandem.
The Innovation Enablement Framework seeks to unlock the transformative potential of AI in financial services by enabling opportunities, removing barriers, and accelerating AI adoption and implementation in a responsible manner. The 3 key pillars under this framework are –
- Infrastructure – Building the infrastructure needed to support AI innovation.
- Policy – Putting in place agile, adaptive policy and regulatory architecture to encourage responsible AI adoption.
- Capacity – Promoting human skill development and institutional capacity to harness AI safely and effectively.
The Risk Mitigation Framework is designed to mitigate the risks of integrating AI into the financial sector. The 3 key pillars under this framework are –
- Governance – Establishing robust governance structures in respect of AI-based decisions and actions.
- Protection – Ensuring strong safeguards for protection from harms.
- Assurance – Instituting mechanisms for continuous validation and oversight of AI systems.
26 recommendations for AI adoption in the financial sector
Under the 6 strategic pillars, the report outlines 26 recommendations for AI adoption in the financial sector.
Infrastructure Pillar
- Financial Sector Data Infrastructure – A high-quality financial sector data infrastructure should be established, as a digital public infrastructure, to help build trustworthy AI models for the financial sector. It may be integrated with the AI Kosh – India Datasets Platform, established under the IndiaAI Mission.
- AI Innovation Sandbox – An AI innovation sandbox for the financial sector should be established to enable Regulated Entities (REs), FinTechs, and other innovators to develop AI-driven solutions, algorithms, and models in a secure and controlled environment. Other Financial Sector Regulators (FSRs) should also collaborate to contribute to and benefit from this initiative.
- Incentives and Funding Support – Appropriate incentive structures and infrastructure must be put in place to encourage inclusive and equitable AI usage among smaller entities. To support innovation and meet strategic sectoral needs, RBI may also consider allocating a fund for setting up data and compute infrastructure.
- Indigenous Financial Sector Specific AI Models – Indigenous AI models [including Large Language Models (LLMs), Small Language Models (SLMs), or non-LLM models] tailored specifically for the financial sector should be developed and offered as a public good.
- Integrating AI with Digital Public Infrastructure (DPI) – An enabling framework should be established to integrate AI with DPI in order to accelerate the delivery of inclusive, affordable financial services at scale.
Policy Pillar
- Adaptive and Enabling Policies – Regulators should periodically assess existing policies and legal frameworks to ensure they effectively enable AI-driven innovation and address AI-specific risks. Regulators should develop a comprehensive AI policy framework for the financial sector, anchored in the Committee's 7 Sutras, to provide flexible, forward-looking guidance for AI innovation, adoption, and risk mitigation across the sector. The RBI may consider issuing consolidated AI Guidance to serve as a single point of reference for regulated entities and the broader FinTech ecosystem on the responsible design, development, and deployment of AI solutions.
- Enabling AI-Based Affirmative Action – Regulators should encourage AI-driven innovation that accelerates financial inclusion of underserved and unserved sections of society and other such affirmative actions by lowering compliance expectations as far as is possible, without compromising basic safeguards.
- AI Liability Framework – Since AI systems are probabilistic and non-deterministic, regulators should adopt a graded liability framework that encourages responsible innovation. While REs must continue to remain liable for any loss suffered by customers, an accommodative supervisory approach is recommended where the RE has followed appropriate safety mechanisms such as incident reporting, audits, and red teaming. This tolerant supervisory stance should be limited to first-time or one-off aberrations and denied in the event of repeated breaches, gross negligence, or failure to remediate identified issues. (Red teaming is an exercise, reflecting real-world conditions, conducted as a simulated adversarial attempt to compromise organisational missions and/or business processes, in order to provide a comprehensive assessment of the security capability of the information system and the organisation.)
- AI Institutional Framework – A permanent multi-stakeholder AI Standing Committee should be constituted under RBI to continuously advise it on emerging opportunities and risks, monitor the evolution of AI technology, and assess the ongoing relevance of current regulatory frameworks. The Committee may be constituted for an initial period of five years, with a built-in review mechanism and a sunset clause. A dedicated institution should also be established for the financial sector, operating on a hub-and-spoke model with the national-level AI Safety Institute, for continuous monitoring and sectoral coordination.
Capacity Pillar
- Capacity Building within REs – REs should develop AI-related capacity and governance competencies for the Board and C-suite, as well as structured and continuous training, upskilling, and reskilling programs for the broader workforce that uses AI, to effectively mitigate AI risks and guide ethical and responsible AI adoption.
- Capacity Building for Regulators and Supervisors – Regulators and supervisors should invest in training and institutional capacity building initiatives to ensure that they possess an adequate understanding of AI technologies and that regulatory and supervisory frameworks keep pace with the evolving AI landscape, including associated risks and ethical considerations. RBI may consider establishing a dedicated AI institute to support sector-wide capacity development.
- Framework for Sharing Best Practices – The financial services industry, through bodies such as the Indian Banks’ Association (IBA) or Self-Regulatory Organisations (SROs), should establish a framework for the exchange of AI-related use cases, lessons learned, and best practices, and promote responsible scaling by highlighting positive outcomes, challenges, and sound governance frameworks.
- Recognise and Reward Responsible AI Innovation – Regulators and industry bodies should introduce structured programs to recognise and reward responsible AI innovation in the financial sector, particularly those that demonstrate positive social impact and embed ethical considerations by design.
Governance Pillar
- Board Approved AI Policy – To ensure the safe and responsible adoption of AI within institutions, REs should establish a board-approved AI policy which covers key areas such as governance structure, accountability, risk appetite, operational safeguards, auditability, consumer protection measures, AI disclosures, model life cycle framework, and liability framework. Industry bodies should support smaller entities with an indicative policy template.
- Data Lifecycle Governance – REs must establish robust data governance frameworks, including internal controls and policies for data collection, access, usage, retention, and deletion for AI systems. These frameworks should ensure compliance with the applicable legislations, such as the Digital Personal Data Protection Act (DPDP Act), throughout the data life cycle.
- AI System Governance Framework – REs must implement robust model governance mechanisms covering the entire AI model lifecycle, including model design, development, deployment, and decommissioning. Model documentation, validation, and ongoing monitoring, including mechanisms to detect and address model drift and degradation (a minimal drift-monitoring sketch follows this list), should be carried out to ensure safe usage. REs should also put in place strong governance before deploying autonomous AI systems that are capable of acting independently in financial decision-making. Given the higher potential for real-world consequences, this should include human oversight, especially for medium and high-risk use cases and applications.
- Product Approval Process – REs should ensure that all AI-enabled products and solutions are brought within the scope of the institutional product approval framework, and that AI-specific risk evaluations are included in the product approval frameworks.
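To make the drift-monitoring idea in the AI System Governance Framework item above concrete, the following is a minimal, illustrative Python sketch of one common approach: comparing the model's current score distribution against a validation-time baseline using the Population Stability Index (PSI). The function names, bin count, and the 0.25 threshold are illustrative assumptions, not prescriptions from the FREE-AI report.

```python
# Illustrative sketch only: one common way to flag model drift as part of
# ongoing monitoring. The PSI threshold and bin count are assumed rules of
# thumb, not requirements from the FREE-AI report.
import numpy as np

def population_stability_index(baseline, recent, bins=10):
    """Compare two score distributions; a higher PSI means larger drift."""
    # Bin edges are taken from the baseline (validation-time) distribution.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover the full real line

    base_counts = np.histogram(baseline, bins=edges)[0]
    recent_counts = np.histogram(recent, bins=edges)[0]

    # Convert to proportions, with a small floor to avoid division by zero.
    base_pct = np.clip(base_counts / len(baseline), 1e-6, None)
    recent_pct = np.clip(recent_counts / len(recent), 1e-6, None)

    return float(np.sum((recent_pct - base_pct) * np.log(recent_pct / base_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline_scores = rng.beta(2, 5, size=10_000)  # scores at validation time
    recent_scores = rng.beta(2.5, 4, size=10_000)  # scores observed in production

    psi = population_stability_index(baseline_scores, recent_scores)
    # A common rule of thumb: PSI above 0.25 suggests significant drift.
    print(f"PSI = {psi:.3f}", "-> investigate / retrain" if psi > 0.25 else "-> stable")
```

In practice, an RE's monitoring framework would track many such indicators (input feature drift, outcome rates, override rates) and feed breaches into the escalation and remediation process defined in its board-approved AI policy.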
Protection Pillar
- Consumer Protection – REs should establish a board-approved consumer protection framework that prioritises transparency, fairness, and accessible recourse mechanisms for customers. REs must invest in ongoing education campaigns to raise consumer awareness regarding safe AI usage and their rights.
- Cybersecurity Measures – REs must identify potential security risks on account of their use of AI and strengthen their cybersecurity ecosystems (hardware, software, processes) to address them. REs may also make use of AI tools to strengthen cybersecurity, including dynamic threat detection and response mechanisms.
- Red Teaming – REs should establish structured red teaming processes that span the entire AI lifecycle. The frequency and intensity of red teaming should be proportionate to the assessed risk level and potential impact of the AI application, with higher risk models being subject to more frequent and comprehensive red teaming. Trigger-based red teaming should also be considered to address evolving threats and changes.
- Business Continuity Plan (BCP) for AI Systems – REs must augment their existing BCP frameworks to cover both traditional system failures and AI model-specific performance degradation. REs should establish fallback mechanisms and periodically test the fallback workflows and AI model resilience through BCP drills (a minimal fallback sketch appears after this list).
- AI Incident Reporting and Sectoral Risk Intelligence Framework – Financial sector regulators should establish a dedicated AI incident reporting framework for REs and FinTechs and encourage timely detection and reporting of AI-related incidents. The framework should adopt a tolerant, good-faith approach to encourage timely disclosure.
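As an illustration of the fallback mechanisms mentioned in the BCP item above, here is a minimal Python sketch of one possible pattern: retry the AI model a limited number of times, then fall back to a simple, pre-approved rule-based step and log the event for incident review. The service, function names, thresholds, and retry parameters are hypothetical and are not taken from the report.

```python
# Illustrative sketch only: a simple fallback pattern so that a critical
# workflow (here, a hypothetical credit-decision step) degrades gracefully
# when the AI model is unavailable. All names and values are assumptions.
import logging
import time

logger = logging.getLogger("ai_bcp")

class ModelUnavailableError(Exception):
    pass

def ai_model_score(application: dict) -> float:
    """Placeholder for a call to the primary AI model / inference service."""
    raise ModelUnavailableError("inference service timed out")  # simulate an outage

def rule_based_score(application: dict) -> float:
    """Deterministic fallback logic, kept simple and pre-approved for BCP use."""
    return 0.7 if application.get("income", 0) > 3 * application.get("emi", 0) else 0.3

def score_with_fallback(application: dict, retries: int = 2) -> tuple[float, str]:
    """Try the AI model with limited retries, then fall back and log an incident."""
    for attempt in range(retries):
        try:
            return ai_model_score(application), "ai_model"
        except ModelUnavailableError as exc:
            logger.warning("AI model attempt %d failed: %s", attempt + 1, exc)
            time.sleep(0.1 * (attempt + 1))  # brief backoff before retrying
    logger.error("Falling back to rule-based scoring; record for BCP/incident review")
    return rule_based_score(application), "rule_based_fallback"

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    score, source = score_with_fallback({"income": 90_000, "emi": 20_000})
    print(f"score={score:.2f} source={source}")
```

The fallback event logged here is also the kind of occurrence that would feed into the AI incident reporting framework described in the last item of this list.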
Assurance Pillar
- AI Inventory within REs and Sector-Wide Repository – REs should maintain a comprehensive internal AI inventory that includes all models, use cases, target groups, dependencies, risks, and grievances, updated at least half-yearly and made available for supervisory inspections and audits. In parallel, regulators should establish a sector-wide AI repository that tracks AI adoption trends, concentration risks, and systemic vulnerabilities across the financial system, with due anonymisation of entity details.
- AI Audit Framework – REs should implement a comprehensive, risk-based, calibrated AI audit framework, aligned with a board-approved AI risk categorisation, to ensure responsible adoption across the AI lifecycle, covering data inputs, model and algorithm, and the decision outputs. Supervisors should also develop AI-specific audit frameworks, with clear guidance on what to audit, how to assess it, and how to demonstrate compliance.
  - Internal Audits – As the first level, REs should conduct internal audits proportionate to the risk level of AI applications.
  - Third-Party Audits – For high risk or complex AI use cases, independent third-party audits should be undertaken.
  - Periodic Review – The overall audit framework should be reviewed and updated at least biennially to incorporate emerging risks, technologies, and regulatory developments.
- Disclosures by REs – REs should include AI-related disclosures in their annual reports and websites. Regulators should specify an AI-specific disclosure framework to ensure consistency and adequacy of information across institutions.
- AI Compliance Toolkit – An AI Compliance Toolkit, developed and maintained by a recognised SRO or industry body, should help REs validate, benchmark, and demonstrate compliance with key responsible AI principles such as fairness, transparency, accountability, and robustness (one such fairness check is sketched below).
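As a concrete illustration of the kind of fairness check such a compliance toolkit could run, the following minimal Python sketch computes a disparate impact ratio from a hypothetical decision log. The group labels, data, and the 0.8 "four-fifths" threshold are illustrative assumptions, not requirements stated in the report.

```python
# Illustrative sketch only: one fairness check an AI compliance toolkit might
# run over a model's decisions. The 0.8 threshold and group labels are assumed
# conventions, not prescriptions from the FREE-AI report.
from collections import Counter

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    totals, approvals = Counter(), Counter()
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, protected_group, reference_group):
    """Ratio of approval rates; values well below 1.0 may indicate bias."""
    rates = approval_rates(decisions)
    return rates[protected_group] / rates[reference_group]

if __name__ == "__main__":
    # Hypothetical decision log: (group, loan approved?)
    log = [("A", True)] * 62 + [("A", False)] * 38 + [("B", True)] * 44 + [("B", False)] * 56
    ratio = disparate_impact_ratio(log, protected_group="B", reference_group="A")
    print(f"disparate impact ratio = {ratio:.2f}")
    if ratio < 0.8:  # common four-fifths rule of thumb
        print("flag for review: approval rates differ materially across groups")
```

A full toolkit would pair checks like this with transparency, robustness, and accountability tests, and with reporting templates that REs can use for audits and disclosures.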
FREE-AI vision
A financial ecosystem where the encouragement of innovation is in harmony with the mitigation of risk.
IndiaAI Mission
- The IndiaAI Mission is the Government of India’s flagship program to build a cohesive, strategic, and robust AI ecosystem.
- In March 2024, the Union Cabinet approved an allocation of ₹10,372 crore for the IndiaAI Mission.
- This outlay, spread over five years, is meant to catalyse the various components of the IndiaAI Mission, including pivotal initiatives like the IndiaAI Compute Capacity, IndiaAI Innovation Centre (IAIC), IndiaAI Datasets Platform, IndiaAI Application Development Initiative (IADI), IndiaAI FutureSkills, IndiaAI Startup Financing, and Safe & Trusted AI.
- The IndiaAI Compute pillar focuses on creating a high-end, scalable AI computing ecosystem to deliver Compute-as-a-Service for India’s rapidly growing AI startups and research community.
- The IndiaAI Foundation Models pillar underscores the importance of building India’s own LLMs trained on Indian datasets and languages, to ensure sovereign capability and global competitiveness in generative AI. A funding model combining grants and equity support has been introduced, offering 40% of compute costs as grants and taking 60% as equity (via convertible debentures). 4 startups (Sarvam AI, Soket AI, Gnani AI, and Gan AI) have been selected in the first phase to develop India’s foundation models.
- AIKosh, the IndiaAI Datasets Platform, is envisioned as a unified data platform integrating datasets from government and non-government sources. Launched in beta in March 2025, AIKosh prioritizes data quality scoring, robust search and filtering, Jupyter notebooks for analytics, and secure, permission-based access for contributors.
- The IADI is designed to foster the development and adoption of at least 25 impactful AI solutions that can drive large-scale socio-economic transformation.
- The IndiaAI FutureSkills pillar aims to democratize AI education and build a robust talent pipeline across the country. The program will support 500 PhD fellows, 5,000 Master’s students, and 8,000 undergraduates through targeted funding. Research fellowships for PhD scholars are aligned with the Prime Minister’s Research Fellowship, offering support of up to ₹55 lakh per fellow. Additionally, more than 570 AI and Data Labs are planned nationwide.
- The IndiaAI Startup Financing pillar addresses the critical need for risk capital across the entire lifecycle of AI startups, from prototyping to commercialization. This includes the IndiaAI Startups Global program, launched in collaboration with Station F (Paris) and HEC Paris, which aims to support 10 Indian AI startups in expanding into the European market.
- The Safe and Trusted AI pillar seeks to balance innovation with strong governance frameworks to ensure responsible AI adoption. Recognizing India’s diverse social, cultural, economic, and linguistic landscape, this pillar focuses on developing contextualized instruments of AI governance.
- The Government of India has set up an Artificial Intelligence Safety Institute (AISI) for responsible AI innovation. The Institute, incubated under the IndiaAI Mission, operates on a hub-and-spoke model, with various research and academic institutions and private sector partners joining the network and taking up projects under the Safe and Trusted AI pillar of the IndiaAI Mission.
- India is set to host the AI Impact Summit in February 2026, building on its role as co-chair of the AI Action Summit and continuing its leadership in shaping global AI discussions.
References
Government of India. (2024, March 07). 'Cabinet Approves Over Rs 10,300 Crore for IndiaAI Mission, will Empower AI Startups and Expand Compute Infrastructure Access'. Retrieved from https://www.pib.gov.in/PressReleasePage.aspx?PRID=2012375
Reserve Bank of India. (2025, August 13). 'FREE-AI Committee Report - Framework for Responsible and Ethical Enablement of Artificial Intelligence'. Retrieved from https://www.rbi.org.in/Scripts/PublicationReportDetails.aspx?UrlPage=&ID=1306
Reserve Bank of India. (2025, August 13). 'Report of the Committee to develop a Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) in the Financial Sector'. Retrieved from https://www.rbi.org.in/Scripts/BS_PressReleaseDisplay.aspx?prid=61018