Artificial Intelligence (AI) is rapidly transforming healthcare systems worldwide, and India is no exception. From AI-powered diagnostic tools and predictive analytics to robotic surgery and personalized medicine, AI has the potential to significantly improve healthcare delivery, accessibility, and efficiency. However, the rapid deployment of AI technologies in India's healthcare ecosystem raises complex legal, ethical, and policy challenges. Unlike jurisdictions such as the European Union, India does not yet have a comprehensive standalone legal framework specifically regulating AI. Instead, governance is fragmented across sectoral laws, ethical guidelines, and policy initiatives, leading to regulatory uncertainty and enforcement gaps.
This research article critically examines the evolving regulatory landscape governing AI in healthcare services in India. It analyses existing legal instruments such as the Information Technology Act, 2000, the Digital Personal Data Protection Act, 2023, and the Medical Devices Rules, 2017, along with policy initiatives such as the National Strategy for Artificial Intelligence and ethical guidelines issued by the Indian Council of Medical Research (ICMR). The article identifies key legal and policy challenges, including data privacy concerns, lack of accountability, algorithmic bias, absence of transparency, and regulatory fragmentation. Special attention is given to privacy risks arising from the use of sensitive health data and to the adequacy of India's emerging data protection regime. The study concludes that while India has adopted an innovation-friendly, "light-touch" regulatory approach, a more coherent and sector-specific legal framework is necessary to ensure the safe, ethical, and equitable deployment of AI in healthcare. It recommends the development of comprehensive AI legislation, stronger institutional oversight, enhanced data protection safeguards, and a balanced approach that fosters innovation while protecting patient rights.