Gujarat High Court summons Meta, X, Google over AI misuse—government cites platform non-compliance

The Gujarat High Court has issued notices to Meta, Google, and X (formerly Twitter) in response to a public interest litigation (PIL) seeking regulatory curbs on the misuse of artificial intelligence, court filings show. The Union and Gujarat state governments informed the bench that these platforms have repeatedly failed to comply with lawful notices, describing delays and procedural obstruction as persistent barriers to enforcement action.

The PIL, details of which remain partially sealed, focuses on the unregulated deployment of AI systems by major social media and search platforms operating in India. Regulators have long expressed concern about algorithmic bias, deepfakes, and AI-driven disinformation campaigns that amplify across these networks. The case underscores mounting friction between India’s regulatory apparatus and Silicon Valley’s largest technology companies, each operating under different legal frameworks and compliance timelines.

This legal action represents a critical inflection point in India’s approach to AI governance. Unlike the European Union’s comprehensive AI Act or emerging frameworks in the United States, India has relied primarily on existing legislation—the Information Technology Act, 2000 and the intermediary rules framed under it—to address AI-related harms. The Gujarat High Court’s intervention suggests judicial impatience with the regulatory status quo and with the pace at which platforms have voluntarily adopted safeguards. Whether courts will establish binding precedent for platform accountability remains an open question with significant implications for the technology sector’s operating environment in South Asia’s largest economy.

Government counsel emphasized that Meta, Google, and X have exhibited a pattern of non-compliance with official notices, delaying responses and invoking procedural complexities to evade scrutiny. The court’s decision to issue summons indicates the bench found these explanations insufficient. The move follows escalating tensions between New Delhi’s Ministry of Electronics and Information Technology (MeitY) and these platforms over content moderation, data localization, and now algorithmic transparency. In previous compliance disputes, platforms have temporarily restricted functionality or faced the threat of service restrictions.

The technology industry has argued that overly stringent AI regulation could stifle innovation and investment in India’s emerging tech ecosystem. Industry bodies contend that global platforms already operate under multiple regulatory regimes and that harmonizing rules across jurisdictions proves operationally difficult. Conversely, civil society organizations and government bodies counter that without clear accountability mechanisms, AI systems will continue amplifying harm—from election-related misinformation to targeted harassment campaigns—with inadequate recourse for affected users.

The court’s action carries broader regional significance. Bangladesh, Pakistan, and other South Asian democracies closely monitor Indian legal precedents on digital governance. A Gujarat High Court ruling establishing platform liability for AI misuse could reshape compliance expectations across the region and signal to global technology companies that South Asian regulators are prepared to enforce accountability through judicial channels. The case also reflects India’s growing assertion of regulatory autonomy in digital markets, pushing back against the narrative that technology companies answer primarily to Western governments and shareholders.

What unfolds next will test whether Indian courts can compel meaningful algorithmic transparency and whether platforms will comply substantively or resort to technical and procedural delay tactics. The next hearing will likely set timelines for platform responses and may signal whether additional remedies—financial penalties, temporary service restrictions, or content removal orders—remain available to enforce compliance. Observers should watch whether the bench issues detailed directions on AI auditing standards and whether those expectations align with global norms or chart an independent Indian regulatory path.

Vikram

Vikram is an independent journalist and researcher covering South Asian geopolitics, Indian politics, and regional affairs. He founded The Bose Times to provide independent, contextual news coverage for the subcontinent.