When the outputs of a joint research collaboration are commercialized across borders — who sets the terms? University researchers collaborate across jurisdictions all the time, yet no established protocol governs what happens when that work produces commercial products. This is slowing down important AI work right now — but the fix does not require new agreements. It requires updating existing ones. 

Take this example. A faculty member at an American research university is working with an Indian colleague on a machine learning project: a natural language processing model designed to track public health discourse across Indian regional-language news outlets. Their grant comes from the National Science Foundation, under a memorandum the NSF signed with India’s Anusandhan National Research Foundation in February 2025.


They assemble a shared dataset drawn from Hindi, Tamil, and Bengali news archives covering vaccine hesitancy and disease surveillance reporting. They train a model. They publish their findings. They present at a conference in India.

This is exactly the kind of collaboration both governments described when they launched the TRUST (Transforming the Relationship Utilizing Strategic Technology) initiative at the White House, built on the premise that reducing regulatory barriers will deepen joint research pipelines in AI, quantum computing, and semiconductors.

Eighteen months in, the line between research and commerce starts to blur. The university’s technology transfer office wants to license the model. A startup expresses interest, fine-tunes it, and builds it into a product.

At that point, these researchers may owe royalties to India. But royalties are only part of the question. How are decisions around a collaboratively built technology made, and how are the benefits distributed? No bilateral protocol exists to answer either. 

Ten months after the TRUST initiative announcement, India’s Department for Promotion of Industry and Internal Trade released a 115-page working paper proposing mandatory royalty payments from any AI developer that trains its models on copyrighted Indian content. The scope covers OpenAI, Google, and every major American company building large language models. It also covers the university lab down the hall.

Nobody wrote down what happens when the outputs of an NSF-funded research collaboration are commercialized in a jurisdiction that requires royalties on AI training data. 

A Problem We Have Solved Before

Thirty years ago, universities sharing biological materials across borders faced a version of the same question: when do the outputs of a research exchange enter a commercial pipeline — and under whose rules?  

A tissue sample passed freely between two labs could end up in a patented drug worth hundreds of millions of dollars. The countries that contributed the original material had no way to know when they were owed something.


The resolution came from the Uniform Biological Material Transfer Agreement, published by the U.S. National Institutes of Health. The process took years and was resisted by institutions that preferred ambiguity over accountability. What emerged drew a line between research and commerce. 

Two labs could keep doing their scientific work. But once a project crossed into licensing, product development, or transfer to a for-profit entity, the originating institution had to be notified before that step happened. Not after. Before. 

The legal teams handled licensing separately. The agreement placed those questions in a defined process, with a joint review step before any commercial arrangement was finalized.

It became the global standard for biological material transfers and is still in use today. The U.S. and India need something like it for AI training data. The institutional infrastructure to build one already exists. 

Two Commitments That Do Not Fit Together

When the TRUST initiative was announced in February 2025, the NSF and India’s Anusandhan National Research Foundation signed a memorandum of cooperation. The goal was fewer regulatory barriers and deeper research pipelines between the two countries.

Ten months later, India’s commerce ministry released its working paper proposing a new collective licensing body to collect a share of global revenue from AI systems trained on Indian copyrighted works. Royalties would kick in once the AI system generates commercial revenue. India extended its public consultation deadline to February 6, 2026. The framework is still being drafted.

The TRUST initiative came from India’s Ministry of External Affairs. The working paper came from the Ministry of Commerce. They came from the same government, and neither produced a shared definition of when the output of a joint AI research project becomes a commercial product subject to royalty obligations.

Institutions trying to act on the NSF memorandum do not have the guidance they need to move forward. This is not simply a question of when research turns commercial. It is that two arms of the same government made competing commitments without reconciling them, and no one has told the institutions which one applies.

Infrastructure or Extraction?

To Washington, AI is infrastructure — build fast, reduce friction, settle intellectual property questions later. The TRUST initiative, the U.S.-India AI Opportunity Partnership signed in February 2026, and Microsoft’s $17.5 billion commitment to Indian AI development all run on the same logic: scale first, address intellectual property once the technology is already in the market.

India’s commerce ministry starts from a different place. For a country with a large digital creative workforce, this is a fairness question. The proposed structure is not punitive. A blanket license would cover all training use through a single payment, sparing developers the cost of negotiating rights with individual authors. Royalties would only apply once the AI system earns commercial revenue.

The problem is not that either position is wrong. It is that neither government has told research institutions which one governs a joint project midstream.

A model trained on Indian creative works in a university lab this year can be a commercial product in eighteen months. The training data does not change. Its legal status does.

What a Bilateral Protocol Would Look Like

The biological materials agreement worked because it was precise about transitions. 

The Export Administration Regulations offer the same logic: the Fundamental Research Exemption excludes open-publication research from export control requirements, and it lapses the moment a sponsor imposes publication restrictions or access controls. Institutions follow the trigger rather than argue about the timeline.

India’s proposed hybrid model maps onto both precedents without requiring major restructuring. Open access during training, royalties at commercialization: the logic is already compatible with what the U.S. needs.

None of this requires new institutions. It requires a document.

What is missing is a written protocol specific enough for a technology transfer office to act on before it signs a collaboration agreement.

That protocol should include three things. 

First, the protocol requires a research safe harbor: joint projects under the NSF-ANRF memorandum operate under open-access terms as long as the work stays in the research domain, covering published results, conference presentations, and shared datasets for noncommercial use.

Second, it requires a defined commercialization trigger: the safe harbor lapses when a licensing deal is signed, when a for-profit entity takes over development, or when the model generates its first commercial revenue.

Third, it requires a bilateral body with the authority to make the call when the parties disagree about whether the threshold has been crossed.

The Infrastructure Already Exists

The U.S.-India Science and Technology Cooperation Agreement, signed in 2005 after difficult negotiations over intellectual property, was designed to accommodate new protocols as the relationship developed. The Indo-U.S. Science and Technology Forum, jointly funded since 2000, already runs a U.S.-India AI initiative and has two decades of experience connecting researchers and policymakers from both countries.


Adding an AI data-use protocol to the 2005 agreement uses existing infrastructure rather than requiring either government to build something new. 

The obstacle is not institutional capacity — it is the political cost of being the first to acknowledge that AI ambitions and intellectual property obligations need to be reconciled in writing. 

The Forum is also the natural home for the joint review panel. It already has the bilateral mandate and the technical relationships to make commercialization determinations before they reach a court.

What Fills the Gap Right Now Is a Courtroom

The ANI v. OpenAI case before the Delhi High Court shows what happens without a bilateral framework. India’s largest news wire alleges OpenAI used its content to train ChatGPT without authorization or compensation. The court is working through whether storing copyrighted content for AI training counts as infringement under Indian law, and whether Indian courts even have jurisdiction over a company whose servers sit in the United States.

Whatever the court decides will shape the legal environment that American researchers enter under any NSF-India collaboration. 

The Delhi High Court will rule before any bilateral framework is finalized. Whatever it decides will be harder to negotiate around than a draft working paper.

A Template That Travels

Other governments are watching. 

The EU’s AI Act already requires providers of general-purpose AI models to publish training data summaries, with enforcement authority beginning August 2026. Brazil’s AI legislation reflects the same underlying tension between innovation access and creator compensation.

Most developing countries navigating these frameworks lack the leverage to negotiate terms directly with major AI developers. A working U.S.-India agreement would give smaller countries a concrete template to adapt, and proof that data sovereignty concerns and AI development access are not mutually exclusive.

The elements that travel are straightforward: a research safe harbor with a defined trigger, a commercialization threshold both sides accept in advance, and a bilateral body with the authority to settle disputes. None of this requires new institutions. It requires a document.

The U.S.-India AI Opportunity Partnership was signed February 20, 2026. India’s consultation on the working paper remains open. Both governments are still shaping their final positions. The window to get this right is measured in months, not years.

Javaid Sofi is a researcher and policy analyst at Virginia Tech specializing in the governance of artificial intelligence and comparative policy implementation. His research examines how countries translate AI policy commitments into institutional practice.