AI is now a distribution channel. It’s time you control the terms

Like search or social, AI is a publishing pathway, one that must be shaped by your publishing standards, rights, and mission.

AI referrals to major websites jumped 357% year-over-year in June, topping 1.13 billion, according to TechCrunch citing SimilarWeb. AI search tools are already distributing news and reshaping visibility and content consumption, but they aren't always delivering revenue or attribution.

And as a Nieman Lab analysis confirmed, these platforms cite trusted journalistic sources on key topics and verticals. In fact, half of the most-cited publishers in AI responses have already struck licensing deals with at least one AI company. The rest? They’re being used anyway, often without consent, without clarity, and without compensation.

This is why we believe every publisher should treat AI as a distribution channel, just like syndication, search, or social, but one that needs to be structured, licensed, and aligned with your vision of journalism. Those deals can be made.

What kind of AI use are we talking about?

AI content usage isn’t one-size-fits-all. That’s why at Nordot, we’ve created a framework to let publishers define which types of AI engagement they’re comfortable with, and to license accordingly. 

Here's a quick breakdown:

RAG (Retrieval-Augmented Generation)

AI systems (like Perplexity or Arc) pull real-time snippets from live web content to answer user queries. Your article may appear as a citation or snippet in the response.

  • Often includes attribution
  • Licensing is possible, and some partners are already under structured agreements
  • Without formal agreements, this traffic often yields no visibility or compensation
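
To make the mechanics concrete, here is a minimal, hypothetical sketch of how a RAG-style answer engine assembles a response: it retrieves a few published snippets relevant to a query, hands them to a language model, and (ideally) returns the answer alongside citations of the source articles. The names `search_index` and `generate_answer`, and the toy keyword-overlap retrieval, are illustrative placeholders, not any specific vendor's API.

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    publisher: str
    url: str
    text: str

def search_index(query: str, corpus: list[Snippet], k: int = 3) -> list[Snippet]:
    """Toy retrieval step: rank published snippets by keyword overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda s: -len(terms & set(s.text.lower().split())))
    return ranked[:k]

def generate_answer(query: str, sources: list[Snippet]) -> str:
    """Stand-in for the LLM call: a real system would prompt a model with the
    retrieved snippets; here we simply stitch them together with citations."""
    body = " ".join(s.text for s in sources)
    citations = ", ".join(f"{s.publisher} ({s.url})" for s in sources)
    return f"{body}\n\nSources: {citations}"

# Illustrative corpus standing in for live publisher content
corpus = [
    Snippet("Example Daily", "https://example.com/ai-referrals",
            "AI referrals to news sites rose sharply this year."),
    Snippet("Example Weekly", "https://example.com/licensing",
            "Several publishers have signed licensing deals with AI companies."),
]

print(generate_answer("How are AI referrals changing for publishers?",
                      search_index("AI referrals publishers", corpus)))
```

The step that matters for publishers is the last one: whether the system actually surfaces those citations, and whether that exposure happens under an agreement, is precisely what a licensing framework pins down.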

Through Nordot’s opt-in licensing framework, we make sure RAG-based discovery is more than exposure: it’s a paid, trackable opportunity.

This also aligns with what we discussed in our previous post on traceability as infrastructure. If your content is going to be transformed, you deserve to stay visible and credited.

Attribution-Based Features

Think: AI-generated summaries, search previews, or article highlights, all pointing back to your original piece. This is high-visibility, lower-risk usage when done right.

  • Clear attribution maintained
  • Opportunity for audience reach and discovery
  • Fair compensation is possible, as seen with ProRata’s model, which ties payment to actual usage and value

These use cases are exactly why we built opt-in models for attribution-led features, which ensure your content is properly attributed and fairly compensated. When your content powers the interface, you deserve both credit and payment.

Transformative AI Use

This includes remixing your articles into other formats (text-to-slideshow, text-to-video, podcast-to-text, audio digests), as well as topic clustering or restructuring to reach new platforms and audiences.

  • This is where both risk and reward are high
  • This type of use requires clear licensing terms and a strong understanding of how each distribution platform operates

We’ve covered the value of format fluidity in our blog on liquid content and modular strategy. The key here is that transformation shouldn’t mean losing control.

Model Training (LLM Feeding)

This is the most sensitive category. It means allowing some or all of your content to be used to train large language models. The potential payoff can be substantial, but so are the implications.

  • You should never be included in training without your clear consent
  • It typically commands the highest valuation for your content, but terms matter

We only offer this option to publishers who explicitly opt in. No default inclusion. Ever. 

Some in the industry are exploring surface-level fixes, like pay-per-crawl mechanisms, to address AI usage. But as TechRadar recently noted, these models are “built to fail” unless they’re part of a broader rights and licensing framework. They don’t address the real questions: who can use your content, how, and under what terms? That’s why publishers need more than temporary defenses; they need structure.

You choose the use case, we handle the complexity

At Nordot, we’ve created a new opt-in framework that gives publishers real say, and real flexibility, in deciding how their content is used in AI and B2B environments, so we can help them protect it.

You can:

  • Select only the AI use cases you’re comfortable with
  • Approve or decline each new partnership before anything is shared
  • Get visibility into the licensing terms and usage reporting
  • Change your participation settings at any time

It’s already active, helping publishers prepare for the next wave of monetization and discovery while protecting their journalistic work and credibility at all costs.

Publishers didn’t ask for their content to be pulled into AI engines. But now that it is, there’s a clear opportunity: to move from passive inclusion to structured participation, and to turn invisible usage into value.

We’re here to help you make that shift with tech, licensing structures, partner vetting, and hands-on support to navigate what’s next. Nordot’s brand-new AI Rider can get you started.

Bertrand de Volontat

Bertrand de Volontat is VP of EMEA at Nordot, where he helps publishers make the most of their content across platforms and borders. With a background in both journalism and media business, he previously led editorial operations at upday and launched several digital media ventures.
