If your content strategy is designed for Spanish-speaking markets and you're betting on appearing in ChatGPT Search responses, there's one finding that should change how you approach content.

Peec AI, an AI search analytics company, analyzed more than 10 million prompts and 20 million fan-out queries from ChatGPT and found something that is not yet on the radar of most marketing teams: 43% of the background searches generated by ChatGPT are done in English, even when the user wrote their question in another language.

For markets such as Spain, Mexico, Argentina or Colombia, this has direct implications for which brands and what content ends up being cited in the responses.

What are fan-out queries and why do they matter

When a user asks a question in ChatGPT Search, the system doesn't respond with just what it already knows. It generates a series of background searches, called fan-out queries, which it sends to search providers to find up-to-date sources.

First, it rephrases the original query into one or more specific searches. Then, based on the initial results, it can generate additional, more targeted searches.
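The two-stage flow just described can be sketched in a few lines. This is a toy illustration, not OpenAI's actual implementation: the helper functions are stand-ins for an LLM generating rephrasings and for calls to external search providers.

```python
# Minimal sketch of the fan-out flow, with illustrative stand-in helpers.

def rephrase(prompt: str) -> list[str]:
    # Stage 1: turn the user's prompt into one or more specific searches.
    return [prompt, f"best {prompt}"]

def search(query: str) -> list[str]:
    # Stand-in for a call to a search provider.
    return [f"result for: {query}"]

def refine(prompt: str, results: list[list[str]]) -> list[str]:
    # Stage 2: based on initial results, add more targeted searches.
    # (This is the step where, per Peec AI, English queries tend to appear.)
    return [f"{prompt} review 2025"]

def fan_out(prompt: str) -> list[str]:
    initial = rephrase(prompt)
    results = [search(q) for q in initial]
    return initial + refine(prompt, results)

print(fan_out("auction portals"))
# → ['auction portals', 'best auction portals', 'auction portals review 2025']
```

The key point for what follows is that the sources ChatGPT can cite are limited to whatever these intermediate searches return.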

OpenAI's public documentation describes this process, but it doesn't explain how the system decides what language to do those intermediate searches in. That's exactly what Peec AI investigated.

The critical point is this: fan-out queries are the pre-citation filter. Before ChatGPT decides which sources to cite, it decides which sources to consult.

If background searches are done in English, the sources under consideration are predominantly in English, and local sources are left out of the process before the selection begins.

What the Peec AI analysis found

Peec AI filtered its dataset to include only cases where the location of the IP matched the language of the prompt.

Prompts in Polish from Polish IPs, in German from German IPs, in Spanish from Spanish IPs. Cases with mixed signals were excluded.
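The filtering step is simple to reproduce in spirit. The sketch below is illustrative, not Peec AI's actual code, and the country-to-language mapping is a small made-up subset:

```python
# Keep only records where the IP country matches the prompt language;
# mixed-signal cases are excluded, mirroring Peec AI's filtering step.

COUNTRY_LANG = {"PL": "pl", "DE": "de", "ES": "es", "TR": "tr"}

def matched_only(records: list[dict]) -> list[dict]:
    return [r for r in records
            if COUNTRY_LANG.get(r["ip_country"]) == r["prompt_lang"]]

sample = [
    {"prompt_lang": "es", "ip_country": "ES"},  # kept
    {"prompt_lang": "es", "ip_country": "US"},  # mixed signals: excluded
    {"prompt_lang": "de", "ip_country": "DE"},  # kept
]
print(len(matched_only(sample)))  # → 2
```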

The results within that filtered dataset showed that 78% of the non-English prompts included at least one fan-out query in English. No non-English language in the dataset was below 60%.

Turkish was the most affected, with 94% of its prompts generating fan-outs in English. Spanish was the lowest in the group analyzed, with 66%, but it is still a significant proportion.
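To make the headline metric concrete, this is how "share of prompts with at least one English fan-out" would be computed per language. The data below is fabricated for demonstration only:

```python
# Illustrative computation of the metric behind the 78% / 94% / 66% figures.
from collections import defaultdict

def english_fanout_share(prompts: list[dict]) -> dict[str, float]:
    hits, totals = defaultdict(int), defaultdict(int)
    for p in prompts:
        totals[p["lang"]] += 1
        # Count the prompt if ANY of its fan-out queries is in English.
        if any(q["lang"] == "en" for q in p["fanouts"]):
            hits[p["lang"]] += 1
    return {lang: hits[lang] / totals[lang] for lang in totals}

sample = [
    {"lang": "es", "fanouts": [{"lang": "en"}, {"lang": "es"}]},
    {"lang": "es", "fanouts": [{"lang": "es"}]},
]
print(english_fanout_share(sample))  # → {'es': 0.5}
```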

The pattern that emerges from the analysis is consistent: ChatGPT tends to start fan-out queries in the language of the prompt, but it adds searches in English as it builds the response.

Concrete examples of how this affects results

Peec AI included in its report several cases that illustrate the effect in practice.

A prompt in Polish from a Polish IP asking about the best auction portals produced a response that omitted Allegro.pl, which according to Peec AI is the dominant e-commerce platform in Poland, in favor of eBay and other global platforms.

A German prompt asking about German software companies produced an answer that included no German companies.

The most illustrative example is the prompt in Spanish about cosmetic brands. Peec AI showed the real fan-out queries generated by ChatGPT. The first search was done in English. The second one was done in Spanish, but added the word “global”, a qualifier that the user never used. The system interpreted a prompt in Spanish from a Spanish IP as a request for global, not local, brands.

These are examples of Peec AI tests, not data representative of all ChatGPT Search behavior. But the pattern is consistent with the bias toward English shown by the aggregated numbers.

Why this is different from traditional citation factors

Over the past year, several studies have analyzed what factors predict whether ChatGPT cites a source. SE Ranking published an analysis on citation signals. The Tow Center investigated attribution accuracy. These studies assume that the content is already under consideration and analyze what makes it more or less citable.

What Peec AI's analysis shows is a previous problem: the language of fan-out queries can filter which sources are considered before the selection begins. If your content is in Spanish and the background search is done in English, you don't make it to the selection phase.

That changes the diagnosis for content teams in Spanish-speaking markets. It's not just a matter of domain authority, content structure, or citation signals. It's a question of whether the language of the content excludes you from the process right from the start.

What content teams can do with this

Evaluate whether it makes sense to have English versions of strategic content

For brands that operate in Spanish-speaking markets but compete in categories where English-language content dominates globally, having English versions of the most important articles can be the difference between entering the fan-out process or not.

It's not a universal strategy, but for categories where ChatGPT clearly favors global sources in English, ignoring that channel comes at a cost.

Monitor what fan-out queries ChatGPT generates for your key topics

Tools like Peec AI allow you to see what background searches ChatGPT generates when asked about topics relevant to your business.

Understanding what language these searches are being done in and what sources are appearing is the starting point for knowing if there is a structural visibility problem or if your content is already being considered.
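Once you have an export of observed fan-out queries for your topics, the summary is straightforward. The record format below is an assumption for illustration; tools like Peec AI expose this data in their own formats:

```python
# Hypothetical export of observed fan-out queries, tagged by language.
from collections import Counter

observed = [
    {"topic": "cosmetics", "query": "best cosmetic brands", "lang": "en"},
    {"topic": "cosmetics", "query": "mejores marcas de cosmética", "lang": "es"},
    {"topic": "cosmetics", "query": "top global cosmetic brands", "lang": "en"},
]

by_lang = Counter(q["lang"] for q in observed)
english_share = by_lang["en"] / len(observed)
print(f"{english_share:.0%} of fan-outs in English")  # → 67% of fan-outs in English
```

A high English share for a commercially important topic is the signal that the structural visibility problem described in this article applies to you.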

Include English terminology within Spanish content

An intermediate alternative is to ensure that the content in Spanish includes technical terms and proper names in English when appropriate.

If ChatGPT fans out in English with a query like "best auction portals Poland" and your content in Polish never uses those English terms, the semantic match is weaker.

Using the terminology that the system is probably looking for increases the chances of being considered.
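A crude lexical-overlap score illustrates the direction of the effect. Real retrieval systems match semantically, not just on shared words, but a page that contains the English terms the fan-out query uses still tends to match better:

```python
# Toy overlap score: fraction of query words that appear in the page text.

def overlap(query: str, page: str) -> float:
    q = set(query.lower().split())
    p = set(page.lower().split())
    return len(q & p) / len(q)

fanout = "best auction portals Poland"
page_local = "najlepsze portale aukcyjne"                        # Polish only
page_mixed = "najlepsze portale aukcyjne auction portals Poland"  # adds English terms

print(overlap(fanout, page_local))  # → 0.0
print(overlap(fanout, page_mixed))  # → 0.75
```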

Don't give up optimizing for local search

Peec AI's analysis describes a system bias, not a rule.

Spanish was the language with the lowest proportion of fan-outs in English in the dataset, with 66%. This means that the remaining 34% of prompts in Spanish generated no English background searches at all.

Well-optimized content for Spanish-speaking audiences is still relevant, especially for queries with clearly local intent.

What we don't know yet

OpenAI has not publicly explained how ChatGPT decides the language of its fan-out queries. It's not clear if the English bias is an intentional design decision or an emergent system behavior.

It's also not clear if OpenAI has plans to adjust this to better reflect local markets.

The Peec AI dataset comes from its own platform, not from a sample representative of real ChatGPT user sessions. The analyzed prompts reflect the use cases of Peec AI customers, who are primarily SEO and marketing teams. That may overrepresent certain types of queries and categories.

What is clear is that fan-out queries are a real and documented ChatGPT Search mechanism, that the language of those searches affects which sources are considered, and that content teams in non-English-speaking markets have an additional factor to monitor in their AI visibility strategy.

Your brand deserves to be visible. Let's build an impactful strategy together.