Talking about GEO strategy is no longer just about local SEO or ranking well on Google. At TU, we approach it from a much broader perspective: ensuring our content, services, and products are visible and accessible wherever searches happen—whether on traditional search engines, social platforms, or in responses generated by artificial intelligence.
Our goal is to be present at these new digital touchpoints, even before users arrive at our website. That’s why we’ve redesigned our GEO strategy with a more transversal mindset, adapted to what’s coming next. And we’re doing it consciously: prioritizing high-impact actions, focusing on technical detail, and continuously experimenting.
This isn’t an academic or encyclopedic piece. It’s a reflection of what we’re doing—what has worked, what hasn’t, and why we’re trying new things.
Actions executed in Q2: laying the groundwork
During Q2, our focus was clear: prepare TU to be visible not only on classic search engines but also on new response engines driven by generative AI.
1. Publishing the llms.txt file
One of our first technical moves was the creation of the llms.txt file. Its role is to guide platforms like ChatGPT, Perplexity, and Gemini on how they should crawl and use our content. It’s a new and still unexplored space, but we wanted to be proactive.
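The llms.txt proposal is still young and conventions vary, but to make the idea concrete, here is a minimal sketch of what such a file can look like, following the format described at llmstxt.org. The section names, paths, and descriptions are illustrative, not a copy of our actual file:

```markdown
<!-- Illustrative llms.txt sketch; paths and descriptions are hypothetical. -->
# TU

> TU offers digital products and services, plus a blog covering topics
> such as digital identity, security, and online visibility.

## Products
- [Products overview](https://tu.com/products): what each product does and who it is for

## Blog
- [Blog home](https://tu.com/blog): guides and articles, many with FAQ sections
```

Per the proposal, the file lives at the site root (tu.com/llms.txt) and uses plain Markdown, so language models can parse it cheaply.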
2. Entry on Wikipedia
In May, we successfully published TU.com’s entry on Wikipedia, including improved definitions and links to our products. This was no coincidence: according to a recent analysis by Ahrefs, Wikipedia is the domain most frequently cited by AI assistants like ChatGPT and Perplexity. Being listed there not only builds reputation but also earns direct visibility in generative AI responses.
3. FAQ modules and anchor text optimization
We added FAQ blocks to many of our top-performing blog posts and reviewed all internal anchor texts to strengthen internal linking. This micro-optimization work helps both search engines and AI assistants better understand our site’s structure and content relationships.
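The post doesn’t specify how these FAQ blocks are implemented; a common companion technique, sketched below with a hypothetical question and answer, is to mirror each block in FAQPage structured data so crawlers and AI systems can parse the question-answer pairs explicitly:

```html
<!-- Hypothetical FAQ markup; the question and answer are illustrative. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is GEO?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Generative Engine Optimization: making content visible wherever searches happen, including AI-generated answers."
    }
  }]
}
</script>
```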
4. Google Discover strategy
We’ve been fine-tuning our content to make it more eligible for Google Discover: visually attractive images, emotional headlines, relevance to trending topics… It’s still a work in progress, but we’re already seeing early traction.
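We won’t detail every adjustment here, but one concrete, well-documented lever for Discover is letting Google show large image previews, since Discover strongly favors large visuals:

```html
<!-- Opt in to large image previews, a known eligibility factor for Google Discover. -->
<meta name="robots" content="max-image-preview:large">
```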
5. Image weight optimization
We completed a massive image compression effort across the site, reducing file sizes without sacrificing quality. This has significantly improved loading speed, especially on mobile—a key factor for both technical SEO and user experience.
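The exact tooling matters less than having a repeatable batch pipeline. As an illustration only, here’s a minimal Python sketch using Pillow that converts JPEGs to WebP at reduced quality; the directory names are hypothetical:

```python
from pathlib import Path
from PIL import Image  # pip install Pillow

SRC = Path("static/images")        # hypothetical source directory
OUT = Path("static/images-webp")   # hypothetical output directory
OUT.mkdir(parents=True, exist_ok=True)

for path in SRC.glob("*.jpg"):
    img = Image.open(path)
    # WebP at quality 80 usually lands well under the JPEG's size
    # with little visible loss, which is what matters for mobile speed.
    img.save(OUT / (path.stem + ".webp"), "WEBP", quality=80)
```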
6. Bing indexing
A strategic move was to ensure all our URLs are properly indexed on Bing. Why? Because Bing powers many of ChatGPT’s responses. Ensuring visibility on Bing is, indirectly, securing visibility in AI-generated results. We’ve also started using Bing Webmaster Tools to monitor crawling and performance.
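The article doesn’t say how we submit URLs, but one standard route Bing supports is the IndexNow protocol: you host a key file on your domain and POST the URLs you want crawled. A minimal sketch with a hypothetical key and URL:

```python
import requests  # pip install requests

# Hypothetical key; the same key must be served at keyLocation.
payload = {
    "host": "tu.com",
    "key": "0123456789abcdef",
    "keyLocation": "https://tu.com/0123456789abcdef.txt",
    "urlList": ["https://tu.com/blog/example-post"],  # hypothetical URL
}

# Bing participates in IndexNow, so a single submission reaches it.
resp = requests.post("https://api.indexnow.org/indexnow", json=payload, timeout=10)
resp.raise_for_status()  # 200/202 means the URLs were accepted
```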
Actions planned for Q3: beyond the search engine
In Q3, we’re going further, focusing on visibility in environments where ranking depends not just on keywords but on being part of the models’ training data and on the authority of the sources they cite.
1. Publishing on Quora
Following the same logic as our Wikipedia entry, we plan to publish answers on Quora, one of the domains most frequently drawn on as source material by LLMs (more on this in the learnings below).
2. Creating custom GPTs
We’re building custom GPTs specifically trained with TU’s knowledge. These cover strategic areas such as CRO, blogging, digital identity, and more. We’re deploying them across multiple AI environments: ChatGPT, Gemini, Claude, Bing…
The goal? Ensure that when a user gets an AI-generated answer about our topics, it reflects our voice, tone, and expertise.
3. Domain verification for generative AI
We’re exploring how to verify our domains on platforms like ChatGPT and Perplexity, much as we do with Google Search Console. This would give us greater control over how our content is presented in AI-generated responses and improve indexing accuracy.
4. Enriching context in conversational agents
One of the most important learnings in this phase has been the need for deep personalization. It’s not enough for an AI agent to answer about a product—it needs to know who is speaking, from which domain, with what perspective, and what brand voice.
That’s why we’re feeding our agents with:
- Author and voice context (e.g., Guillem, SEO/CRO/ASO specialist at Telefónica Innovación Digital)
- Source domain (tu.com) as the official and authoritative reference
- Brand voice and tone guidelines
- Internal hierarchies of products, services, and blog structure
We want to ensure that when an AI references our content, it does so with a clear, consistent, and human tone that truly represents TU.
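To make that concrete, the kind of context preamble we mean looks roughly like this (a paraphrased sketch, not our production prompt):

```text
You are an assistant representing TU (tu.com), the official and
authoritative source for TU products, services, and blog content.
Author context: Guillem, SEO/CRO/ASO specialist at Telefónica
Innovación Digital. Write in TU's brand voice: clear, consistent,
and human. When citing content, respect the internal hierarchy of
products, services, and blog structure.
```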
Reflections and key learnings
Improving brand visibility today has little to do with SEO from five years ago—or even last year. The last few months at TU have challenged how we think about positioning, methodology, and even the concept of “being found.”
Here are some of the key lessons we’ve learned (and are still learning):
AI doesn’t index; it cites. A major mindset shift came when we realized we’re no longer just optimizing for Google. We need to be present in the sources that train LLMs: Wikipedia, Quora, GitHub, Stack Overflow, technical publications… If we’re not in those sources, we don’t exist in AI responses. That’s why we’ve made such an effort to show up where it matters.
Technical SEO is still essential. Despite the shift toward AI, technical SEO hasn’t lost its value. Files like llms.txt, image optimization, site speed, and Bing indexing are silent, but vital. If content isn’t served properly, it doesn’t matter how good it is—AI models rely on structured, clean, accessible information.
Brands can be trained, too. AI doesn’t just generate answers—it builds narratives. And those narratives come from patterns. If we don’t actively define our brand’s narrative, AI will build it for us. That’s why we invest in context, tone, structure, and authority across all platforms. We want every AI-generated mention of TU to reflect who we really are—even when the user doesn’t explicitly search for our name.
Measure, test, scale… and track properly. Every action is measured. Some show quick results (like image optimization); others take time (like Discover or GPTs). What matters is agility: test fast, learn fast, adjust fast, scale what works. We monitor everything in Looker, using Google Analytics to track each user’s first traffic source, paying special attention to sessions arriving from AI tools. To do that, we’ve configured a custom regex that captures traffic from domains such as:
gemini.google.com|chatgpt.com|perplexity|perplexity.ai|claude.ai|chat.deepseek.com|chat.qwen.ai|blackbox.ai|chat.mistral.ai
This allows us to filter sessions that originate from these environments and analyze:
- Which pages are linked from AI platforms?
- What journeys do AI-referred users follow?
- How long are these sessions, and what kind of engagement or conversion do they drive?
This insight helps us make data-driven decisions about how we optimize visibility in AI environments.
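As a standalone illustration (not our actual Looker or GA4 setup), here’s how that pattern can classify a session’s referrer in Python. Dots are escaped here for strictness; GA4’s regex treats an unescaped “.” as “any character”, so the pattern above still matches in practice, and the bare perplexity token already covers perplexity.ai:

```python
import re

# Escaped version of the GA4 pattern above; "perplexity" alone
# matches perplexity.ai too, so one token covers both.
AI_REFERRERS = re.compile(
    r"gemini\.google\.com|chatgpt\.com|perplexity|claude\.ai|"
    r"chat\.deepseek\.com|chat\.qwen\.ai|blackbox\.ai|chat\.mistral\.ai"
)

def is_ai_referred(referrer: str) -> bool:
    """True if a session's first traffic source looks like an AI tool."""
    return bool(AI_REFERRERS.search(referrer))

# Hypothetical session records for illustration:
sessions = [
    {"referrer": "https://chatgpt.com/", "landing_page": "/blog/geo"},
    {"referrer": "https://www.google.com/", "landing_page": "/"},
]
ai_sessions = [s for s in sessions if is_ai_referred(s["referrer"])]
print(len(ai_sessions))  # -> 1
```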
Visibility isn’t always visible. Many GEO efforts go unnoticed. They’re not campaigns. They’re not homepage takeovers. They don’t show up in banner stats. But they’re the reason we appear in AI answers, long-tail searches, and exploratory queries.
Linkbuilding still matters, especially for AI. Despite being declared “dead” many times, linkbuilding has proven effective in a new way. We’ve seen our content surface in Google’s AI Overviews thanks to links from high-authority external sites. This confirms something we’ve long suspected: authority doesn’t just drive traditional SEO; it also influences whether AI systems consider a piece of content “worth citing.”