If you’ve seen the SBB winter visual for the “Winter Schnupper-GA,” you’ll recognize it instantly: a crisp, idyllic snowy scene, a cheerful “customer” in the foreground, and that familiar Swiss winter glow that makes you want to hop on a train and pretend deadlines don’t exist. Then your eyes catch the tiny label: “AI generated.” And suddenly, the mood shifts.
Because that person wasn’t there. That moment never happened. The “real” winter experience the image suggests is a synthetic one—constructed, not captured.
This is why the SBB campaign sparked debate far beyond the usual advertising chatter. The issue isn’t that SBB used AI. The issue is what they used AI to simulate: a human, an emotion, a relatable slice of everyday life—exactly the kind of visual language audiences instinctively read as “real.”
The controversy isn’t “AI vs. no AI.” It’s “documenting reality vs. inventing it.”
Advertising has always staged reality. We all know that. Lighting, styling, casting, location scouting: brands manufacture moments all the time. But even a staged photo still contains something a generated image lacks: a real person stood in front of a real camera at a real place on a real day.
With generative AI, you can skip that entire chain. You don’t need an actual customer, a mountain, a photographer, or a weather window. You need a prompt and a capable operator. That’s a huge creative and operational unlock—and also a reputational risk, depending on the message you’re trying to deliver.
In the SBB case, critics argued that the brand crossed a line: it didn’t just stylize reality; it replaced it. Markus Mallaun, a photographer and AI specialist, publicly criticized the motif and described it as a “brand experience that never took place.” His core point is simple: when you use AI to create emotionally charged “customer moments,” you can undermine credibility—especially for brands that rely on trust and public confidence.
“But we labeled it”: the limits of micro-transparency
SBB’s defense is understandable and, on the surface, reasonable. They emphasized that they have guidelines for AI use, that AI can simplify processes and reduce costs, and that they still plan and execute real photo shoots. In other words: a balanced mix of classic production and AI-supported creation.
They also leaned on transparency: the image is marked “AI generated.”
Here’s the problem: transparency isn’t a checkbox. It’s an experience.
A tiny label in the corner is technically honest—but psychologically weak. Most viewers don’t read disclaimers; they read scenes. And the scene is built to feel like a candid, authentic winter moment. That’s why the debate caught fire: people didn’t feel “informed,” they felt subtly manipulated. Not because the brand lied outright, but because the visual grammar implied something real.
If you want transparency to work, it needs to be proportionate to the claim your creative makes. When the emotional promise is “this could be you,” a whisper of disclosure can feel like an afterthought.
The real risk: not outrage, but erosion
Most brand damage doesn’t arrive as a dramatic boycott. It arrives as something quieter: a gradual loss of distinctiveness and trust.
The online reaction to SBB’s AI motif clustered around two recurring themes:
- Uncanny discomfort. Some people simply found the image “off.” AI faces and lighting can still trigger that uncanny valley response—especially in campaigns that aim for warmth and relatability.
- Generic sameness. Others found the visual bland: the kind of polished, airbrushed winter perfection you could swap into almost any category—banking, insurance, telecom, you name it.
Both reactions matter. Because if audiences perceive your brand as artificial or interchangeable, you’re no longer competing on creativity; you’re competing on credibility. And credibility, once cheapened, is expensive to rebuild.
The most damaging question people asked wasn’t “Why did you use AI?” It was: “If this is fabricated, what else is?”
That’s the trust problem in one sentence.
Why brands do this anyway: speed, scale, and cost pressure
Let’s not pretend there aren’t legitimate reasons to use AI imagery. There are. And most marketers can list them in their sleep:
- Faster iteration and testing
- Easy adaptation for multiple formats and placements
- Versioning across language regions (highly relevant in Switzerland)
- Lower production costs
- Fewer logistical dependencies (weather, travel, casting, permits)
SBB’s public rationale sits firmly in that logic: simplify workflows, reduce costs, keep a mix of real shoots and AI.
From an operational standpoint, it’s hard to argue against efficiency. But efficiency isn’t the same thing as effectiveness—and neither guarantees brand safety.
If a campaign saves money but sparks a “this feels fake” narrative, it can end up costing more in long-term brand equity than it saved in production.
The key strategic question: What does your audience expect to be real?
Different categories come with different “reality expectations.”
If you’re selling fantasy—games, entertainment, playful consumer products—audiences often welcome artificial worlds. If you’re selling trust—public services, mobility infrastructure, finance, health—audiences bring a higher sensitivity to authenticity cues.
SBB is not just another consumer brand. It’s a national institution for many people. That context amplifies the stakes. A synthetic customer in a synthetic winter scene doesn’t just feel like a creative shortcut—it can feel like a mismatch with what the brand represents: reliability, public value, real life, real travel.
This is why the “AI generated” label doesn’t settle the debate. It may answer the question “Is it AI?” but it doesn’t answer the deeper one: “Should this have been AI?”
A practical rule: Use AI where it’s more honest than a photo
Here’s a guideline that can keep you out of trouble without banning AI outright:
Use AI where it clearly communicates interpretation, not documentation.
AI works best when it doesn’t pretend to be a real moment. It shines when it behaves like what it is: a creative tool for stylization, exaggeration, or visual metaphor.
AI is usually a good fit for:
- Clearly illustrated or stylized visuals (where no one expects reality)
- Surreal, humorous, or conceptual worlds (where artificiality is part of the point)
- Backgrounds, textures, set extensions, storyboards, pre-visualization
- Rapid prototyping and internal creative exploration
AI is risky when it’s used for:
- “Happy customer” moments that imply authenticity
- Employee portraits or “real people” storytelling
- Event recaps, documentary-style claims, testimonials
- Emotional realism meant to feel candid
Mallaun’s critique lands precisely here: don’t synthesize humanity as a shortcut—especially when the ad’s job is to build emotional trust.
If you use AI, don’t let it look like an accidental compromise
Another underrated factor in the SBB discussion is craftsmanship. AI is not automatically “cheap-looking,” but it can become cheap-looking fast if it’s produced without taste, art direction, and technical expertise.
There’s a growing category of content people now call “AI slop”: overly smooth faces, generic lighting, soulless perfection, and a vibe that screams “stock image, but we made it ourselves.”
If the audience senses that your AI use is driven primarily by budget cuts rather than by a creative idea that needs AI, you risk signaling something you never intended: that the brand is cutting corners.
Even if that’s unfair, perception shapes reality in advertising.
What marketers should take away from the SBB case
This campaign is bigger than one winter motif. It’s an early, very public case study in how AI changes brand communication norms.
Three lessons stand out:
1) Authenticity is becoming a premium asset.
As perfect synthetic imagery becomes cheap and abundant, real moments become more valuable—not less. The “imperfections” of reality start functioning as proof.
2) Transparency needs to scale with emotional ambition.
If you’re using AI for a subtle background extension, a small note may be enough. If you’re using AI to simulate a human experience, you need more than a label—you need clear context, intent, and a tone that doesn’t blur the line.
3) The brand promise must match the production method.
If your message leans on trust, closeness, and everyday life, synthetic people can introduce friction. You might still use AI, but you’ll need to frame it in a way that supports the brand rather than undermining it.
In the end, the most useful creative question isn’t “Can we generate this?” It’s:
“Do we want to show a winter—or do we want to claim one?”
Because audiences can feel the difference. And once they start questioning your reality, they stop listening to your promise.
