When “Ethical AI” Isn’t Actually Ethical
Last spring, Adobe made headlines for all the wrong reasons. The company that had positioned its Firefly AI tool as the “ethical alternative” to competitors like Midjourney was caught red-handed: some of the images used to train Firefly had been generated with Midjourney, the very company Adobe had criticized for using questionable training data.
The irony was thick. Adobe had built its entire marketing strategy around being the responsible choice for enterprise customers who were “very concerned about using generative AI without understanding how it was trained.” Yet behind the scenes, the company was cutting the same corners it publicly condemned. The question isn’t whether consumers will find out how we’re using AI: it’s when, and whether we’ll be ready for that conversation.
As marketers, we’re caught in a perfect storm. Consumer skepticism toward AI is high, regulatory scrutiny is increasing, and our industry’s credibility is already fragile after years of data privacy scandals and “black box” advertising algorithms. The numbers tell the story: 86% of Americans say transparency from businesses is more important than ever before, according to Sprout Social research. Meanwhile, 81% of consumers say they need to trust a brand before they’ll buy from it, per the Edelman Trust Barometer. We’re asking consumers to trust us with AI-generated content at precisely the moment when trust has become our scarcest resource.
This matters especially for brands targeting the New American Middle, a pragmatic subset of American consumers who value authenticity. They’re not anti-technology, but they are anti-manipulation. And they have long memories for brands that burn them.