3 ethical AI questions every brand leader should be asking

Six years ago, the commercial production process for Fortune 500 companies, tech innovators, and global giants meant six-figure budgets and months of research, scripting, and voice actor castings. Every campaign was a marathon of design thinking and strategic storytelling. Today, with the help of AI tools, those same steps can unfold in a fraction of the time and at a quarter of the cost. For marketing and communications leaders, the landscape has shifted drastically, seemingly overnight.

The most innovative brand leaders have always thrived on speed. What kept them ahead of the curve was their ability to stay ahead of the story and see around corners before anyone else could. That has always mattered, but the velocity at which ideas now move from conception to execution is different, and alarming. Every week seems to introduce a new AI tool that promises to do things smarter, faster, and better for half the price.

The pressure to adopt or be left behind is palpable. According to Marketing Week’s 2025 Language of Effectiveness survey, 57.5% of marketers currently use AI to generate campaign content and creative ideas, and in a separate Adweek survey, 85% of marketers said they feel pressure to keep up with the latest tools. The question that keeps arising for many leaders isn’t what’s next, but at what cost?

ETHICAL INTELLIGENCE: A BRAND DIFFERENTIATOR

Debates about AI tend to run to extremes, casting it as either a magic wand or an existential threat. What’s missing from that conversation is the middle ground: a place where brand leaders can lean into true stewardship, and where human values and intuition can meet machine precision. It’s the space where empathy meets foresight.

The future of influence won’t be determined by who adopts the next big tool first, but by who uses it responsibly. Ethical intelligence is the muscle every leader needs to strengthen to discern which AI tools to trust, and how best to use them. When you rely on a chatbot or content platform, you’re not just trusting its outputs; you’re trusting its creators’ ethics, awareness, and intentions. Leaders in this new world of storytelling understand the cost, and therefore ask the harder questions: Who does this tool serve? And who could it harm?

To build ethical intelligence in storytelling and content creation, brand leaders should anchor their choices by asking three questions: 

1. Empathy: Have we considered how technology impacts the communities it touches?

Large language models still struggle to detect the cultural nuances that build audience trust. This often shows up in subtle ways, such as failing to capitalize “Black” and “Brown” when referring to ethnic communities, a detail that carries deep significance. At my agency, for example, we refrain from using “chief” for executive roles or “pipeline” to describe processes, out of respect for Indigenous communities. Language evolves daily, and the nuance of storytelling can’t be replaced by technology. The more we automate narratives, the greater the risk of eroding the human nuance that builds trust with audiences and consumers. Instead, look to culturally attuned tools created or informed by the audiences you speak to, such as Aisha, an AI-powered guide informed by the Black experience.

2. Transparency: Are we being clear about how and why AI is shaping our stories?

Consider recent headlines about Sora, OpenAI’s video-generation app that puts deepfake capabilities into users’ hands. A product like this tells us that authenticity and provenance can no longer be taken for granted. I’ve witnessed these risks firsthand: my son created an AI-produced video of me getting my driver’s license, a milestone that never actually happened. Curious, I posted it to my Instagram Close Friends list to see if anyone could spot the fake. No one did. Instead, my DMs filled with congratulatory messages.

While this example may seem harmless, the broader consequences can be far more serious. In the wrong or ill-informed hands, AI-generated content can perpetuate inequity and racial stereotypes if left unchecked. Take the case of Liv, an AI-powered “digital influencer.” Marketed as a breakthrough in representation, Liv was created by an all-white, all-male development team to personify a Black, queer woman. Lacking authentic oversight, the bot inevitably fell into harmful caricatures reminiscent of the “Mammy” stereotype from early American media.

As scholar and author Ruha Benjamin observed in her book Race After Technology: Abolitionist Tools for the New Jim Code, “Technology is not creating the problems. It is reflecting, amplifying, and often hiding preexisting forms of inequality and hierarchy.” Liv became a case study in the urgent need for accountability and diverse perspectives in the development and deployment of AI-driven narratives.

3. Equity: Are we creating in ways that protect human dignity over data dominance?

It’s worth asking what this constant reliance on technology is doing to our minds. People are offloading so much of their thinking that their critical thinking skills are eroding in ways that don’t bounce back, notes X. Eyeé, AI expert and CEO of the consultancy Malo Santo.

As AI-generated content becomes more advanced, many leaders are using it to expedite proposals, campaigns, and creative productions. When it comes to data, the prevailing direction has been volume. Yet some organizations are taking the opposite stance, embedding clauses in their contracts to restrict AI use, not because they reject efficiency, but to signal a pillar of their values: speed should never come at the expense of authenticity.

In the future, transparency will be at the forefront of the most innovative companies. Where AI already plays a role in your workflows, be upfront about it with your team, clients, stakeholders, and audience. The next generation of brand leadership will be shaped by those who prioritize ethics and integrity in every decision about how AI is used, and who set a new standard for responsible innovation.

Rakia Reynolds is a partner at Actum and founder/executive officer at Skai Blue Media.

Source: https://www.fastcompany.com/91467864/3-ethical-ai-questions-every-brand-leader-should-be-asking



