AI: Fear, Identity & Moving Forward
Insights from the intersection of research, anthropology, organizational culture, and collaborating with the world's largest tech, lifestyle and media brands.
Fear. Anxiety. Existential dread… over the past year, these words have become synonymous with AI.
AI isn't technically that NEW: the foundations of AI have been in place since the late 1950s. Yet this next generation of commercialized artificial intelligence feels so world-altering that the slate has been wiped clean. Or as Mohan Nair, a mentor and author on innovation, recently said: "There are no AI experts, just explorers."
What is it, exactly, that we are so afraid of?
Exploring new frontiers will always bring about trepidation. Some leap to fear of job loss, loss of control over technology, and apocalyptic futures. A root cause less often discussed is fear about what AI means for our sense of identity.
Look no further than the shared sense of identity and core values behind who "we" are as researchers & insights professionals. Our field is generally careful, judicious, ethical, risk-averse, and accuracy-driven. These skills have served us, and the businesses we serve, incredibly well. We are a critical part of de-risking investments and decisions, giving our colleagues clear and accurate visibility into data, informing perspectives on fast-changing landscapes, and shaping the future of products and experiences. The stakes are high for passionate, data-minded leaders and knowledge workers to get things right and do right.
AI, nascent in its current form, is not yet fully in line with those core values.
AI is not always right, hallucinates from time to time, and can be a black box.1
We don't yet have a strong command of how it really works: its limitations, implications, the biases it brings, or even how to best use/prompt it. All of which reduces our confidence.2
There has been a groundswell of "AI products" created in the last year. With a boom comes a bust: according to HBR and Gartner, many of these are predicted to fail.3
We've seen costly backlash in the media against companies that have gotten it wrong.4
Getting it wrong can cost companies millions
The fear of backlash mentioned above should not be understated, especially in the market research & insights industry. No one wants the Samsung leak to be their fault. Healthcare and fintech are bound to worry more given the sensitivity of their data. Yet even fields that are experimenting with and implementing AI more actively carry skepticism and concern. Consider recent examples in the tech, media, and entertainment sectors.
During the recent Hollywood strikes (which came with an expensive fallout), the overuse of AI/tech was a key issue.
Last week at a media industry event, an executive spoke about how the stakes were "too high" for using AI anywhere near their "crown jewels" (live TV events, award-winning TV shows).
This month, OpenAI's board drama gave us a front-row seat to fears about the ethical implications of AI, on full display.
And in the world's leading tech companies, similar internal infighting about ethics, accuracy, and commercialization is ongoing. It's not just creating alignment problems; it's also creating chaos in workplace cultures.
The commercialization of AI is bristling against our sense of identity and values.
Addressing that truth and being prepared to talk about it and navigate it as an industry is the first step in moving forward. Because of our identity, we will find ourselves threading the needle of fear and opportunity. Fear is in tension with a genuine curiosity and desire to learn and evolve. As researchers and leaders who are shaping businesses and brands, even with the skepticism we carry, we must meet the challenge of engaging our curiosity.
If we can cultivate a shared commitment to look for ways we can transform how we talk to data, new doors will open.
And yes… with change in the world inevitably comes change in us: in our identity. We must be prepared to evolve and adapt our identity to make space for what is next. But we can take human agency over this change: be mindful and intentional about how we let this new frontier shape and influence us, and, most importantly, be active in our intentions to shape and influence the technology itself.
Leaving you here with a few additional tips for framing your engagement with AI right now:
DON'T LET FEAR CLOUD CURIOSITY: More people are experimenting with and integrating AI than you realize. Even small lifts with AI (transcription, captions, text editing) can supercharge teams, take redundant, low-payoff tasks off plates, and make space for more human-powered thinking.
WORK PROACTIVELY THROUGH ORGANIZATIONAL POLITICS: Many corporate teams fear dabbling in the space (or advertising their experimentation) because they worry about triggering new legal and compliance review cycles, causing project delays, or stepping on toes. Don't let these concerns prevent you from moving forward: start setting up your partnership pathways now. Future you will be thankful.
IDENTIFY AREAS OF IMPACT: Prioritize bringing AI into the things that allow us to make better use of our time towards meaningful work. The small lifts can be just as meaningful as the big splashy and thorny ones.
WORKFLOW INTEGRATION: The AI player who delivers the best integrations into daily-use platforms will win - people don't want the risk of adopting and trialing new apps, businesses, and solutions only for them to become obsolete or absorbed.
THE VALUE PROP OF "ALL AI" IS A TOUGH SELL: People are still looking for assurances of the role of the human touch, and skepticism and verification via human review remain essential. As my colleague Richard Scionti, who leads AI integration at CMB, says, the future is all about finding "the best intersections of AI and HI (Human Intelligence)."
AN OPEN QUESTION ON BIASES – How will AI reshape and impact biases? No one is confident about what new biases AI will introduce or alter, how to best manage them, or the tradeoffs and relationships of the Human vs. AI biases. There is no easy answer, and it may be years until we see one. While we live in the grey, we need to be committed to ongoing dialogue and research in this area.
RETAIN SOME OF YOUR CRITICAL SKEPTICISM: Especially toward sales pitches and bubble-gum-and-duct-tape commercialization. As Mohan reminds us, no one is an expert in a nascent category. Don't feel guilty about your skepticism; your human intuition is an important part of shaping the future.
Want more on this topic, or to talk through use cases? I'm happy to chat and dig into where and how we are using AI in the research process right now.
MORE READING: There are many different sources on these topics, but here are a few if you’re interested in digging deeper:
AI Chatbot Hallucination (CNN) - many perspectives on this topic, but a good intro.
Human Absorbed Biases from AI (Scientific American) - a great piece as we undertake exploring the dynamic dichotomy of human:tech biases.
Keeping Your AI Projects on Track (HBR): “Most AI projects fail. Some estimates place the failure rate as high as 80%—almost double the rate of corporate IT project failures a decade ago."
Achieving Next-Level Value from AI (Forbes) "Gartner, Inc. has estimated that 85% of artificial intelligence (AI) and machine learning (ML) projects fail to produce a return for the business."
Companies Increasingly Fear Backlash Over AI Work (Wall Street Journal) “The rapidly evolving technology has opened up a nearly limitless number of use cases for businesses, but also a new set of risks—including public backlash and damage to a company’s reputation.”