Google Gemini ad controversy: Where should we draw the line...
After widespread backlash, Google pulled its “Dear Sydney” ad from its Olympics coverage. The ad promoted Gemini, the company’s generative AI chatbot formerly known as Bard, and featured a father using the tool to help his daughter, a fan of United States Olympic track and field athlete Sydney McLaughlin-Levrone, write a fan letter to her hero. The controversy raises key questions about the preservation of human skills and the ethical and social implications of integrating generative AI tools into everyday tasks: where should the line be drawn between AI and human involvement in content creation, and is such a dividing line necessary at all?
Impact on Creativity and Communication
The advertisement sparked widespread backlash online over the growing role of generative AI tools and their impact on human creativity, productivity, and communication. Critics argue that relying on AI for tasks traditionally done by humans undermines the value of human effort and originality, leading to a future where machine-generated content overshadows human output.
Integration of AI in Daily Activities
AI tools are now integrated into almost every aspect of our daily activities, from entertainment to financial services. Generative AI appears to have become more contextually aware and anthropomorphic, meaning its responses and behaviour are more human-like. As a result, many people struggle to strike a balance in how they use these tools.
Challenges and Considerations
On one hand, given enough human oversight, advanced models such as ChatGPT and Gemini can deliver cohesive, relevant responses. On the other, AI-generated content lacks a unique, human touch. To better understand the implications of AI-generated content for human communication, and the issues that stem from it, it is important to adopt a balanced approach that avoids both uncritical optimism and uncritical pessimism.
The Elaboration Likelihood Model of Persuasion
The elaboration likelihood model of persuasion suggests there are two routes to persuasion: the central route, in which a recipient carefully evaluates the substance of a message, and the peripheral route, in which judgments rest on surface cues such as style, length, or the source’s appeal. In the context of AI-generated content, there is a risk that both creators and recipients will increasingly rely on the peripheral route. This could lead to surface-level engagement without deeper consideration, undermining the quality of communication and human connection.
Future Implications and Recommendations
As hiring managers receive an increasing number of AI-generated applications, they are finding it difficult to uncover the true capabilities and motivations of candidates, resulting in less-informed hiring decisions. Our collective line of inquiry needs to shift towards exploring a state of interdependence, where society can maximize the benefits of AI tools while maintaining human autonomy and creativity. Achieving this balance is challenging and begins with education that emphasizes foundational human capabilities such as writing, reading, and critical thinking.
Clarifying the limits of AI integration is equally important. This may involve avoiding AI in personal communication while accepting its role in organizational public communication, such as industry reports, where AI can enhance readability and quality. The decisions society makes now will have lasting consequences, and fellow researchers need to deepen the exploration of the interdependence between humans and AI.