The use of AI tools is increasingly influencing how humans interact. These tools are commonly used for tasks like checking grammar, copy-editing or translating, amongst many others.
How do they fit within a facilitation model focused on learning from differences and creating understanding through authentic engagement and empathy?
How can AI be used responsibly to promote dialogue?
Despite its limitations, AI can be a tool for facilitation - amongst other tools - if used responsibly and with a purpose. For example, you can use AI tools to:
- Prepare for a session
- Provide additional ideas
- Provide additional discussion frames and angles
- Support the various language abilities in a group
When preparing for a session, AI tools can be great for ideation: for example, you can gather ideas for ice-breakers, diverse discussion frames, or new perspectives.
The use of AI tools such as ChatGPT during sessions is less recommended, although they can serve as a source of additional ideas during active facilitation. It is important to distinguish between an idea and an intervention: AI tools can suggest ideas, but they should not be relied upon to deliver direct facilitation interventions during a session.
For instance, AI-generated summaries of discussions miss the multiple layers of human communication that are essential to capture. If you use such summaries without applying your active listening skills, they reflect only the technical content of what was said, missing the deeper, core elements of human communication.
What to avoid when using AI?
- Trusting the content blindly
- Relying solely on AI to lead meaningful human communication
- Inserting AI-suggested interventions into facilitation without assessment and vetting
- Reading AI suggestions aloud without your authentic facilitator voice
Is AI neutral or multipartial?
It is paramount to remember that technology is not a neutral tool - and neither is AI. AI has a background and biases - just like the humans who created it - but it lacks the self-awareness to recognise and work on those biases.
AI also does not have all the ‘facts’ correct - and the response a tool like ChatGPT gives you may differ from what another user receives for the same prompt. For example, on the eve of the US presidential election in November 2024, ChatGPT described a close race between Biden and Trump - even though Joe Biden had withdrawn from the race and Kamala Harris was the Democratic candidate - getting a widely known fact about the candidates clearly wrong.
In short, AI cannot be considered neutral or multipartial. When using it within a facilitation model that rests on these two premises, it is crucial to keep this limitation in mind.