It’s almost becoming a cliché, but it seems you need to start any post on ChatGPT, or any similar A.I. (Artificial Intelligence) tool, by sharing what it suggested you should say!
I’ve not done that, although there is one part of this blog that has been informed by using the tool – more on that below.
I’m not going to lie – I’m a little freaked out when thinking about what new A.I. tools will mean for us. But as David Karpf said in this really helpful seminar on the implications of ChatGPT for political campaigning, this is potentially one of the most game-changing technologies we’ve seen in campaigning in the last decade, so it’s not something you can simply ignore.
So what does it mean for campaigners?
Here are 5 quick thoughts from a brief exploration of ChatGPT:
- It’s not going to take our jobs – for now. I love this article from Future Advocacy, which looks at how ChatGPT performs when presented with the regular activities that come across the desk of a campaigner.
It’s quite clear that the tool currently isn’t able to replicate the insight and knowledge that come from a curiosity about how change happens – the key to being a successful campaigner.
For example, it’s not able to pick up on the nuance and subtlety that’s often needed in developing campaign strategies. And because it works from the information it’s been fed, it can’t come up with innovative or novel approaches – ask it to develop a campaign strategy and it’ll share a range of ideas you’d probably have thought of yourself, although it’ll do that in seconds, not hours.
And when it comes to messaging, it’s rather good at producing clichés – but those can quickly signal a lack of the authenticity that’s often needed for communications to cut through.
It has some clear limitations – for now.
- But it could certainly help in automating some processes – there’s probably a whole set of regular tasks where it could be helpful for campaigners.
Think about the challenge some MPs have with campaigners sending them the same pro-forma campaign email – could ChatGPT help take information from a constituent and then produce a unique email? (See this from Rally, who tried just that.)
Or the optimisation of adverts on social media platforms – it’s easy to see how ChatGPT could produce variations of advert copy to accelerate the optimisation process.
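To make the first of those automation ideas concrete, here’s a minimal sketch of how structured constituent details could be turned into a unique drafting prompt for a language model, rather than sending an MP identical pro-forma text. All the field names and the example details are illustrative assumptions, not a real campaign schema:

```python
# Hypothetical sketch: build a personalised drafting prompt from a
# constituent's details. Field names ("name", "constituency", "reason")
# are assumptions for illustration only.

def build_reply_prompt(constituent: dict, campaign_issue: str) -> str:
    """Assemble a unique prompt from one constituent's information."""
    return (
        f"Draft a short, polite email to the MP for {constituent['constituency']} "
        f"about {campaign_issue}. Write it in the voice of {constituent['name']}, "
        f"mentioning their personal reason for caring: {constituent['reason']}. "
        "Avoid boilerplate campaign language so the email reads as authentic."
    )

prompt = build_reply_prompt(
    {"name": "Sam", "constituency": "Anytown North",
     "reason": "their local library is threatened with closure"},
    "cuts to library funding",
)
print(prompt)

# The prompt would then be sent to a chat-completion endpoint
# (assuming the openai client library and an API key), e.g.:
#   client.chat.completions.create(
#       model="gpt-4o-mini",
#       messages=[{"role": "user", "content": prompt}])
```

Because each prompt carries a different constituent’s details and reasons, each generated email should come out differently – which is exactly what makes this approach more useful than a pro-forma template.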
Or could ChatGPT be used to help filter and reply to survey submissions, applications for volunteer roles, or grant applications?
- And that raises some important discussions about ethics that need to happen – this is a whole post in itself – about the inherent bias in the tool, the transparency of its outputs, and the motives of those behind ChatGPT. Practically, this conversation is already emerging in the publishing industry, where publishers are starting to be clear about where they will and won’t use ChatGPT.
Campaigners are also going to need to be clear about how they’re using ChatGPT and be transparent about that with those they’re sharing the outputs with. As I said above, this article is all my own work, but I did ask ChatGPT for some help with the title.
- This is where it’s probably most useful for now – as a starting point for research or ideas. It can be great if you’re stuck and looking for inspiration – brainstorming titles for a blog post or an email subject line, providing the initial outline of what you might include in a briefing note, summarising an existing policy report or a debate in Parliament, or helping you organise a complex data set. The outputs won’t be perfect, and they certainly need a human to look them over, but it’s a powerful way to get started.
- And it’s certainly going to fuel misinformation and disinformation – everyone has access to these A.I. tools, and it’s almost certainly the case that they’re already being used to spread disinformation – see this for some examples of how it’s already happening in the US. If this is what the tools can generate after just six months, imagine what they’ll produce as they become ‘smarter’. For example, A.I. can already take audio of someone speaking and turn it into a speech generator you can program – so you can imagine the rise of deep fakes of politicians supposedly saying things they’ve never said.
That’s going to raise some important challenges for campaigners that we can’t just ignore – about how you deal with the effects of disinformation campaigns, but also how you encourage critical thinking and support activities that help counter polarisation in society.