I give various versions of this talk to students and other scientists. Get in touch if you’d like me to give it to your organisation – it usually gets lots of good discussions going.
The fun thing this year is that TikTok is a mainstream thing now, and Twitter is dying. Going to be an interesting year.

My name is Doug McNeall, and if you don't know me, I've worked as a climate scientist at the Met Office here in Exeter since 2008. Since 2019, I've also been a part-time lecturer here at the University, attached to the Global Systems Institute.
I’m a statistician and climate modeller. My work is in understanding and quantifying uncertainty in our climate models, how that relates to our uncertainty about the Earth system, and what that means for our understanding of future climate change and its impacts.
But that’s pretty dry, and I’m also interested in the communication of climate change, and particularly of climate science. So, I’ve been asked to give this lecture on my experiences communicating climate change via social media. I hope you’ll find it interesting, and perhaps useful.
I'm fully aware of the irony of an Xennial (that's the microgeneration between Gen X and Millennials – roughly, those born between the releases of the original Star Wars films) lecturing an audience made up mostly of Gen Z on the use of social media.
I've given versions of this lecture to my peers and scientific colleagues before, so if what I say is old news, my apologies.
What’s my message?
If I were to put this whole talk into a single statement, I'd say that:
I think it’s important for a healthy scientific culture and a healthy society that scientists talk about their science on social media.
And I think that that’s particularly important in the context of climate change. Climate change will impact every single aspect of everyone’s lives, for the rest of their lives. Uncertainties can be large, and our science advice has potentially high consequences.
The COVID-19 pandemic afforded us scientists a unique opportunity to observe how scientific information, misinformation and disinformation are disseminated, digested and acted upon in real time, and in a crisis. It's been like watching the public discourse on climate change sped up 100-fold.
It’s given us a good idea of the benefits of a well-informed public discussion on a crisis, or alternatively, it’s shown us the damage that can be caused – deliberately or otherwise – by bad information, or bad-faith actors.
I'm not saying that science has all of the answers for how to solve the climate crisis, or the COVID-19 pandemic. Far from it. But I am saying that science has a unique and valuable contribution to offer in establishing a common reality from which a society can decide how to act. And today, social media is a big part of how that conversation happens.

I’ll talk about the social media landscape, a bit of history of how it developed in a climate context.
I’ll offer a few tips for talking about climate and climate science on social media, and some of the tactics used by those who might not be operating in good faith.
I think disinformation is a huge threat, and one of the tools for countering it is teaching people how to recognise how it is spread.
Finally, I’ll talk about what I believe needs to happen next to make social media a more useful tool for discussing climate change and other societal challenges.

Talking about climate on social media is actually relatively new, compared to the length of time we’ve known about the serious consequences of climate change.
Just after I was born, in 1979, the Charney report nailed down climate sensitivity – the warming we’d expect given a doubling of CO2 in the atmosphere – to about 3 degrees C, with a range of 1.5 – 4.5 degrees C. This is surprisingly unchanged to this day – the most recent IPCC report released last year estimated the same 3 degrees C, with a slightly narrower range of 2.5 – 4 degrees C.
Mass media in the Anglosphere finally started serious coverage in 1988, when James Hansen testified to the US Senate that we'd already seen warming, and that he was 99% certain that it was caused by humans. The fact that Washington was in an extreme heatwave at the time probably helped.
Early online (pre-web) discussion of climate change tended to be between scientists and other interested parties on Usenet, an early internet discussion forum.
Coverage of climate change (or "global warming", as it was known then) increased through the nineties, as the IPCC was born and started to release regular reports, and the UN Framework Convention on Climate Change and the COP climate conferences got going.

This graph shows the volume of newspaper reporting on “climate change” or “global warming” from the Media and Climate Change Observatory at the University of Colorado.
There was a surge in media coverage around 2007, when the IPCC released its fourth assessment report and Al Gore's film An Inconvenient Truth came out.
Many of the dominant social media platforms we know today were launched around this time, or just before. But in the mid 2000s, online discussion of science was mainly on blogs, with RealClimate representing the views of scientists, and a number of popular “climate activist” and “climate denial” sites.
Many of the tactics that have become familiar in the online climate (and political) debate were developed here. For example, during the "climategate" "scandal" in November 2009 (see this spike on the graph), a huge cache of emails was stolen from servers at the prestigious Climatic Research Unit at the University of East Anglia. The emails were mined for out-of-context quotes to make the scientists look bad, and to sow doubt about the credibility of climate science. The story was picked up by the UK print media, and from there by news outlets around the world. The affair was credited with being instrumental in derailing the important Copenhagen climate conference talks in 2009 (although it's unclear what influence it really had, and there is good evidence the talks would have failed anyway). This incident was eerily similar to the Hillary Clinton email scandal in the 2016 US presidential election.

Around this time, social media platforms like Twitter and Facebook replaced the blogs and news website comment sections as the most common places to discuss climate change. Twitter was important early on, but Facebook and YouTube became increasingly popular vectors.
I got on Twitter in 2010, and started engaging with other climate scientists, and arguing with climate sceptics.
Twitter had lots of advantages for a decade – it was where the nerds hung out, but also the politicians and, usefully, the journalists. That brought opportunities for climate sceptics AND scientists, as I'll discuss later on.

What did the conversation about climate change “look” like? The Carbon Brief commissioned this awesome study in 2016, looking at the shape of the conversation on twitter. Nodes represent users, and the lines represent tweets and interactions, with darker lines representing more interactions and stronger relationships.
In this visualisation, you can start to see the beginnings of "filter bubbles", or "echo chambers", where groups with similar views start to talk more exclusively to each other. This happens in the real world too, and is often a consequence of the normal ways we interact. This effect seems to be supercharged by social media structures and algorithms though, in ways that can seriously distort the societal conversation about any contentious topic.
If you look really closely, zoom in and zoom in, you can see … my boss, who has loads more followers than me.
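If you want a feel for how this kind of picture is put together, here's a minimal sketch in Python using networkx. It is emphatically not the Carbon Brief study's methodology – just a toy illustration with made-up account names – showing how users become nodes, interaction counts become edge weights, and community detection picks out the clusters that show up as "bubbles".

```python
# Toy sketch of an interaction graph like the one in the visualisation --
# NOT the Carbon Brief study's actual method. Assumes you have a list of
# (user_a, user_b, n_interactions) tuples from some social media data dump.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical example data: who interacted with whom, and how often.
interactions = [
    ("sci_alice", "sci_bob", 12),
    ("sci_bob", "sci_carol", 8),
    ("skep_dave", "skep_erin", 15),
    ("skep_erin", "skep_frank", 9),
    ("sci_carol", "skep_dave", 1),   # a rare cross-"bubble" interaction
]

# Users become nodes; interaction counts become edge weights
# (the "darker lines" in the visualisation).
G = nx.Graph()
for a, b, n in interactions:
    G.add_edge(a, b, weight=n)

# Community detection picks out clusters of users who mostly talk to each
# other -- the "filter bubbles" / "echo chambers" visible in the figure.
communities = greedy_modularity_communities(G, weight="weight")
for i, group in enumerate(communities, start=1):
    print(f"Bubble {i}: {sorted(group)}")
```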

So, Twitter is undergoing some … changes.
Somehow, the social media platform affectionately known to its users as "the hell site" has got worse. If anyone is not aware, Twitter was bought by billionaire Elon Musk for too much money, and he's fired a lot of the people working on keeping users safe from abuse or harmful content. This will most likely degrade the user experience and put off advertisers, at least in the short term.

This is in the context of the explosive growth of TikTok, which has passed 1 billion users and seems to be where many of Generation Z are getting their climate information.
Social media is ubiquitous. It’s replaced mainstream media as the dominant way that some sets of people consume news, and therefore is perhaps the primary way that people hear about climate change these days.
Interestingly, research shows that large, traditional media outlets (like newspapers, TV networks) are still big players on social media, and that brings about opportunities too.
As each generation comes along, they’ll want their own platform where their parents don’t hang out. They’ll work out what they want from it, and how to be successful.
But the constant here is change – the advice to the next generation to come along and use social media necessarily becomes more general and vague, as the platforms and rules change under your feet.

So, given that constant change, can I offer any advice to people who would like to use social media to talk about climate change or climate science?
I’ve written this list with my colleagues in climate science in mind, but there are some constants that might be useful to anyone engaging in the climate space.
I've done this lecture for a few years now, and when I started it was mostly cute ideas for getting views on Twitter; now it's more about why we should restructure social media to be less dangerous for society.

First, know what you want to achieve. This will be different for you than for me, but your goals are just as valid.
I engage because it’s interesting, and people are interested. It’s fun, and rewarding. I get gossip. There are benefits to my career – I’m more well known than I should be. I’ve built a network of like-minded (and sometimes differently minded) colleagues outside (and even inside) my organisation. It flattens hierarchies.
For me, communicating evidence and (in a small way) helping to equip our society with the tools it needs to make decisions on climate change are some of the reasons I talk about climate. And because of those goals, there are certain things that I think about when I'm online.
But I think it's my duty as a publicly funded scientist and civil servant to share my knowledge. Good science is open and transparent. Technically, my role is to do the science that informs the UK government, but I think there is a recognition that broader communication is both useful and necessary.

As former UK government chief scientific adviser Sir Mark Walport said:
“Science isn’t finished until it’s communicated.”
It's more than PR: showing that scientists are human, with human feelings and biases, but that the *process* of science can generate reliable information, is an important part of gaining and maintaining public trust, particularly when it comes to high-consequence scientific work.

Scientists are in a privileged position, in that they consistently score among the most-trusted professions. With that trust comes an opportunity, but also a responsibility. Maintaining trust in science as a useful process is a long game, and it’s a lot easier to lose trust than build it.
Be honest. Don’t lie.

As a civil servant, I have a pretty strict set of rules when it comes to engaging online or in the media, called the civil service code. I need to be (and be seen to be) politically impartial, to act with integrity and honesty, and to be objective. Those sound pretty familiar to a scientist!
In practice, this means not criticising government policy, and staying pretty strongly focused on scientific results. The idea is that we can be “policy relevant but not policy prescriptive”. (We saw the same advocated during the pandemic by the CMO and chief scientist).
This can be frustrating, but the upside is that it allows us to draw boundaries between personal preferences and the science we communicate online. This can insulate the scientist from a lot of abuse and conflict online.
It's possible, though, that this "objectivity" criterion, which is (at least nominally) shared by major broadcast media like the BBC, has helped keep the communications environment we're working in less polluted, polarised and contentious than in other countries. That in turn might also have helped keep climate change more accepted across the political spectrum, less politically divisive, and therefore made it easier for politicians to take climate action.
One downside is that the requirement to be "objective" can directly conflict with one of the ways in which social media can be more effective – by *being* subjective: being human, having a point of view. I think there might always be a tension here.
An interesting development is that as climate change science has become more accepted as reality, what is seen as “objective” (and therefore acceptable for scientists to communicate) has changed.
When I started out, it was unacceptable to say “we need to reduce carbon emissions”. This statement was seen as overly political.
Now, the damages of climate change are here, and obvious. Policy options have narrowed, and talking about emissions reduction policy is no longer so controversial.

Further, there is more of a societal expectation that as a scientist you can and in fact *should* be an advocate. As a climate scientist, why would you try and be overly objective and detached about deaths caused by climate change, or about damage to ecosystems? Ignoring the human side of a changing climate might damage the reputation and trustworthiness of a scientist, rendering them cold and detached.
I think this is why we see climate denialists consistently doing two things. They sow doubt about science, to stop it becoming “accepted”, and they claim that some basic facts – like accepting the fact that climate change exists – are political rather than scientific statements.
One of the reasons that Greta Thunberg works as a communicator is because she has an innate *authenticity*. She is part of a generation that will be deeply impacted by climate change, through no fault of their own. She is perceived as too young to be interested in power for its own sake, and she’s a seemingly fearless communicator.
The attacks against her – she's a child, she's mentally unstable, she has no agency of her own, she's being used and misled by nefarious actors – are an implicit acknowledgement of that authenticity, and an attempt to negate it.

I think that disinformation, polarisation, echo chambering, the promotion of denial, and the attack on trust in authoritative information are key threats to a healthy communication environment, and to our society.

Without wanting to sound like a conspiracy theorist, there is good evidence that all of these things have been, and are being deliberately used by powerful interests to distort the discussion of climate change online.
I don’t think that there’s some mysterious cabal controlling everything, and these techniques don’t work if people aren’t receptive to these kind of messages anyway, but there is a lot of evidence that these are explicit strategies used by effective communicators.

I highly recommend the BBC podcast series How They Made Us Doubt Everything for a forensic look at how large oil companies used the same strategies (and in many cases, exactly the same people) as the tobacco industry to delay action on climate change for decades. That podcast is built upon a large body of work, for example by Naomi Oreskes and Erik Conway in Merchants of Doubt.
For example, after the war and into the 1950s, evidence started to gather that smoking caused poor health, and that it might even cause cancer. The tobacco industry used a "white coat strategy", which involved getting doctors to endorse its products. This morphed into hiring doctors and prestigious, seemingly credible scientists to publicly question the evidence linking smoking to cancer.
These same strategies were employed for the link between fossil fuels and climate change: emphasising and exaggerating uncertainties in the science, sowing doubt in the general population, and making people fear that their livelihoods would be threatened by the steps necessary to combat climate change.
These companies developed a playbook for mass media communications, and this was adapted and applied to social media as it took off.
Since I've been giving a version of this lecture, the COVID pandemic happened and it turns out that THE VERY SAME PEOPLE have been active again, using the very same tactics. The overlap between climate denial and COVID denial is astonishing. Usefully, this has alerted many people to the tactics that they use, and even the BBC now has a dedicated disinformation reporter, in Marianna Spring.

There are many disingenuous actors on social media, and they're using tried-and-tested techniques to spread misinformation and disinformation in order to further their agendas: troll farms and brigading, sock puppets and paid stooges, alongside techniques invented to manipulate broadcast media in the twentieth century and transferred to social media in the twenty-first.
Astroturfing, for example, is the creation of fake grass-roots publicity campaigns. There are a number of pro-fracking campaigns in the UK where it is unclear where the funding and organisation are coming from.
The difference with social media is that, working in a contested area of science, you will probably interact with them. In a previous era, a small number of people might reach a certain level of experience, do a bunch of media training, and then debate a denialist on television.
Today, a junior Covid scientist might tweet a seemingly innocuous result from a scientific paper, and be verbally abused, credibly threatened, or have their identity and address revealed online.
I don't think there is currently enough recognition of the psychological and mental health toll that can be exacted when scientists engage online in a "controversial" topic. Institutions should recognise this, and offer training, monitoring and support. There are steps you can and should take to protect your own health, though, before these resources arrive.

I used to engage with what one might call “climate deniers”, largely because I was interested in them, and what they were doing. There was an idea that although you might not be able to “win” an argument with them, a silent audience would observe and note your rational arguments. There is such a firehose of misinformation these days, that I no longer think that’s a viable strategy (though others still do).
I’ve stopped engaging in twitter debates that look like this, and that might indeed be one of the aims of the people using these strategies – to waste your time and emotional energy until you leave the playing field to them.

It became clear to me that 1) you can tell pretty quickly whether an interaction is a genuine discussion in "good faith", and 2) if you block a few people who are not engaging in good faith, your negative interactions go down very quickly.
You might get punished by the algorithm though, and have fewer people view your communication.
That might be OK, if you’re attracting fewer toxic interactions!
So, today I advise using the block function fairly liberally. They just want your attention, to distract you from what you are doing, and communicating. The vast majority of these people are not conducting a genuine exchange of ideas in good faith.

It’s pretty clear that political polarisation of climate change has been an explicit tactic, and interesting to note that it’s probably been done by both sides.
Democratic voters in the US are more likely to vote for pro-environment legislation.

If you cast the *science* of climate change as political, you engage people's political reasoning (culture, emotion, etc.), rather than their rational reasoning. You *increase* uncertainty and doubt, because politics is about discussion and bargaining, and there often isn't a "right" answer.
This politicisation doesn't always make for good policy – for example, mask wearing (and the response to the virus in general) was heavily politicised in the US, leading to some very high infection and death rates.

What to do then, knowing all of these tactics?
As a kid, I used to wonder why people went up for headers. I didn’t know much about football. There’s a big tussle, someone gets an elbow in the face, and then the ball squirts out in a random direction anyway. Later, I realised that it was necessary to deny the opposition a free header. It was important not to let them just take control.
I don’t mean that we necessarily need to engage climate sceptics (or their equivalent) in direct debate, only that we shouldn’t cede control of the space. We shouldn’t give them a free header.
People deserve access to good information, and we have a wealth of it. We should, as a matter of course, share it as widely and as engagingly as we can.

Prof. Katharine Hayhoe is a great climate communicator, and she makes the point that just talking about climate change is perhaps the most important thing you can do. And that's because you as an individual can reach audiences that others can't. We need everyone. Diversity is important. It's clear that not only the message, but the messenger is important.
Katharine, for example, is an evangelical Christian as well as a professor of Atmospheric Dynamics at Texas Tech. Her husband is a pastor. She can reach an audience of politically conservative Americans that, ironically for a group that represents so much power and mainstream culture, has received some very poor information about climate change, and what it means.
This is part of what social scientists call the "cultural cognition" model of public understanding of science.

It’s easy for me. I’m a white, European, straight, mid-career, researcher who identifies as the same gender I was assigned at birth. I’ve tacit and explicit support from my employers, and the time and space to engage without constantly looking to the next funding round. This is as close to playing the game on “easy mode” as it gets. Many climate scientists look like me. I have a duty to create space for my colleagues who might find it harder, who have been marginalised or overlooked, to engage. I’ll advocate for better support for them, and do what I can to promote their communication efforts.
What’s good?




The noticeable part of social media is the part where something you create goes viral. Everybody gets to learn a new thing. Ed’s spiral. The Reading water tweet.
Here’s a tactic. Social media hasn’t fully replaced traditional media, and traditional media outlets are still large players on social media. Disinformers often use social media to cycle through and test out many ideas, see what gets traction, and try to get their content picked up by mainstream outlets. You can use this tactic too!
For example, one of my tweet threads went micro-viral, and was picked up by the broadcast meteorologist Laura Tobin. Laura is doing great work bringing the science and impacts of climate change to a large and mainstream audience on Good Morning Britain. She and her team at ITV worked some of the thread into a series of videos that got a much larger audience than the original thread.
Just as important as going viral is the small scale work that doesn’t get famous. Building a community. Giving a journalist background. Creating space for others.

I love this.
TikTok #climatechange content
Honestly, there is so much good climate change content on TikTok that I could just spend this whole lecture playing good examples. TikTok encourages people to be themselves, to be funny, or interesting, to be creative and irreverent, to share good information rapidly and concisely and with *unbelievably* good editing.
But the algorithm (which I'll come to in a moment) pushes a lot of really bad content too. One thing I've noticed is that outright *denial* of climate change is much less of a problem on TikTok than on other platforms. It seems that the younger users on TikTok are much more engaged with climate and sustainability than older generations, and this makes a lot of sense, as they will be the most impacted.

However, there is a *lot* more "apocalyptic" content, where the impacts of climate change are exaggerated, or their timeline is compressed to the next few years. A lot of this content is clearly from well-meaning young people who are being told that climate change is disastrous and inevitable, and that they have no meaningful agency. Frankly, I find it disturbing that so many people are unnecessarily worried, or at least have no hope where there is some.

With the explosion of video came the primacy of the algorithm. TikTok is better at “surfacing content to users who don’t follow you”, because its algorithm laser-targets content to people who will engage with it.
Rabbit holes are *actively promoted by the algorithms embedded in the social media platforms*. They’re optimised for engagement, not truth, because they want to sell you more adverts. They’re frighteningly good at it.
In this impressive investigation by the Wall Street Journal, the journalists created 100 “bot” accounts, and gave them “interests” which dictated which videos they lingered over.
They discovered that the TikTok algorithm took between about 40 minutes and two hours of time on the app to discover the interests of the bots.
The algorithm starts by serving users a range of very popular videos.
The algorithm only needs dwell time – the amount of time a user lingers over a video – to work out those interests. It doesn't necessarily serve you content you *want*, only what you *watch*.
As a result, app users are pushed further and further down rabbit holes, and towards more extreme (and rarely watched) videos *by design*.
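To make that mechanism concrete, here is a heavily simplified sketch of a dwell-time-driven recommender in Python. It is emphatically not TikTok's actual algorithm – that is proprietary and vastly more complex – but it illustrates how optimising purely for watch time, with no notion of accuracy, naturally narrows what a user is shown.

```python
# A toy dwell-time recommender -- a simplified illustration of the mechanism
# described above, not TikTok's actual (proprietary) algorithm.
# The only signal used is how long the user lingered on videos they were shown.
import random
from collections import defaultdict

# A small, made-up catalogue of videos, each tagged with a topic.
VIDEOS = [
    {"id": 1, "topic": "cats"},
    {"id": 2, "topic": "climate"},
    {"id": 3, "topic": "climate_doom"},
    {"id": 4, "topic": "football"},
    {"id": 5, "topic": "climate_doom"},
]

# Inferred interest per topic, learned only from dwell time.
interest = defaultdict(float)

def record_view(video, dwell_seconds):
    """Update the user's inferred interests from how long they watched."""
    interest[video["topic"]] += dwell_seconds

def recommend(explore_prob=0.2):
    """Mostly serve the topic with the highest inferred interest; occasionally
    explore at random, which is how the system finds new rabbit holes."""
    if not interest or random.random() < explore_prob:
        return random.choice(VIDEOS)
    top_topic = max(interest, key=interest.get)
    return random.choice([v for v in VIDEOS if v["topic"] == top_topic])

# Simulate a user who lingers longest on "climate_doom" content: after a
# handful of views the feed converges on that topic, regardless of whether
# the content is accurate.
for _ in range(20):
    video = recommend()
    dwell = 30 if video["topic"] == "climate_doom" else 5
    record_view(video, dwell)

print(dict(interest))
```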
Again, I’m no social scientist, but I think there is a real risk that the ubiquity of these opaque algorithms might have a real impact on the reality that people experience.
TikTok is different from Twitter, because an estimated 90-95% of the content a user sees is chosen by the algorithm, rather than shared by the people they follow. It's still easy to generate your own filter bubble on Twitter of course, but on TikTok it seems to come built in.
Things can go really viral (ask me about this) – but it’s not always beneficial, or even healthy.
It also makes for a less consistent social media experience, and it makes putting together an organic, trusted network of information providers harder for the user. I think it might be more susceptible to disinformation, or to manipulation by those that provide that disinformation. There isn’t much research on this yet, at least in climate.
Ultimately, it gives more power to the social media provider, and less to those that have steadily built up a carefully curated profile or network.
I think we could and should advocate for more transparent algorithms, and more accountable social media providers.
My Takeaway

As an individual, you do what you can. Don’t feed the trolls. Try a lot of things, look after yourself and get a support network. Go up for the header, take the occasional elbow in the face, but you’ll need a team.
If you're more senior or experienced, or representing an organisation, you should be looking after that team. Create the space for others to engage.
And as a community we should be advocating for more transparency, and more accountability from the huge social media companies that have the power to shape our shared reality.
[Thanks for listening]