After reading the latest lurid reports about Facebook, hearing from one of its executives always comes as quite a shock.
The company stands accused of amplifying the worst aspects of human behaviour in order to turn a profit, but its spokespeople are profoundly devoted to taking the heat out of things.
Years of controversy have made them experts at being boring.
Proving this on Thursday was Antigone Davis, the company’s global head of safety, who was appearing before the parliamentary committee looking into the government’s proposed new online safety law.
She spoke down the line from what looked like the world’s cleanest kitchen, just as she did last month to the US Senate.
I’ve interviewed Ms Davis in the past and she came across as deeply earnest and committed to her work. But despite everything I tried, I learned nothing from her.
I asked questions until the Facebook PR poked me in the back to get me to stop but came away without anything that could be described as an answer.
The members of the committee didn’t fare much better.
Earlier this week, they heard from Facebook whistleblower Frances Haugen, who has leaked documents detailing the social network’s failures on everything from eating disorders to extremism. Facebook, she said, was “making hate worse”.
The MPs’ attempts to raise these allegations were met with the deadest of dead bats. The tone was set early when committee chair Damian Collins asked about internal research conducted by Instagram suggesting that many teenage girls felt worse after using the app.
Were young people having posts about self-harm or eating disorders pushed at them by the app, Mr Collins asked.
“We have policies against that content,” Ms Davis replied.
There was a lot of talk about policies. Asked by the SNP’s Bob Nicholson why Facebook had removed certain pages promoting human trafficking, Ms Davis replied: “We take down those pages because they violated our policies.”
In fact, the leaked documents showed that Facebook only removed the pages because Apple said it would remove its app from the App Store if it didn’t.
Ms Davis did acknowledge that “our AI is not perfect” and that Facebook relies on people who “flag this content for us”, but the real decision, she insisted, was taken by Facebook, acting in accordance with its policies.
From the way Ms Davis was talking, it could almost seem as if the policies in themselves were the solution. But of course that skipped over all the important questions. Was Facebook actually able to uphold its policies? Were the policies the right ones anyway?
Sadly, we didn’t get to find out, because Ms Davis didn’t bring any new information beyond the material that Facebook puts out already, which doesn’t provide anything close to the full picture.
For instance, its transparency report trumpets the fact that its AI safety tools spot 97% of hate speech before it is flagged by a human.
What it doesn’t do is tell us how much of the total hate speech on the social network the AI finds. In her evidence on Monday, Ms Haugen claimed that number – which most people would think of as the real number – is between 3% and 5%. In other words, Facebook’s vaunted AI is missing most of the problem.
At the end of the session Mr Collins asked Ms Davis if that was correct.
She told him that hate speech only makes up 0.05% of the material on Facebook, which you may observe is an answer to a different question.
The vagueness extended to other areas. Who exactly was responsible for social problems on Facebook? “These decisions are across the company.” Are you the top person? “I am the global head of safety.” Where does the buck stop? “We’re a big company.” And so on and so on.
This puts lawmakers in a difficult position. On the one hand, they want to tell Facebook and other social networks what to do about pressing (and press-worthy) public issues. On the other hand, they don’t have the facts to specify exactly what to do.
Ms Haugen says her leaks show what is really going on at Facebook, but the company says they paint a partial picture. It is quite possible that both statements are true.
Meanwhile, according to the whistleblower, the median user of Facebook isn’t seeing its worst side, which is reserved instead for the most vulnerable. For instance, Ms Haugen claimed that someone who was newly isolated – perhaps because they were recently widowed or they’d moved to a new city – was much more likely to be presented with misinformation.
The same thing is true of abuse, which is much worse for certain groups than others. In particular, it hits people who are in the public eye. It is, you might say, a tax on prominence, especially if you are female or non-white.
These problems might be invisible to most people, but they affect us all. We don’t want vulnerable individuals twisted by misinformation. We don’t want talented individuals to shy away from prominence because they fear what might happen as a result.
But how do we measure this harm? And what is the best way to stop it, while keeping the benefits of frictionless communication?
It feels as if we have been asking these questions for a long time now. Based on Thursday’s hearing, I have the sense we will be asking them for a good while longer.