Having recently watched a BBC Horizon programme exploring Facebook from inside the organisation, what I found particularly interesting was not their assessment of the challenges they face, but the BIG issues that were touched on only minimally, or not at all, either by the programme makers or by Facebook itself.
Clearly everyone at Facebook means well; no question about that. They are all fully immersed in the idealism of the early Internet and are genuinely trying to do good, to make it easier for people to connect around the world in a positive way. There is also a new realism that has hit them: a lot of criminals and others are trying, every day and for various reasons, to subvert their platform. Some of these people are actively attacking and hacking the system, which Facebook staff are trying to combat, and we at Freetimers recognise this exact experience from the constant attempts to hack our client websites. The online world ain’t fun when you’re at the sharp end.
Facebook staff also have some recognition of the scale of the problems they face and are trying hard to keep up. Because of the immensity of those problems, they are resorting primarily to algorithmic controls, where their AI makes judgments and then automatically takes actions such as closing accounts or removing unacceptable content.
The trouble with this way of working is that the effect is entirely authoritarian. The algorithm makes a decision, and no matter how many people are affected, that decision is final. We have direct experience of this: Facebook closed one of our accounts without notice, and it proved impossible to speak to anyone there to get it reinstated. We tried everything, but after weeks of attempting to get through (and probably 30 communications), we only ever got one pat response, and could not get over that hurdle. Our account was completely legit, and we proved it over and over again.
This heavy-handedness is one consequence of their algorithmic approach to everything. But it is actually a minor issue compared to the other two I want to mention here.
The first is the unthinking imposition of Facebook’s ideas of ‘what is good’ on everyone on their platform. What people think in California, for example, is not what people think around the world and in other cultures. I may even agree with the intentions, but these ‘doing good’ measures are simply distributed worldwide, injecting one culture’s view into the centre of others. Cultural imperialism, I guess. For me, the problem is not that I disagree with much of the ‘doing good’ they want to do; it’s that no one seems to be thinking about the implications of this direct input into people’s lives in other cultures, and the massive cultural disruptions that could result, and I reckon are resulting, from it.
Now Facebook is not the first example of mindless, irresponsible cultural imperialism; it has been happening, well, forever. What is different with Facebook is its scale and its ability to inject its ideas directly into people’s personal lives, bypassing any and all social, legal or cultural controls they might otherwise be living within in their daily lives. Sounds to me like the definition of subversion!
The second BIG issue I want to mention here, and the one that probably concerns me the most, is how the AI algorithms work to direct content to people, to specific individuals uniquely, like you and me. In very basic terms, the algorithm looks at everything a person does: everything they say, everyone they talk to and who talks with them, every news item or story they read, any political discussions they have, and so on. It then analyses this data and directs other content of a similar nature to the user: similar people, similar news, groups with similar viewpoints, ads targeted at their desires and needs, and so on. This is all done with the best intentions: make it so users like what they see and enjoy their experience, so they use the platform more, encourage others to use it, etc. All completely understandable and, from a business point of view, right.
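To make the mechanism concrete, the selection step can be sketched as a simple similarity-based ranker. To be clear, everything below (topic names, weights, scoring) is my own illustrative assumption, not Facebook’s actual system:

```python
from collections import Counter
import math

def interest_profile(engaged_topics):
    """Crude interest profile: topic -> how often the user engaged with it."""
    return Counter(engaged_topics)

def cosine(a, b):
    """Cosine similarity between two sparse topic-weight vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def rank_feed(profile, candidate_posts, k=3):
    """Show the user the k posts most similar to what they already engage with."""
    return sorted(candidate_posts,
                  key=lambda post: cosine(profile, post["topics"]),
                  reverse=True)[:k]

# A user who mostly engages with one viewpoint...
profile = interest_profile(["politics_a", "politics_a", "politics_a", "sport"])
posts = [
    {"id": "more_politics_a", "topics": {"politics_a": 1.0}},
    {"id": "opposing_view",   "topics": {"politics_b": 1.0}},
    {"id": "neutral_news",    "topics": {"sport": 0.5, "politics_b": 0.5}},
]
# ...is shown more of the same; the opposing view ranks last.
feed = rank_feed(profile, posts, k=2)
```

The key point the sketch illustrates is that the ranking criterion is similarity to past behaviour, so content that disagrees with the user scores lowest by construction.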
Now take a step back and look at exactly what the cultural and systemic implications are for users. What does the algorithm do? It reinforces. It takes what a person does, reflects it back at them and gives them more of the same. It also does this to the exclusion of non-reinforcing content. A Facebook user will, generally, see a reflection of their own views coming back at them, reinforcing and, in doing so, legitimising them.
It’s like being put inside a bubble of your own making, and when you look at the inside of the bubble, you see your own reflection. This is exactly the opposite of what life and the real world do, flying directly in the face of millions of years of evolution and the accumulation of practical experience and knowledge by people and cultures coming up, every day, against a reality they have little or no control over and which will often contradict what they previously thought. Science is an example of this: you make a theory, you test that theory with experiments, and if reality (the experimental results) contradicts the theory, the theory is wrong. Human cultures have gone through the same process, sifting through what works and what doesn’t, and over millennia building up knowledge that is practical and, to a great extent, true.
With Facebook, this exposure to reality is diminished because you are living (while using Facebook) within a bubble of your own making, and this happens in an almost subliminal way, so most users will not be aware of it. Facebook’s popularity also means that many users spend much of their social lives within the platform, and Facebook is doing everything it can to increase usage. With users receiving back positive reinforcement of their own thinking, a positive feedback loop is produced. I see it as ever-decreasing circles, where the feedback loop increasingly sanitises a user’s world-view down to a smaller and smaller bubble, shielding them from experiences which contradict that view.
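The ever-decreasing-circles dynamic can be shown with a toy simulation. This is entirely my own construction under stated assumptions (the feed mirrors the profile, and users engage more readily with agreeing content), not a model of any real platform:

```python
def simulate_bubble(rounds=10):
    """Toy positive-feedback loop: each round the feed's composition tracks
    the user's profile, engagement reinforces the profile, and the share of
    contrary content the user sees shrinks."""
    p_own_view = 0.55                      # assumed mild initial leaning
    engage_same, engage_other = 0.9, 0.2   # assumed engagement rates
    exposure_to_other = []
    for _ in range(rounds):
        # Engagement signal from each side, weighted by how much of it is shown.
        signal_same = p_own_view * engage_same
        signal_other = (1 - p_own_view) * engage_other
        # The profile is updated from engagement, then renormalised.
        p_own_view = signal_same / (signal_same + signal_other)
        exposure_to_other.append(round(1 - p_own_view, 3))
    return exposure_to_other

exposure = simulate_bubble(rounds=5)
```

Even with only a mild initial leaning (55/45), the share of contrary content collapses towards zero within a few rounds, which is exactly the shrinking-bubble effect described above.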
This is an extremely dangerous situation, and it would seem that very few people have any idea it is going on. Talk about subverting democracy! Democracy requires ‘an informed public’ to work. What Facebook’s environment is creating is the opposite: one where alternative viewpoints are limited and an individual’s own perspective rules, regardless of what is going on in the outside world.
Now you can say ‘this is just a theory’. The trouble with social and cultural systems is that you can’t run controlled experiments, so providing ‘proof’ is not easy. One has to look at trends, do research on representative population samples, and so on. But let’s look at what we can see happening, and, in part, at what Facebook itself recognises as problem areas.
What we can see is that social media is used by terrorist and extremist organisations for recruiting and indoctrinating, and that this has increased hugely over the last 15+ years. We also see abhorrent and aberrant behaviours like paedophilia hugely on the increase. And across the western world, particularly in the USA, we see a massive polarisation of world views: new left versus new right, climate change acceptors versus climate change deniers, the deniers flying in the face of a near-complete consensus of science and objective measurement. How can this be? Because people’s views are being reinforced, not challenged by confrontation with a reality that conflicts with them. When I speak to people on both sides, I see that each operates within a very separate community. (It reminds me of how cults tend to cut people off from anyone ‘outside’, so the cult’s ideology cannot be challenged.)
In this article I am pointing out that there are serious, fundamental issues that appear to be created by these reinforcing algorithms, which basically tell people what they want to hear, not what they don’t, limiting their exposure to uncomfortable facts and contradictory viewpoints. Any psychologist will know that people are attracted to positive reinforcement, and that the cognitive dissonance caused by confrontation with contradictory viewpoints and realities will be avoided if possible. So psychologically, too, there is science supporting the view that we have a problem.
What are the solutions? Ban reinforcing algorithms? Maybe. Certainly, if we carry on being blindly pulled down into ever-decreasing self-reinforcing bubbles, I think we will be in a situation that has never existed before in human evolution, one with the potential to destroy democracy and, maybe, us.
Forgive them, Lord, for they know not what they do…