Social Media Are Hacking Our Brains: How They Control Billions of People. THE ALGORITHM. Episode 1


This documentary was made for informational and educational purposes. Its creators did not seek to defame the honor, dignity, or business reputation of the persons mentioned in it. Where is progress taking us? For a long time it was believed that it could bring us to one of two mutually exclusive extremes. Utopia... or dystopia?

But today it is obvious that these options are two sides of the same coin. Technologies that make life more comfortable easily blend with Big Brother’s control. And the opportunity to connect instantly with billions of people somehow leads to depressive cyberpunk.

But if you think about it, every technological breakthrough in history has concealed some kind of curse. And it’s not surprising — given that the fruits of progress are enjoyed by humans — who have not changed much over the past hundreds of thousands of years. Yes, we have surrounded ourselves with futuristic toys, but we use them to play games from the Stone Age, experiencing the same old primitive needs. With the help of technology, we subjugate nature and overcome the obstacles that it puts in our way.

But at the same time, we fail to notice the new digital shackles that fetter us. The Algorithm. Who Controls You? Every day, for more than two billion people, the Internet starts with this page: the Facebook News Feed. Like any other social network's feed, it shows updates from friends, videos, and articles. But that's not the whole story.

Fedor Timofeev, host A machine-learning algorithm is responsible for generating the feed. It predicts what each specific person is likely to be interested in. The predictions are based on everything the user subscribes to. The feed is influenced by the posts the user interacts with most often: by his likes, reactions, and comments.

Or by something he merely lingered on, if it was a photo or video. Yes, the algorithm registers even that. It also considers which friends are tagged in the post, how recent it is, and how likely the user is to share it with someone. Based on this mathematical analysis, each publication is assigned a score; the higher it is, the more likely the post will interest the user, which means it should be added to the feed.
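
The film never shows Facebook's real formula, but the ranking step just described can be sketched as a weighted score per post, decayed by recency. Here is a toy Python illustration; the feature names and weights are invented for the sketch, not taken from Facebook:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float   # how often the user interacts with this author, 0..1
    predicted_click: float   # model's estimate that the user opens the post, 0..1
    predicted_share: float   # model's estimate that the user shares it, 0..1
    friend_tagged: bool      # is one of the user's friends tagged in it?
    age_hours: float         # time since publication

def score(post: Post) -> float:
    """Toy relevance score: weighted engagement signals, decayed by recency.
    The weights are invented for illustration only."""
    s = (3.0 * post.author_affinity
         + 2.0 * post.predicted_click
         + 1.5 * post.predicted_share
         + 0.5 * post.friend_tagged)
    # fresher posts rank higher: the score roughly halves every 24 hours
    return s * 0.5 ** (post.age_hours / 24.0)

candidates = [
    Post(0.9, 0.6, 0.1, True, 2.0),    # close friend, fresh post
    Post(0.1, 0.8, 0.4, False, 30.0),  # viral stranger, a day old
]
# the feed is simply the candidates sorted by descending score
feed = sorted(candidates, key=score, reverse=True)
```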

Marc Faddoul, AI researcher at UC Berkeley, former Facebook AI employee Say you have an ex that you're connected to on Instagram or Facebook. And you tend to want to follow whatever they're doing. Pretty soon it knows: "Oh, well, every time there is content from that person you click on it, so I'll just push a notification whenever that person puts something out." You didn't need to tell the algorithm that they were your ex or someone you knew. The algorithm just exploits your bias. Meanwhile, the algorithm also filters out so-called clickbait and blatant disinformation. Well, at least it tries to.

And at the final stage, it adjusts the mix of content, so that neither images nor text noticeably dominate. This looks like the personalized model the Internet pioneers dreamed of: it lets you get what you need most out of the whole variety of content. But in whose interests does this sorting work, if users themselves do not pay for it? Well, as they like to say in Silicon Valley, if a service doesn't ask for your money, it is most likely trading in you. Mark Weinstein, social media and privacy expert Facebook's members think that they're the customers, but they're not. The advertisers, the marketers, the governments, the politicians — those are Facebook's customers.

The members are the product being sold: Facebook sells access to those members. And Facebook's whole motivation for being in business is to manipulate people's emotions and purchase decisions to keep them engaged, so that they do things that make money for Facebook.

Analysts estimate that by 2024 the online advertising market will surpass the half-trillion-dollar mark. In the US, the largest market, more than half of all advertising revenue goes to just two companies: Google and Facebook. How did they become the best advertising platforms on Earth? So far, only these two companies can guarantee advertisers that their ads will be shown to as many people as possible, and that those people will be interested in them. Robert Epstein, senior research psychologist, Ph.D. from Harvard University You know, if you can push your company up one little notch in Google Search results, that could be worth a lot more money. 50% of all clicks go to the top two items. 95% of all clicks go to the first page of search results. Mark Bauerlein, professor of English at Emory University If you told an editor of Wired magazine in 1994 that there would be one search engine that would dominate the entire planet...

He would say: "What are you talking about?! We've got 15... 20 search engines!" And now? Google! It's all Google! The thing is, the very same criteria by which your feed is generated also define your behavioral model. That is, there are indirect attributes from which a comprehensive profile of you can be compiled. It will include your preferences and interests, your temperament, your psychological state, your sexual orientation, and even phobias and neuroses you may not even suspect you have.

Marc Faddoul You only need a couple of dozen of these Facebook pages to guess personality traits better than a co-worker could. With about 100 pages you reach the level of a family member, and with 200-300 pages, the level of a spouse. However, contrary to stereotype, social media do not sell this data. They sell a guarantee that you will behave the way the customer wants: you will click on the link, watch the video, or sign the petition.
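
The research this claim echoes worked roughly like this: each user is represented as a binary vector of page likes, and a plain linear model is fitted per personality trait. A minimal sketch on synthetic data; everything here is invented so the example runs standalone:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# synthetic data: 1,000 users x 500 pages, 1 = the user liked that page
likes = rng.integers(0, 2, size=(1000, 500)).astype(float)
# invented ground truth: a trait score (say, extraversion) driven by
# which of the first 20 pages a user likes, plus noise
trait = likes[:, :20].sum(axis=1) + rng.normal(0, 1.0, size=1000)

model = Ridge(alpha=1.0).fit(likes[:800], trait[:800])  # train on 800 users
predicted = model.predict(likes[800:])                  # score the remaining 200
print("correlation:", np.corrcoef(predicted, trait[800:])[0, 1])
```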

The algorithm will find a consumer for even the most inconspicuous content, as long as its promotion has been paid for. Advertising and promoted content are at the heart of everything social media do: their main source of income and their main motivator. A social network doesn't care whether you want to gain intellectual benefit from it, educate yourself, or enrich your inner world.

Their main goal is to make you scroll as long as possible, so that you constantly come back and invite your friends. In that respect, it is quite similar to the way pyramid schemes operate. Sergey Grebennikov, Deputy Director of RAEC Companies just don't care. Their main goal is to make money on users.

Do you think the owners of social networks try to make me, the user, comfortable? No, they only think about new ways to make money off me. I don't know why everyone is embarrassed to talk about it. All companies are created to make a profit. This is business. And that's why they treat us as prey. We're just wallets to them. That's it.

Robert Epstein Technology that we think is free, free, free... Free search engines, free email, and so on. These are just surveillance platforms. From a business perspective, they're just surveillance tools, that's it. And to make money on us, the algorithm must, by all means, induce dependency in us, a dependency comparable to drug addiction. Because, in essence, our brains are being hacked.

Likes existed on the Internet before Facebook, just in a different form. The once-hip social news aggregator Digg is considered the first website to use them. In English, "dig" (with only one G) means both "to dig" and, in slang, "to like". So users could click the button of the same name under a publication they liked and, at the same time, "dig up", so to speak, interesting content in the feed. Today Reddit and its numerous clones, including the Russian-language Pikabu, work on this principle. In 2005, the video portal Vimeo, inspired by Digg's example, introduced a button with a heart.

Two years later, it migrated to the old-school social network FriendFeed, and from there to Facebook — as a thumbs-up icon. On top of that, Facebook founder Mark Zuckerberg bought FriendFeed to get rid of a competitor. The developers who implemented likes sincerely believed they were filling the cold digital world with kindness and warmth. As a result, they got humanity hooked on the drug of public approval. Daria Radchenko, digital anthropology expert at RANEPA Likes aren't about the things we enjoy. They give a clear signal: "I see you.

You're just like us. You are one of us, and you do everything right. John is a great guy — so be like John." It is very important for us to get the approval of others: human beings are social animals. Areg Mkrtchyan, clinical psychologist, Ph.D. in Psychology All these likes, all these formal emojis and the like — they simply block the development of our emotional sphere. And just like human intelligence, our emotional sphere needs to be developed. So online, my range of emotional reactions seems to — well, how shall I put it? — shrink and flatten.

I have to either like something or ignore it. And that's it. I just have nothing else. Dopamine is at the heart of social media engagement mechanisms. It is a neurotransmitter, often called the pleasure hormone, with which the brain rewards a person for doing things that are beneficial from an evolutionary point of view. Exercise, sex, delicious food — for all of this we are rewarded with a dose of pleasant sensations, so that we repeat these activities and make them a habit. Successful social interactions are guaranteed to produce dopamine: when you make someone laugh during a conversation, receive a message from a loved one, or hear praise from your boss.

Well, social services are, in fact, an unlimited resource for such self-affirmation. Every like, notification, and mention you receive there can trigger a release of dopamine. Sergey Grebennikov I deleted the Instagram icon on one of my phones; in fact, it was my main phone. Then I went through withdrawal for several days, reaching for that Instagram icon. I wanted to check how many people had liked my posts and whether there was something new. A week later, the withdrawal subsided. After that, I realized I actually had some free time — two hours a day to read a book.

So I can say with confidence that addiction to social networks is real. However, to maintain this addiction, you need to balance positive and negative stimuli. In the 1930s, the American psychologist Burrhus Frederic Skinner conducted experiments on rats. He found that they start pressing a lever more actively when they don't know for sure whether the press will lead to food or to punishment (in this case, a mild electric shock). People are affected by the same element of unpredictability and sense of luck. These are the perfect dopamine triggers on which all gambling and casinos are built.

When winning is possible but not guaranteed, addiction emerges — not from the reward itself, but from its anticipation. Today it is enough to reach for your smartphone to get a dose of pleasure from a social contact. Our brain realizes how easy this has become and turns the behavior into a habit — to the delight of the web designers and programmers who aim for exactly this effect. Mark Bauerlein Silicon Valley, in designing video games and websites, hires psychologists who are experts in what's called persuasive design. What persuasive design means is: "How do we create an object that will encourage addiction to it?" They want kids to become addicts. That's how they make their money.
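
Computational neuroscience formalizes this anticipation effect as a reward prediction error: dopamine is thought to track the gap between expected and received reward, so intermittent rewards stay exciting while guaranteed ones fade. A toy simulation using the textbook Rescorla-Wagner update, with invented parameters:

```python
import random

random.seed(1)

def mean_surprise(p_reward: float, steps: int = 10_000, lr: float = 0.1) -> float:
    """Track the learned value V of an action rewarded with probability
    p_reward. The per-step surprise |r - V| is what dopamine is thought to
    signal: it fades for guaranteed rewards but persists for gambles."""
    v, total = 0.0, 0.0
    for _ in range(steps):
        r = 1.0 if random.random() < p_reward else 0.0
        total += abs(r - v)
        v += lr * (r - v)  # Rescorla-Wagner / temporal-difference update
    return total / steps

print("always rewarded:", round(mean_surprise(1.0), 3))  # ~0: no surprise left
print("50/50 rewarded: ", round(mean_surprise(0.5), 3))  # ~0.5: surprise persists
```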

There are special techniques for deepening this addiction. For example, right after you register on a social network, you are notified only for important reasons: when someone likes your post, adds you as a friend, or comments on it — thus developing a conditioned reflex in you. And so you learn to always expect something interesting behind the red circle with the number.

But later this list expands, and at some point you start receiving announcements of events, invitations to join groups, and notifications about new Stories. And if you are suddenly tagged in a photo, they may even send you an email. The actual photo, of course, will not be included: to see it, you first need to go to your account — and there, in the corner, the next dose of dopamine will be burning red. Moreover, to awaken that rat-like excitement in you, Instagram makes the delivery of likes less predictable. Sometimes it deliberately delays them, so at first you are a little upset that your post isn't popular enough. But then all the accumulated likes arrive at once — and against the background of lowered expectations, the sudden success produces more dopamine than usual.
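
The delayed, bundled delivery of likes described above can be sketched as a simple batching rule. This is invented pseudologic for illustration, not Instagram's actual code:

```python
import time

class LikeBatcher:
    """Hold back like notifications and deliver them in one burst.
    The thresholds are invented for illustration."""

    def __init__(self, min_batch: int = 5, max_delay_s: float = 3600.0):
        self.pending: list[str] = []
        self.first_at: float | None = None
        self.min_batch = min_batch
        self.max_delay_s = max_delay_s

    def add(self, liker: str) -> list[str]:
        """Record a like; return a burst of notifications when it's time."""
        now = time.time()
        if self.first_at is None:
            self.first_at = now
        self.pending.append(liker)
        waited_long_enough = now - self.first_at >= self.max_delay_s
        if len(self.pending) >= self.min_batch or waited_long_enough:
            burst, self.pending, self.first_at = self.pending, [], None
            return burst  # one big dopamine hit instead of five small ones
        return []         # the user sees nothing yet

batcher = LikeBatcher()
for liker in ["ann", "bob", "cid", "dee", "eve"]:
    burst = batcher.add(liker)
print(burst)  # ['ann', 'bob', 'cid', 'dee', 'eve'] delivered all at once
```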

Mark Bauerlein This is why the people in Silicon Valley — and you may know this — don't let their kids go online very much. They send their kids to private schools like Waldorf, which have no technology. Steve Jobs famously didn't let his kids use the very tools he was creating. They know to keep their kids away from this stuff. And they are its inventors! However, tricks alone are not enough for engagement. Content matters too.

As we have already established, social networks do not receive money from us directly and, by and large, aren't interested in supplying us with useful educational publications. But most users get hooked on something else entirely, which makes engaging them even easier. It is estimated that on Twitter, fake news spreads six times faster than real news. On YouTube, conspiracy theories and radical polemics are traditionally in high demand.

That's because truth and objectivity are boring. Robert Epstein People trust output from computers. People trust output from algorithms even though they have no idea what algorithms are. They think that kind of output is inherently objective.

They believe it must be true because it's coming from a computer. What they don't see is the human hand. Extensive research shows that programmers' biases get written into the code they produce. Marc Faddoul There's a natural tendency for humans to engage more with sensationalist, extreme, and novel content, and fake news and conspiracy theories tend to tick all of these boxes. So when you see a conspiratorial headline or fake news, it surprises you: "What is this information that I did not expect?" Someone is going to click on it, and as you click on it you teach the algorithm: "This is generating engagement." The algorithm is oblivious to whether it's conspiratorial, fake, or truthful — it doesn't care.

The algorithm only cares about "did I generate the click or not?" On this basis, recommendation algorithms steer users down the rabbit hole of questionable content to increase their time on the platform.
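
Faddoul's point, that the ranking system optimizes clicks and is blind to truth, is at bottom a multi-armed bandit problem. A toy epsilon-greedy sketch with invented content categories and click rates shows how clickbait wins on engagement alone:

```python
import random

random.seed(42)

# hypothetical content types and their true click probabilities,
# unknown to the algorithm; note that truthfulness is not a feature
CLICK_RATE = {"measured report": 0.05, "conspiracy clickbait": 0.12}

shows = {k: 0 for k in CLICK_RATE}
clicks = {k: 0 for k in CLICK_RATE}

def choose(eps: float = 0.1) -> str:
    """Epsilon-greedy: usually exploit the best observed click rate,
    occasionally explore at random."""
    if random.random() < eps:
        return random.choice(list(CLICK_RATE))
    return max(CLICK_RATE, key=lambda k: clicks[k] / shows[k] if shows[k] else 0.0)

for _ in range(10_000):
    item = choose()
    shows[item] += 1
    if random.random() < CLICK_RATE[item]:
        clicks[item] += 1

print(shows)  # the clickbait ends up recommended far more often
```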

That brings us to another problem. When people curate their feeds, they tend to surround themselves, often unwittingly, only with like-minded users. As a result, they isolate themselves from any counter-arguments, especially around political controversy. This is the so-called echo chamber effect, under which critical thinking atrophies and opinions polarize. It is not surprising, then, that everyone considers their own point of view the ultimately correct one. Mark Bauerlein When you get a value, a norm, a belief repeated over and over again without challenge, people forget that it's a norm. "No! It's just the truth! It's just the way things are, it's reality!" Karen Kazaryan, senior analyst at RAEC Humans have distortions of perception, or cognitive biases. And we tend to feel better when something latently confirms our point of view.

When we hold some deep conviction and suddenly read something that supports it, we say to ourselves: "Cool, I thought so! I just knew it!" Yes, the algorithm is designed to give us more and more and more of these confirmations. Ultimately, we create this comforting information bubble ourselves: we don't want to read what we don't like. This is what makes people return again and again to a cozy online world where everyone constantly agrees with them. And as long as this kind of addiction exists, social networks have absolutely no need to change the order of things. Instead, they began to wonder whether this addiction could help them manipulate people.

In January 2012, 689,003 users unknowingly took part in a large-scale psychological experiment. To half of these people, the Facebook algorithm began showing more positive posts; for the rest, on the contrary, it selected upsetting and irritating ones. It soon became clear that users' emotional state shifted to match their feeds. In theory, such manipulation could push psychologically vulnerable adolescents into deep depression, after which they could be targeted with ads for brand-name products promising to restore their self-esteem. Besides, people are more likely to make reckless, impulsive purchases when they feel depressed.

After analyzing the results of this experiment, scientists detected in the online environment so-called emotional contagion — a phenomenon well known to psychologists. Daria Radchenko The phenomenon of emotional contagion definitely exists in human communities. People gravitate toward the opinions, emotional states, and behavioral practices of so-called reference groups. We want to be one of these people; we want to behave the way they do. We want to smile when they smile or, on the contrary, be serious. Mark Weinstein On Facebook you get emoticons. They are so carefully engineered that Facebook understands exactly what you're thinking or feeling just from the emoticon you use. So they can then curate content specifically to the emotion you're displaying.

Or they want to manipulate and change your emotion. Users can also be manipulated for political purposes. In fact, Facebook did just that two years before its 2012 study. It was established that users reacted more actively not to a regular banner with an appeal to vote, but to a modified version showing which of their Facebook friends had already voted. Those tailored banners were shown only once in the news feed — and still they drew 340,000 more voters to the polls than was initially predicted. Sergey Plugotarenko Not only social media, but all media, definitely dream of the power to control people's minds. It would give them the ability to properly profile society and shape its behavior.

Obviously, people with access to large audiences should think about the impact they have on society. For example: what if I make a little adjustment to my algorithms? How will it affect the behavior of an entire nation? It is really intriguing, right? Well, the temptation is endless, and who can resist it? The biggest political scandal in Facebook's history involved the British consulting firm Cambridge Analytica. The company did not breach the social network's code; it exploited a loophole. Using an inconspicuous questionnaire, it collected personal data on 87 million people. This worked because respondents automatically approved a dialog box requesting access to all of their friends' profiles. Cambridge Analytica worked with the campaign headquarters of the Republican US presidential candidates Ted Cruz and Donald Trump.

Having thoroughly studied the pool of "harvested" users, the company's targeting specialists focused on two categories of people. First, those who had not yet settled on a candidate. Daria Radchenko The political strategists weren't going to work with people who held the opposite point of view, those who flatly rejected Trump.

Their main goal was to target those in the middle, opinion-wise. When you try to convince people who hold a radical point of view that they are wrong, it is likely to backfire: those people will only be reaffirmed in their beliefs, and polarization will increase. Robert Epstein Even as of 2015, upwards of 25% of the national elections in the world were being determined by the Google Search engine.

And that's because a lot of elections are very close, and because a bias in Google Search results shifts the votes of undecided voters — an enormous number of people, easily 20% or more of the undecided, up to 80% in some demographic groups. With no one knowing that they're being manipulated. And the second category perfectly suited for targeted manipulation consisted of conspiracy theorists: those who sincerely believe that global warming was invented by reptilian aliens, that Bill Gates implants microchips into everyone, that the Earth is flat, and that the Moon landing was filmed in Hollywood.

People who believe in such stuff can be convinced of anything. Daria Radchenko Conspiracy theorists are not even very good at recognizing spoof articles on their favorite topics. You can bait them with a completely laughable parody text about, say, aliens capturing the U.S. Capitol, and most of them will say: "Yes, I always knew that!" Ultimately, their convictions will only be strengthened. Marc Faddoul They could tailor political advertising to the psychological profile of the people they were profiling. This particular individual is naturally anxious, and therefore I am going to put out a fear-mongering ad that will provoke exactly this sentiment in them.

These gullible people became the ideal audience for fabricated compromising material against Hillary Clinton, who ultimately lost the election. This sparked talk of a serious crisis of democracy, apparently corrupted now that an algorithm could get people to cast their votes guided by distorted or outright false information. Marc Faddoul During the US campaign, Twitter banned political advertising on its platform to reduce the risk of political manipulation, which was quite a courageous move because a lot of ad revenue is tied to it. They had already put up plenty of flags and warnings before, whereas Facebook and YouTube just kept promoting Donald Trump's content, profiting from the high engagement it generated.

But here's the question: hasn't the algorithm's villainous role been exaggerated? After all, it's not Big Brother or the Matrix yet. It's just a soulless machine that analyzes giant archives of human behavior on the web, traces connections and patterns there, and simply reproduces the ones that lead to engagement. In other words, the negative consequences stem not from some cunning manipulation but from the controlled indulgence of users' own impulses. Marc Faddoul Social media is still a reflection of society. I think there is a bias in the reflection; it amplifies certain parts of the debate more than others, but it still reflects the public debate.

People argue on the Internet not because they are possessed by digital demons: they are predisposed to dispute, constantly receive fresh pretexts for it, and are only too happy to get personal. There's a reason algorithms are compared to black boxes. Pass an audience through such a box and it comes out engaged, but you can never predict exactly how the box achieves this. In fact, nobody knows, not even the social networks themselves.

Inside the box is a Brownian motion of human passions that at first glance defies any systematization. But the computer knows better. Marc Faddoul An engineer on any of these platforms is not going to be able to point you to the reason why a specific piece of content was suggested in a specific context. So many parameters are considered that, overall, we are unable to explain one specific decision taken by the algorithm. At a higher level, engineers can tell you why this kind of content tends to be generated or promoted more, or why this kind of input has an impact.

Facebook ultimately got off with only a fine for the personal data leak. It temporarily banned political ads and began hiring entire departments of content moderators to avoid such mishaps in the future. However, the initial screening is still carried out automatically, and this often misfires. In 2018, for instance, Facebook took a dislike to a picture of a smiling Emmanuel Macron. The French president had posed with young residents of the island of Saint Martin, one of whom was shirtless. The social network's algorithms classed the photo as "sexual content" and deleted it — for some reason, only a couple of months after it was posted.
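
The film does not say which model misjudged the photo, but such false positives are the natural failure mode of any automatic screen. A deliberately crude sketch: flag an image when too many pixels fall into a "skin tone" color range. A rule of this kind will happily flag a beige vegetable, as in the onion story that follows:

```python
import numpy as np

def looks_nsfw(rgb: np.ndarray, threshold: float = 0.4) -> bool:
    """Crude illustrative heuristic: the fraction of 'skin-toned' pixels.
    Real moderation models are deep networks, but they share the failure
    mode: confident scores on innocent look-alikes."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)
    return skin.mean() > threshold

# a flat beige test image gets flagged: the same kind of false positive
# as the onion ad described next
onion_colored = np.full((64, 64, 3), (205, 160, 120), dtype=np.uint8)
print(looks_nsfw(onion_colored))  # True
```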

In the fall of 2020, Facebook labeled a photo of onions in an ad by a Canadian farming company "overtly sexual". The company's manager published the verdict online, suggesting that someone's imagination had run wild. YouTube is not immune to similar mishaps: its algorithm once blocked the channel of Croatian chess player Antonio Radic, also known as Agadmator, for 24 hours. The channel's one million subscribers didn't change anything.

The block happened because in Agadmator's videos, "white" constantly "threatened black" and vice versa. From the computer's point of view, this was undeniable incitement to racial hatred. Karen Kazaryan If every deletion of a more or less famous account made the news, the news feed would consist exclusively of such cases. YouTube's content moderation policy has become the talk of the town. It is really obnoxious. It is really opaque. It can be very difficult for users to get to the truth, especially if they are not very famous or popular. There are numerous scammers who file complaints about content, claim copyright to it, and so on. There are too many cases to list...

Sergey Grebennikov Today, social networks have such a tremendous hold on the audience that no one can compete with them. If YouTube or Instagram for some reason doesn't like one of its bloggers, it can simply drop him from the prioritized content — and that's it. Yes, seriously! No matter what the influencer creates, no one will see it.

There have also been incidents verging on political censorship. In March 2021, Facebook blocked articles by several Russian media outlets, including Lenta.ru, about the detention of supporters of Ukrainian radicals in Voronezh, deeming the information false. Facebook, by the way, continues to cooperate with StopFake, even after The New York Times reported on one of the organization's leaders' ties to the ultra-right and nationalists. And on the eve of Victory Day in 2020, Facebook suddenly began mass-deleting posts with the famous picture of Soviet soldiers raising a red banner over the Reichstag.

Obviously, even human involvement doesn't guarantee perfect moderation. And no matter how many moderators you hire, they still can't promptly check everything uploaded to the network. However, social media themselves don't much care. In America there is a special law, the so-called Section 230, under which online platforms are not liable for extreme content uploaded to them.

That's why, in 2017, YouTube was in no hurry to delete the exposed conglomerates of "children's" video channels in which minors were brought to tears and abused in various ways. It's the same story with clips showing gory incidents, suicides, and the like. On the one hand, such content undoubtedly damages the platform's credibility, and to salvage it, those materials should be erased.

But on the other hand, they attract the audience, and with it, advertising revenue — which means the clean-up can be postponed... In 2015, TV reporter Alison Parker was gunned down by a former colleague during a live broadcast. The video of the incident was, of course, blocked. Yet it was repeatedly re-uploaded by conspiracy theorists who claimed the murder was staged and that her mourning father, Andy, was a hired actor. And although the so-called digital fingerprints of blocked videos are constantly being improved to prevent re-uploads, moderators are still deluged with an endless stream of complaints about new videos of animal cruelty, suicide attempts, and violence against women and children. Unsurprisingly, more and more moderation center employees suffer from depression and panic attacks and end up diagnosed with post-traumatic stress disorder (PTSD).
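
The "digital fingerprints" mentioned here are, in essence, stored signatures of banned media against which new uploads are checked. A minimal sketch with an exact hash; real systems use perceptual fingerprints precisely because, as the last line shows, a naive hash is defeated by the slightest re-encoding:

```python
import hashlib

def fingerprint(frames: list[bytes]) -> str:
    """Toy fingerprint: an exact hash of the frame data. Real systems use
    perceptual hashes that survive re-encoding, cropping, and mirroring."""
    return hashlib.sha256(b"".join(frames)).hexdigest()

blocklist: set[str] = set()

def on_takedown(frames: list[bytes]) -> None:
    blocklist.add(fingerprint(frames))

def upload_allowed(frames: list[bytes]) -> bool:
    return fingerprint(frames) not in blocklist

banned = [b"frame-1", b"frame-2"]
on_takedown(banned)
print(upload_allowed(banned))                     # False: exact re-upload is blocked
print(upload_allowed([b"frame-1x", b"frame-2"]))  # True: one changed byte slips past
```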

Areg Mkrtchyan We've discovered very interesting phenomena while analyzing information about terrorist attacks. Initially it was believed that PTSD occurs only in people directly involved in traumatic events — the affected participants. But it turned out that practically the same symptoms, of the same intensity, are also observed in people who regularly watched coverage of terrorist attacks in the media. People underestimate the influence of information, and so when they eventually feel objective symptoms of psychological distress, they cannot identify its origin.

The moderators' working conditions are horrible. Facebook has 15,000 moderators, and each of them, on average, reviews 1,500 extreme publications every week. During the day they are allowed to leave their workstations for only an hour, and this break includes lunch and trips to the bathroom. Management forbids Muslim employees to pray at work, considering it a waste of time. Depressed moderators on paltry salaries cope with the stress through drugs and promiscuous sex when no one is watching.

In March 2018, 42-year-old former U.S. Coast Guard officer Keith Utley suffered a heart attack during his moderation shift. He could not be saved. According to his colleagues, Utley, like the rest of them, constantly worried he would be fired over the low accuracy of his moderation decisions.

That is hardly surprising, considering that each such decision is supposed to take only 30 seconds: at 1,500 items a week, that adds up to 12.5 hours of nonstop split-second verdicts. These people are indirect victims of the era of algorithms. But they are far from the only ones. Science fiction has always drawn a clear line between utopia and dystopia.

However, reality turns out to be much more complicated and confusing. You need to look closely to notice that the truly beneficial fruits of progress sometimes hang from long-rotted branches. And only those who run this garden can live carefree. As for the others, they are, to one degree or another, destined to be used as fertilizer... In the next episode: Algorithms destroy human lives... and children's psyches.

Why do social media feeds breed jealousy and the fear of missing out? And why does the endless search for romantic partners condemn people to loneliness? Meet the algorithms — they are your new masters.

