Facebook and User Harm
Allowing Illegal Activity for Ad Revenue
- Facebook did not take action against human trafficking groups on its platform despite being made aware of their presence. A Polish trafficking expert said he had identified human trafficking on Facebook, but 18 months later the company had still not implemented systems to find and remove trafficking posts
- In fact, Facebook allowed the traffickers to spend $152,000 on Facebook ads for massage parlors, and it deactivated a system that detected human trafficking networks on the platform
- Facebook’s policies prohibited employment ads from offering free travel or covering visa expenses, yet such ads were still published on the platform
- At times, Facebook held back from reforming the harmful aspects of its platform over fears that doing so could hurt its business
- Facebook chose not to fully remove accounts linked to the drug cartel Jalisco Nueva Generacion after an employee brought the accounts to the company’s attention. The employee had mapped the cartel’s activity across the platform by reviewing its private messages and public posts
- Facebook did designate the cartel as one of the “dangerous individuals and organizations” on its platform. This designation should have led to the cartel’s posts being automatically removed, but they weren’t. An internal investigation team at Facebook asked another team to make sure a ban on the cartel was enforced, but that team never followed up
- The cartel posted graphic images of murder, torture and severed limbs, yet Facebook took no action against it for five months. Nine days after Facebook became aware of the cartel’s presence on its platform, the cartel created a new Instagram account and posted videos of people being shot in the head, along with images of bags of severed hands and other torture.
- Despite these clear violations of the platform’s rules, the cartel’s Instagram account remained active for at least five months before being taken down
Failures to Keep Children Safe
- Instagram and Facebook have failed to keep children safe on their platforms, with high levels of child pornography and “grooming” of underage users. Facebook was responsible for the majority of reported instances of child pornography or trafficking in 2018, and Instagram was cited as the leading platform for child grooming in the U.K. in 2019.
- During the COVID-19 pandemic, Instagram failed to respond to complaints about predatory behavior, citing a lack of resources and arguing that they could not “prioritize all reports right now.”
- In 2020, despite warnings from children’s advocates and FBI Director Christopher Wray, Facebook encrypted all of its messaging services, including Facebook Messenger, across all of its platforms. This move made it more difficult for either the company or law enforcement to identify and combat child abuse on the platforms.
- Facebook’s earlier experiment with a social media app designed for children, “Messenger Kids,” failed to keep children safe. A technical flaw allowed children to join group chats with unauthorized strangers, violating one of the app’s core safety features.
- Facebook claimed that the app was vetted by “experts” prior to its launch but failed to disclose that many of those experts received funding from the company. Facebook also did not reach out to many prominent children’s advocates, including Common Sense Media and Campaign for a Commercial Free Childhood.
- Facebook manipulated underage users into paying for services within games, calling the practice “friendly fraud.” Children as young as five years old were tricked into making in-game purchases without realizing Facebook had stored their parents’ credit cards. This led to clawback rates of 9%, well above the level the Federal Trade Commission considered a red flag for deceptive business practices.
- Internal Facebook communications showed that employees were aware of the practice and proposed reforms, but Facebook rejected the reforms because they would hurt revenue. Nearly three years after the first internal discussion of “friendly fraud,” clawback rates remained the same, suggesting Facebook had not addressed the problem.
- Instagram consistently failed to keep younger users safe: online bullying was pervasive on the platform, exploiting the ease of creating anonymous accounts, the virality of cyberbullying posts and the complete lack of oversight of comments and direct messages.
Facebook and Mental Health
- RESEARCH CONSISTENTLY FOUND THAT FACEBOOK AND SOCIAL MEDIA USERS REPORTED LOWER WELL-BEING
- A large body of literature linked Facebook use with detrimental outcomes such as decreased mental well-being. A meta-analysis of scientific papers on social media’s influence on mental health found that social media use was linked to increased psychological distress, thoughts of self-harm and suicide and poor sleep. One in eight Facebook users reported that their use of the platform harmed their sleep, work, relationships and parenting.
- Passive use of Facebook – browsing without engaging on the platform – led to worse well-being outcomes. People who spent a lot of time passively using Facebook reported feeling worse afterwards. Being selectively confronted with others’ successes on Facebook could trigger repetitive negative thinking about one’s own imperfections.
- HEAVY USE AND PASSIVE USE OF FACEBOOK LED TO THE WORST CONSEQUENCES OF SOCIAL MEDIA USE
- One study found that the amount someone used Facebook was the number-one variable predicting depression among participants, and that those with lower well-being used Facebook more. Problematic use of Facebook was associated with lower well-being, and, making matters worse, those with low subjective happiness were more susceptible to overusing Facebook. Facebook users with some level of mental vulnerability were thus more at risk of problematic outcomes from their use of the platform.
- Using Facebook for reasons other than social engagement decreased well-being. People who merely read Facebook for 10 minutes a day were in a worse mood than those who posted or talked to friends. People who reported higher levels of Facebook use also reported stronger emotional needs to be connected.
- Overuse of Facebook skewed users’ perceptions of themselves, the world around them and their social bonds. Those who overused Facebook felt that other people were happier than they were, experienced high levels of loneliness and withdrew socially. Facebook addiction was found to negatively affect life satisfaction. Students who used Facebook for long durations reported greater loneliness and agreed less with the idea that life was fair. Problematic use of Facebook led to an avoidance of real social relations.
- DECREASED USE OF FACEBOOK AND SOCIAL MEDIA HAD A CLEAR BENEFIT FOR PEOPLE’S WELL-BEING
- Users who deactivated their Facebook and social media accounts reported greater life satisfaction and more positive emotions than those who kept using the platforms. One study found that people’s life satisfaction increased significantly when they quit Facebook. Deactivating social media raised subjective well-being by approximately 24-50% as much as standard psychological interventions, and it also led to a statistically significant decrease in depression and loneliness. A study of inpatients at a mental health center found that patients who used Facebook during their treatment reported worse mental health and recovered more slowly than non-users.
- ADOLESCENTS WERE MOST AT RISK FOR MENTAL HEALTH ISSUES STEMMING FROM FACEBOOK AND INSTAGRAM. THEY WERE ALSO THE USERS THE PLATFORMS WANTED MOST.
- In the U.S., 22 million teens logged onto Instagram every day. Roughly half of Facebook’s users between the ages of 18 and 24 checked Facebook upon waking up.
- Adolescence was marked by the development of personal and social identity, a core experience of maturation that Instagram exploited, and much of that development had come to rely on social media. It was posited that because adolescents had a limited capacity for self-regulation and were vulnerable to peer pressure, they were especially susceptible to the adverse effects of social media.
- Instagram’s focus on presenting only one’s best moments could send teens spiraling toward eating disorders and depression. The features core to Instagram – sharing only one’s best moments and looking perfect – were a threat to adolescents’ mental health, and the addictive product was thought to push teens toward eating disorders, an unhealthy sense of their bodies and depression. Adolescents who heavily used social media were more likely to report mental health issues.
- A GENERATION’S MENTAL HEALTH WAS ON THE LINE BECAUSE OF INSTAGRAM
- Suicidality was a tragic outcome of teenage Instagram use. Among teens who reported suicidal thoughts, 13% of British users and 6% of American users traced the desire to kill themselves to Instagram. Teens also blamed Instagram for increases in rates of anxiety and depression.
- Social comparison, which was at the core of the Instagram experience, led to a distorted self-image among teens. Forty percent of teen Instagram users who reported feeling unattractive said the feeling began on the app, and 32% of teenage girls said that when they felt bad about their bodies, Instagram made them feel worse.
- Frequent use of image-based social media platforms like Instagram was linked to greater self-objectification and worse body image.
- FACEBOOK WORKED AGAINST IMPROVING ITS PRODUCTS’ IMPACT ON MENTAL HEALTH, DESPITE BEING AWARE OF THE DANGER
- Instagram knew that strong negative emotions, such as negative social comparison, kept users’ attention longer than other emotions. But Facebook shut down a team focused on user well-being in 2019.
- Facebook saw an aging user base as an existential threat to the long-term health of its business, so it focused on finding every way possible to attract younger users, even those as young as six years old. The company considered it a “particularly concerning trend” that young users were spending less time on the site and called the lack of use a “significant risk.” Facebook had teams of employees laying plans to attract preteens to the platform. The head of Instagram told employees that the platform’s goal was to be “a place where young people define[d] themselves and the future.” An internal 2018 Facebook document said the platform could not “ignore” the fact that kids as young as six years old were using the internet.