When the Before Times ended this spring, halted by the realization that the coronavirus was here to stay, the physical world gave way to a changed one. The changes came quickly: the chatter in cafés grew quieter, hand sanitizer vanished from the shelves, and we adopted medical-grade hand-washing techniques, as if that might be enough to keep our bodies virus-free. And then everything shut down.

When quarantine hit, our physical lives shrank. But our digital lives were ready for this moment, curated over the past decade. In that time, social media companies had become some of the largest companies in the world and dominant forces in our daily lives.

We logged on in 2020 because we actually needed social media this year. We were glued to our phones because what else did we have left?

We had close circles of family and friends, only a few of whom we could see in person. But mostly, we just had our digital lives. eMarketer estimates that in 2020 the average U.S. adult spent 23 extra minutes per day on their smartphone and 11 more minutes per day on social media alone. In the third quarter of 2020, Pinterest’s daily active user base grew 37% year over year, Twitter’s grew 29%, Snapchat’s 18% and Facebook’s 12%.

As the economy shifted online, Silicon Valley companies saw their stock prices rise. But they also faced a strong backlash from Washington, which flooded the tech sector with inquiries and complaints. Even Madison Avenue demanded that social platforms act more responsibly.

Content moderation intensified

The Covid-19 pandemic brought a wave of health-related and political misinformation. In response, social platforms introduced stricter policies to protect their users, such as labeling and removing inaccurate posts and pointing users to reliable resources from trusted sources about the pandemic and the elections. As it turns out, social media companies do take access to accurate information seriously when lives are on the line.

When Twitter applied its misinformation and incitement-to-violence policies to President Donald Trump’s account in late May and began labeling his tweets as misleading or inaccurate, the rest of the industry responded. Snapchat stopped promoting Trump’s account, Twitch suspended the president, Reddit banned hate speech, and Facebook was temporarily boycotted by more than 1,000 advertisers over its failure to keep hate speech and misinformation off its platform.

Misinformation remains a widespread problem on almost every social media platform, but after years of insisting they are not “media companies,” tech companies – including Facebook – have finally taken a more hands-on role in policing their platforms. Although policies have changed, the platforms remain reluctant to share their data with researchers, so it is difficult to quantify how much safer or better informed users actually are after these changes.

Washington cracked down

While Democrats and Republicans have different reasons to be skeptical of Silicon Valley, both parties have turned their anger on Big Tech. In the social media arena, Democrats largely believe the platforms are not doing enough to contain the spread of hate, extremism and misinformation, while Republicans have baselessly asserted that the platforms show a liberal bias in their content moderation. Either way, the techlash in Washington is in full swing.

Facebook’s Mark Zuckerberg, Google’s Sundar Pichai and Twitter’s Jack Dorsey have all testified before Congress in recent months about content moderation and, in Zuckerberg’s and Pichai’s cases, allegations of anti-competitive behavior. The government has since filed several antitrust lawsuits against Facebook and Google: Facebook’s centers on its dominance in social media, while Google’s focuses on its dominance in search and digital advertising.
