
[AMA] Announcing Open Phil’s University Group Organizer and Century Fellowships [x-post]

Published on August 6, 2022 9:48 PM GMT

[Crossposted from the EA Forum. Of particular relevance to LessWrong: I'm interested in funding more rationality groups at universities.]

Open Philanthropy recently launched two new fellowships intended to provide funding for university group organizers: the University Group Organizer Fellowship (apply here), and the Century Fellowship (apply here). This post is intended to give some more color on the two programs, and our reasoning behind them. In the post, we cover:

  1. What these programs are and are not
  2. Current thoughts on who we should be funding
  3. What we’d like to achieve with these programs
  4. Some of our concerns about this work

We’d also like this post to function as an AMA for our university student-focused work. We’ll be answering questions August 4 - August 6, and will try to get through most of the highest-voted questions, though we may not get to everything. We welcome questions at any level, including questions about our future plans, criticisms of our funding strategy, logistical questions about the programs, etc.

If you’re a university group organizer or potential university group organizer and want to talk to us, I (Asya) will also be hosting virtual office hours August 5th and 6th (edit: added some time August 7th) – sign up to talk to me for 10 minutes here.

[Note about this post: The first two sections of this post were written by Asya Bergal, and the latter two were written by Claire Zabel; confusingly, both use the first-person pronouns “I” and “my”. We indicate in the post when the author switches.]

What these programs are and are not

[The following sections of this post were written by Asya Bergal.]

The University Group Organizer Fellowship

The University Group Organizer Fellowship provides funding (covering organizer stipends and group expenses) for part-time and full-time organizers of student groups focused on effective altruism, longtermism, rationality, or other relevant topics at any university.

Unlike CEA’s previous program in this space:

  • We’re interested in funding a much wider set of student groups that we think contribute to making sure the long-term future is as good as possible, e.g. groups about longtermism, existential risk reduction, rationality, AI safety, forecasting, etc.
    • For reasons discussed below, we think it’s possible that groups not focused on effective altruism (particularly those focused directly on risks from transformative technology) could also be highly effective at getting promising people to consider careers working on the long-term future, by appealing more to people with a slightly different set of interests and motivations. As far as we know, there are relatively few efforts of this kind in the student group space right now, so there’s particular information value in experimenting. (And we’re excited about some new efforts in this space, e.g. the Harvard AI Safety Team.)
    • We also continue to be interested in funding effective altruism groups, including those with a primarily neartermist focus.
  • We’re interested in funding any college or university, including universities outside of the US or UK.[1]
  • We’re not providing mentorship, retreats, or other kinds of hands-on support for groups as part of these programs (at least for now).

More on the last point: while we ourselves are not providing this kind of support, we have referred groups we fund to others who do, and expect to continue doing so, including CEA’s University Group Accelerator Program, the Global Challenges Project, EA Cambridge, and other one-off programs and retreats.

Our current aim is to provide a smooth funding experience for strong group organizers who want to do this work, while mitigating potential negative impact from organizers who we don’t think are a good fit. We don’t plan or intend to “cover” the groups space, and actively encourage others in the space to consider projects supporting university groups, including projects that involve running groups, or providing more hands-on support and funding themselves. That being said, while we think there is room for many actors, we also think the space is sensitive, and it’s easy to do harm with this kind of work by giving bad advice to organizers, using organizer time poorly, or supporting organizers who we think will have a negative impact. We expect to have a high bar for supporting these projects, and to encourage the relevant teams to start by proving themselves on small scales.

There are a number of projects that we think could be valuable but that have no clear “owner” in the student groups space right now, including:

  • Creating high-quality resources for non-EA groups
  • Running residency programs that provide in-person guidance to new group organizers at the beginning of the year
  • Running events for group organizers
  • Running events where promising new group members recommended by group organizers can meet professionals
  • Finding individuals who could become strong organizers of new groups

Individuals looking to start projects supporting university groups can apply for funding from our team at Open Phil through our general application form.

The Century Fellowship

The Century Fellowship is a selective 2-year program that gives resources and support (including $100K+/year in funding) to particularly promising people early in their careers who want to work to improve the long-term future. We hope to use it (in part) to support exceptionally strong full-time group organizers and, more broadly, to make community building a more compelling career path (see below).

Current thoughts on who we should be funding

The following are the criteria I think are most important in a group organizer (and that I weigh most heavily when making funding decisions):

  1. Being truth-seeking and open-minded
  2. Having a strong understanding of whatever topic their group is about, and/or being self-aware about gaps in understanding
  3. Being socially skilled enough that people won’t find them highly off-putting (note that this is a much lower bar than being actively friendly, extroverted, etc.)

Secondary “nice-to-have” desiderata include:

  • Taking ideas seriously
  • Being conscientious
  • Being ambitious / entrepreneurial
  • Being friendly / outgoing
  • Having good strategic judgment in what activities their group should be doing
  • Actively coming off as sharp in conversation, such that others find them fun to have object-level discussions with

Notably (and I think I may feel more strongly about this than others in the space), I’m generally less excited about organizers who are ambitious or entrepreneurial but less truth-seeking, or who have a weak understanding of the content that their group covers. In fact, I think organizers like this can be riskier than less entrepreneurial organizers, as they have the potential to misrepresent important ideas to larger numbers of people, putting off promising individuals or negatively affecting community culture in disproportionate ways.

Overall, I think student organizers have an outsized effect on the community and its culture, both through the particular individuals they engage and, more broadly, by acting as representatives on college campuses, and we should accordingly have high standards for them. I similarly encourage student leaders to have high standards for the core members of their groups. I think groups will generally do better by my lights if they aim to have a small core membership with lots of high-quality object-level discussions, rather than focusing most of their attention on actively trying to attract more people. (I think it’s still worth spending substantial time doing outreach, especially at the beginning of the year.)

All of the above being said, my current policy is to be somewhat laxer on the “truth-seeking” and “strong understanding” criteria for a given organizer when:

  • There are one or more other core organizers who do well on these criteria, and those organizers are excited about having this organizer on board;
  • The organizer is working in a primarily operational, non-student-facing capacity for their group; or
  • The group is located in an area that is geographically and culturally remote (e.g. is in a country or region with little or no activity aimed at improving the far future, is physically distant from any EA hub, has few English speakers). I think it might make sense to be laxer here because:
    • Relevant ideas are less likely to have spread in those areas, so the upside of just making people aware of them is higher;
    • It’s less likely that a better organizer would come along in the next few years; and
    • Since these organizers are in areas that are culturally distant, they have a weaker effect on community culture.

So far, most of the organizers who have applied to us have met the conditions above — we’ve offered funding to 47 out of the 78 organizers we’ve evaluated directly as part of the University Group Organizer Fellowship program.

What we’d like to achieve with these programs

[The following sections of this post were written by Claire Zabel.]

Experimenting with other kinds of groups

In addition to growing and expanding EA groups, we’re excited to see student groups experiment with other longtermist-relevant formats, such as AI safety reading groups or groups on existential risk or applied rationality. I think there are a couple reasons this is promising: 

  • I think most people working on longtermist projects got interested in doing so via EA-ish philosophical reasoning, and have poured a lot of time, effort, and money into building out that pathway (from being interested in this kind of reasoning to working on a longtermist priority project); that chain of reasoning probably seems particularly salient and compelling to them/us, and they/we are well-equipped to reiterate it to others.
    • However, I think if most of us were presented afresh with a convincing empirical narrative about the risks from potentially imminent transformative AI (or other longtermist concerns), without having “gone through” EA first, I wouldn’t expect us to independently converge on the view that this EA-first pathway is the best recruiting strategy; in fact, I think it’d seem pretty niche and overly conjunctive, and it’s more likely that most people would just focus on the specific cause and try to raise awareness about it among relevant groups.
  • People might be very interested in reducing existential risks for reasons other than the sort of consequentialist-leaning philosophical reasoning that has often underlain interest in EA-longtermism (e.g. see here and here for arguments for this approach), and we want to support and encourage folks with those motivations (and styles, beliefs about comparative advantage, etc.) to join longtermist priority projects.
    • Or, people might sign on to the philosophical reasoning, but be more interested in exploring particular cause areas or areas of skill development, and feel they wouldn’t get much value from more general EA groups.
    • E.g. I think a large fraction of people (though certainly not all) already feel that human extinction, or an entity that doesn’t care about the values and preferences of humans and other sentient life on Earth gaining total irreversible power, would obviously be extremely bad, and that if either were plausibly going to happen in the next hundred years or so, trying to prevent it would be a good and noble thing to do, no fancy philosophy needed. (This is supported by the fact that a huge number of fantasy and sci-fi books and movies centrally involve attempts to prevent these outcomes.)
      • Analogously, though EA has been heavily involved in non-longtermist cause areas like farm animal welfare and global health in recent years, those cause areas have historically drawn in many people who are not hardcore EAs and who add a ton of value in the space.
  • Our sense is that groups are generally strongest when the organizers are particularly knowledgeable about and interested in the subject matter. Organizer interests vary, so we hope that support for more varied kinds of groups will allow a larger number of very strong groups led by passionate and highly-engaged organizers to emerge.
  • I think effective altruism has strong associations with cost-effectiveness analyses and helping the global poor (or earning to give). But these associations don’t seem like obviously the best ones for longtermist priority projects.
    • We’re funding this from Open Philanthropy’s longtermist budget (funding to support projects motivated by the longtermist view), and we think that, given the abundance of funding in the space right now and potentially short timelines on which it must be spent to have the desired impact, and the limited number of people motivated to work on longtermist priority projects, cost-effectiveness isn’t always the most useful framing, though we nonetheless do attempt to do cost-effectiveness analyses of common or large uses of funds.
  • Other kinds of groups with different associations might be less confusing and avoid some optics concerns. For example, I think it intuitively makes a lot more sense to people that an AI safety-focused group (as opposed to an EA group) would pay organizers well, since AI safety is associated with the high-paying tech sector and doesn’t make an implicit claim about being the best use of money.

Making community-building a highly compelling career path

I want to make community-building a highly compelling career path, commensurate with the impact I think it’s had historically. 

  • Evidence from some research we’ve done suggests that a fairly large fraction of people working on the projects we prioritize most highly attribute a lot of the credit for being on their current path to a group they were in while in college/university. By the metric I think is best suited to this kind of question, our survey respondents allocate 6-7% of the total credit they give to all EA/EA-adjacent sources (meta orgs, pieces of content, etc.) to EA groups, and about ⅔ to ¾ of that credit goes to student groups specifically.
  • The work that group organizers do is often very demanding, both intellectually and emotionally. They’re often pushed to, at a young age, master a variety of challenging topics well enough to skillfully lead conversations about them with very bright young people, mentor peers from a variety of backgrounds and with a variety of different interests and personalities, manage other peers, organize large and complex events, and deal with tricky interpersonal issues. They are asked to do all of this without formal full-time managers or mentors, often while balancing other important responsibilities and life priorities. Strong organizers are very talented people, often with a variety of other opportunities to do projects that might offer more prestige and job security. We have high expectations for these organizers, and want to compensate them fairly for that, as well as support them and their groups to try different kinds of projects and events.
  • We think that, in addition to compensation, providing longer-term support will draw more people who would be good fits to this path, and encourage people to stay in the space when it’s a good fit for them.

With the Century Fellowship especially, we hope to support particularly promising people (organizers and others) to flexibly explore and build towards different ambitious longtermist projects, with the security of longer-term support for themselves and collaborators.

Some of our concerns about this work

Downsides to funding

I worry (and I think others have expressed concerns along these lines too) about potential downsides of funding in the student group space. There are at least a few different potential issues:

  • Attracting unaligned or less-aligned people
    • Funding packages, especially more generous ones (and we expect the funding we offer organizers to be somewhat higher than it has been historically, especially for the most promising organizers, though not vastly so), increase the risk of attracting people who are less aligned with our goals. We’re happy to support anyone doing work that, upon close scrutiny, seems helpful for longtermist projects; we’re funding work, not feelings, so it’s theoretically completely okay if their motivations are mercenary in nature. But practically, having people be emotionally bought into the same goals correlates with long-term good outcomes.
      • Group-organizing projects often have relatively difficult-to-measure outputs, and we have limited capacity for vetting to ensure that high-EV work is being attempted. (In other roles, where aligned people are well-positioned to measure outputs, I think it’s less important to think about how aligned our goals are with our grantees’.) Also, people who are more closely aligned are more likely to stay focused on their work (they are unlikely to leave abruptly if they get a more lucrative offer elsewhere, or start subtly using the group for another purpose), and they contribute to a community with more trust and camaraderie from shared goals.
    • Despite these risks, we’ve also heard a lot of anecdotes of promising people being put off or demotivated by being offered relatively low pay; in some cases, they pursued different paths that seem less valuable to us.
    • We hope that our increase in capacity will allow us to maintain a fairly low “false positive” rate (accidentally funding organizers who are pretending to share our goals). 
       
  • The funding not being worth it/better off being used elsewhere 
    • Our research suggests that many of the people we think are doing promising longtermist work credit a relevant student group at university as one of the top influences that led them to their current path, and that there’s substantial variation in how successful groups have been (even among the top schools).
    • We currently value the work these people are doing very highly. It’s challenging to share our internal cost-effectiveness estimates (which include some sensitive information both about our assessments of the value of particular kinds of work and how we expect funding to be distributed between cause areas, in addition to being quite rough and potentially difficult to understand).
      • But, per the above, we do see a lot of variation between schools, suggesting that moderate differences in the quality of the organizers can greatly affect how many promising people say the group helped them. I’m pretty confident that if, e.g., changing the hourly rate for an organizer’s time from $20/hr to $35/hr leads to a 5 percentage point higher chance of a very strong group rather than a median-quality group (which might have less than half as many strong members), that will be cost-effective from our perspective, barring the other risks mentioned in this section (see the rough sketch after this list). We aren’t sure we will achieve that level of impact, but we think it’s an experiment worth trying for a few years.
         
  • Negative optics/PR concerns 
    • Even if this is a cost-effective use of funding, it might be “bad optics”, i.e. it might look to others like a frivolous use of money.
    • This seems worth thinking through carefully, but in general, we prefer to try to share our reasoning for unconventional choices we make (though this is often challenging, given capacity constraints and the difficulty of communicating work we’ve done internally), rather than avoiding decisions that otherwise seem good out of fear of social censure. We think that so far this strategy has been fairly effective, especially among the people whose opinions seem especially important for our goals.
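
For concreteness, here’s a rough back-of-envelope sketch (in Python) of the kind of comparison gestured at in the cost-effectiveness bullet above. Only the hourly rates and the 5-percentage-point shift come from that example; the assumed workload and members-per-group numbers are made-up placeholders, not figures from our internal estimates.

```python
# Rough back-of-envelope sketch of the $20/hr -> $35/hr comparison above.
# Only the hourly rates and the 5-percentage-point shift come from the post;
# the workload and members-per-group numbers are illustrative placeholders.

hours_per_year = 400                 # assumed part-time organizer workload
old_rate, new_rate = 20, 35          # $/hr, from the example above
extra_cost = (new_rate - old_rate) * hours_per_year  # extra spend per organizer-year

p_shift = 0.05                       # 5 pp higher chance of a very strong group
strong_members_if_strong = 10        # placeholder: strong members in a very strong group
strong_members_if_median = 4         # placeholder: "less than half as many"
extra_strong_members = p_shift * (strong_members_if_strong - strong_members_if_median)

print(f"Extra cost per organizer-year: ${extra_cost:,}")
print(f"Expected extra strong members per group-year: {extra_strong_members:.2f}")
# The raise looks worthwhile if one additional strong member is valued above
# the implied breakeven figure below.
print(f"Breakeven value per strong member: ${extra_cost / extra_strong_members:,.0f}")
```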

Providing insufficient support

  • On a different tack, here’s another way we could do harm: I think when an organization moves into a new space, there will almost always be some initial mistakes and confusions, including some pretty costly ones. We’ve recently increased our capacity a lot, but we’re still capacity-constrained, which I think heightens this risk, and we have less designated capacity for this right now than CEA did previously. In general and especially in the beginning, I worry about us not making decisions and providing support as quickly and as well as I want us to, and this causing some important projects to proceed more slowly and without access to helpful resources.
    • I don’t think I have a great “response” to this concern other than having confidence in my team and our ability to catch up over time, or empower others who can.
    • Also, we are very glad that other actors, like CEA and the Global Challenges Project, among others, are working in this space, and grateful for them. In the short term, we are mostly focusing on providing monetary support; we hope other groups provide other kinds of help. Generally, we think it’s actively good for there to be multiple strong organizations working in important spaces; it leads to greater robustness and diversity of perspectives, as well as some healthy competitive pressure.

We could be wrong

[Added by Asya:] We have our own views on who should and shouldn’t be funded to do this work, but those views could of course be mistaken. I don’t think it’s implausible that we come to believe we’ve made a mistake in either direction, e.g.:

  • We realize our fears about putting off good people and negatively affecting the community are overblown (e.g., maybe interacting with weaker representatives of the community doesn’t have much of a negative effect on people’s likelihood of getting involved later compared to the counterfactual; maybe people who we would be less excited to have around bounce off later in the pipeline reliably enough that it doesn’t matter as much who student groups initially attract), and from a hits-based perspective we should have been funding more organizers without worrying about these downsides.
  • We realize our bar for funding organizers has been too low, and we’ve made it slower or less likely for the most promising people to get involved, made the community less motivating and useful to be a part of for the most impactful people, or substantially damaged the trust network that historically made it easier for people to coordinate and make progress quickly.

 

Overall, we’re excited to support strong university groups, and to be able to offer more and more help to group organizers in the future!  Thanks for bearing with us, and please share your questions and thoughts.




Recent Fauci Claims Dismantled By Former CDC Director - And Fauci's Own Words


Dr. Anthony Fauci has been doing quite the tap-dance of late.

To review - as the head of the National Institute of Allergy and Infectious Diseases, he funded risky gain-of-function research at a Chinese lab aimed at making bat coronavirus transmissible to humans.

Then, when a human-infecting bat coronavirus broke out down the street from the lab he funded, Fauci performed extensive damage control over the virus's origins - before taking a direct role in setting disastrous public policy which included economy-killing lockdowns that led to trillions in inflationary stimulus (which he now denies).

Now that we're caught up - Fauci recently claimed to have had an "open mind" about the possibility of a lab leak, though he still says it's the least likely explanation for all of the above.

"It looks very much like this was a natural occurrence, but you keep an open mind," he told Fox News in a Friday interview.

Fauci repeated himself in an interview with The Hill on Monday.

Former CDC Director Robert Redfield is calling BS.

As the Epoch Times' Jack Phillips notes:

When asked about Fauci’s recent comments on Monday, Redfield told Fox News that he still suspects COVID-19 emerged “from the laboratory” and “had to be educated in the laboratory to gain the efficient human-to-human transmission capability that it has.”

“There’s very little evidence, if you really want to be critical, to support” the natural emergence theory, he said. The former Trump administration official then compared COVID-19 to prior coronaviruses such as Middle East Respiratory Syndrome (MERS) and Severe Acute Respiratory Syndrome (SARS) that emerged about 10 years ago, saying that neither virus had the same transmission capacity as COVID-19.

“So it’s really exceptional that this virus is one of the most infectious viruses for man. And I still argue that’s because it was educated how to infect human tissue,” Redfield told Fox News.

Laboratory

The same Wuhan laboratory, he added, was the subject of a 2014 report amid claims that researchers performed research on bat-borne viruses that could impact humans.

“I’m disappointed in the [National Institutes of Health] for not leading an objective evaluation from the beginning,” Redfield told the outlet. “I think it really is antithetical to the science where they took a very strong position that people like myself who are somehow conspiratorial just because we have a different scientific hypothesis.”

*  *  *

Natural immunity

In a second bit of furious tap-dancing, Fauci completely deflected when he was asked why natural immunity from previous COVID-19 infections wasn't recognized as a legitimate protection when he was involved in setting public policy that included lockdowns and vaccine passports.

The topic was so woefully ignored that experts urged the Biden administration to formally recognize natural immunity - which Fauci now says they were 'always aware' of.

And yet, watch how he spins it now:

And what did Fauci have to say about natural immunity when Pfizer and others didn't have an expensive vaccine with unproven long-term efficacy?


Why the Past 10 Years of American Life Have Been Uniquely Stupid


What would it have been like to live in Babel in the days after its destruction? In the Book of Genesis, we are told that the descendants of Noah built a great city in the land of Shinar. They built a tower “with its top in the heavens” to “make a name” for themselves. God was offended by the hubris of humanity and said:

Look, they are one people, and they have all one language; and this is only the beginning of what they will do; nothing that they propose to do will now be impossible for them. Come, let us go down, and confuse their language there, so that they will not understand one another’s speech.

The text does not say that God destroyed the tower, but in many popular renderings of the story he does, so let’s hold that dramatic image in our minds: people wandering amid the ruins, unable to communicate, condemned to mutual incomprehension.

The story of Babel is the best metaphor I have found for what happened to America in the 2010s, and for the fractured country we now inhabit. Something went terribly wrong, very suddenly. We are disoriented, unable to speak the same language or recognize the same truth. We are cut off from one another and from the past.

It’s been clear for quite a while now that red America and blue America are becoming like two different countries claiming the same territory, with two different versions of the Constitution, economics, and American history. But Babel is not a story about tribalism; it’s a story about the fragmentation of everything. It’s about the shattering of all that had seemed solid, the scattering of people who had been a community. It’s a metaphor for what is happening not only between red and blue, but within the left and within the right, as well as within universities, companies, professional associations, museums, and even families.

From the December 2001 issue: David Brooks on Red and Blue America

Babel is a metaphor for what some forms of social media have done to nearly all of the groups and institutions most important to the country’s future—and to us as a people. How did this happen? And what does it portend for American life?

The Rise of the Modern Tower

There is a direction to history and it is toward cooperation at larger scales. We see this trend in biological evolution, in the series of “major transitions” through which multicellular organisms first appeared and then developed new symbiotic relationships. We see it in cultural evolution too, as Robert Wright explained in his 1999 book, Nonzero: The Logic of Human Destiny. Wright showed that history involves a series of transitions, driven by rising population density plus new technologies (writing, roads, the printing press) that created new possibilities for mutually beneficial trade and learning. Zero-sum conflicts—such as the wars of religion that arose as the printing press spread heretical ideas across Europe—were better thought of as temporary setbacks, and sometimes even integral to progress. (Those wars of religion, he argued, made possible the transition to modern nation-states with better-informed citizens.) President Bill Clinton praised Nonzero’s optimistic portrayal of a more cooperative future thanks to continued technological advance.

The early internet of the 1990s, with its chat rooms, message boards, and email, exemplified the Nonzero thesis, as did the first wave of social-media platforms, which launched around 2003. Myspace, Friendster, and Facebook made it easy to connect with friends and strangers to talk about common interests, for free, and at a scale never before imaginable. By 2008, Facebook had emerged as the dominant platform, with more than 100 million monthly users, on its way to roughly 3 billion today. In the first decade of the new century, social media was widely believed to be a boon to democracy. What dictator could impose his will on an interconnected citizenry? What regime could build a wall to keep out the internet?

The high point of techno-democratic optimism was arguably 2011, a year that began with the Arab Spring and ended with the global Occupy movement. That is also when Google Translate became available on virtually all smartphones, so you could say that 2011 was the year that humanity rebuilt the Tower of Babel. We were closer than we had ever been to being “one people,” and we had effectively overcome the curse of division by language. For techno-democratic optimists, it seemed to be only the beginning of what humanity could do.

In February 2012, as he prepared to take Facebook public, Mark Zuckerberg reflected on those extraordinary times and set forth his plans. “Today, our society has reached another tipping point,” he wrote in a letter to investors. Facebook hoped “to rewire the way people spread and consume information.” By giving them “the power to share,” it would help them to “once again transform many of our core institutions and industries.”

In the 10 years since then, Zuckerberg did exactly what he said he would do. He did rewire the way we spread and consume information; he did transform our institutions, and he pushed us past the tipping point. It has not worked out as he expected.

Things Fall Apart

Historically, civilizations have relied on shared blood, gods, and enemies to counteract the tendency to split apart as they grow. But what is it that holds together large and diverse secular democracies such as the United States and India, or, for that matter, modern Britain and France?

Social scientists have identified at least three major forces that collectively bind together successful democracies: social capital (extensive social networks with high levels of trust), strong institutions, and shared stories. Social media has weakened all three. To see how, we must understand how social media changed over time—and especially in the several years following 2009.

In their early incarnations, platforms such as Myspace and Facebook were relatively harmless. They allowed users to create pages on which to post photos, family updates, and links to the mostly static pages of their friends and favorite bands. In this way, early social media can be seen as just another step in the long progression of technological improvements—from the Postal Service through the telephone to email and texting—that helped people achieve the eternal goal of maintaining their social ties.

But gradually, social-media users became more comfortable sharing intimate details of their lives with strangers and corporations. As I wrote in a 2019 Atlantic article with Tobias Rose-Stockwell, they became more adept at putting on performances and managing their personal brand—activities that might impress others but that do not deepen friendships in the way that a private phone conversation will.

From the December 2019 issue: The dark psychology of social networks

Once social-media platforms had trained users to spend more time performing and less time connecting, the stage was set for the major transformation, which began in 2009: the intensification of viral dynamics.

Before 2009, Facebook had given users a simple timeline––a never-ending stream of content generated by their friends and connections, with the newest posts at the top and the oldest ones at the bottom. This was often overwhelming in its volume, but it was an accurate reflection of what others were posting. That began to change in 2009, when Facebook offered users a way to publicly “like” posts with the click of a button. That same year, Twitter introduced something even more powerful: the “Retweet” button, which allowed users to publicly endorse a post while also sharing it with all of their followers. Facebook soon copied that innovation with its own “Share” button, which became available to smartphone users in 2012. “Like” and “Share” buttons quickly became standard features of most other platforms.

Shortly after its “Like” button began to produce data about what best “engaged” its users, Facebook developed algorithms to bring each user the content most likely to generate a “like” or some other interaction, eventually including the “share” as well. Later research showed that posts that trigger emotions––especially anger at out-groups––are the most likely to be shared.
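
To make the shift concrete, here is a schematic toy sketch (not any platform’s actual system) of the difference between a chronological timeline and an engagement-ranked feed: the same posts, reordered by a predicted-engagement score instead of recency.

```python
# Toy sketch of a chronological timeline vs. an engagement-ranked feed.
# Schematic only; the posts and scores are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: int               # higher = newer
    predicted_engagement: float  # model's guess at likes/shares the post will earn

posts = [
    Post("A", "photo of my lunch",             timestamp=3, predicted_engagement=0.02),
    Post("B", "angry take about an out-group", timestamp=1, predicted_engagement=0.40),
    Post("C", "family update",                 timestamp=2, predicted_engagement=0.05),
]

# Pre-2009-style timeline: newest first, regardless of expected reaction.
chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)

# Engagement-ranked feed: whatever is most likely to provoke a click rises to the top.
ranked = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

print([p.author for p in chronological])  # ['A', 'C', 'B']
print([p.author for p in ranked])         # ['B', 'C', 'A']
```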

By 2013, social media had become a new game, with dynamics unlike those in 2008. If you were skillful or lucky, you might create a post that would “go viral” and make you “internet famous” for a few days. If you blundered, you could find yourself buried in hateful comments. Your posts rode to fame or ignominy based on the clicks of thousands of strangers, and you in turn contributed thousands of clicks to the game.

This new game encouraged dishonesty and mob dynamics: Users were guided not just by their true preferences but by their past experiences of reward and punishment, and their prediction of how others would react to each new action. One of the engineers at Twitter who had worked on the “Retweet” button later revealed that he regretted his contribution because it had made Twitter a nastier place. As he watched Twitter mobs forming through the use of the new tool, he thought to himself, “We might have just handed a 4-year-old a loaded weapon.”

As a social psychologist who studies emotion, morality, and politics, I saw this happening too. The newly tweaked platforms were almost perfectly designed to bring out our most moralistic and least reflective selves. The volume of outrage was shocking.

It was just this kind of twitchy and explosive spread of anger that James Madison had tried to protect us from as he was drafting the U.S. Constitution. The Framers of the Constitution were excellent social psychologists. They knew that democracy had an Achilles’ heel because it depended on the collective judgment of the people, and democratic communities are subject to “the turbulency and weakness of unruly passions.” The key to designing a sustainable republic, therefore, was to build in mechanisms to slow things down, cool passions, require compromise, and give leaders some insulation from the mania of the moment while still holding them accountable to the people periodically, on Election Day.

From the October 2018 issue: America is living James Madison’s nightmare

The tech companies that enhanced virality from 2009 to 2012 brought us deep into Madison’s nightmare. Many authors quote his comments in “Federalist No. 10” on the innate human proclivity toward “faction,” by which he meant our tendency to divide ourselves into teams or parties that are so inflamed with “mutual animosity” that they are “much more disposed to vex and oppress each other than to cooperate for their common good.”

But that essay continues on to a less quoted yet equally important insight, about democracy’s vulnerability to triviality. Madison notes that people are so prone to factionalism that “where no substantial occasion presents itself, the most frivolous and fanciful distinctions have been sufficient to kindle their unfriendly passions and excite their most violent conflicts.”

Social media has both magnified and weaponized the frivolous. Is our democracy any healthier now that we’ve had Twitter brawls over Representative Alexandria Ocasio-Cortez’s Tax the Rich dress at the annual Met Gala, and Melania Trump’s dress at a 9/11 memorial event, which had stitching that kind of looked like a skyscraper? How about Senator Ted Cruz’s tweet criticizing Big Bird for tweeting about getting his COVID vaccine?

Read: The Ukraine crisis briefly put America’s culture war in perspective

It’s not just the waste of time and scarce attention that matters; it’s the continual chipping-away of trust. An autocracy can deploy propaganda or use fear to motivate the behaviors it desires, but a democracy depends on widely internalized acceptance of the legitimacy of rules, norms, and institutions. Blind and irrevocable trust in any particular individual or organization is never warranted. But when citizens lose trust in elected leaders, health authorities, the courts, the police, universities, and the integrity of elections, then every decision becomes contested; every election becomes a life-and-death struggle to save the country from the other side. The most recent Edelman Trust Barometer (an international measure of citizens’ trust in government, business, media, and nongovernmental organizations) showed stable and competent autocracies (China and the United Arab Emirates) at the top of the list, while contentious democracies such as the United States, the United Kingdom, Spain, and South Korea scored near the bottom (albeit above Russia).

Recent academic studies suggest that social media is indeed corrosive to trust in governments, news media, and people and institutions in general. A working paper that offers the most comprehensive review of the research, led by the social scientists Philipp Lorenz-Spreen and Lisa Oswald, concludes that “the large majority of reported associations between digital media use and trust appear to be detrimental for democracy.” The literature is complex—some studies show benefits, particularly in less developed democracies—but the review found that, on balance, social media amplifies political polarization; foments populism, especially right-wing populism; and is associated with the spread of misinformation.

From the April 2021 issue: The internet doesn’t have to be awful

When people lose trust in institutions, they lose trust in the stories told by those institutions. That’s particularly true of the institutions entrusted with the education of children. History curricula have often caused political controversy, but Facebook and Twitter make it possible for parents to become outraged every day over a new snippet from their children’s history lessons––and math lessons and literature selections, and any new pedagogical shifts anywhere in the country. The motives of teachers and administrators come into question, and overreaching laws or curricular reforms sometimes follow, dumbing down education and reducing trust in it further. One result is that young people educated in the post-Babel era are less likely to arrive at a coherent story of who we are as a people, and less likely to share any such story with those who attended different schools or who were educated in a different decade.

The former CIA analyst Martin Gurri predicted these fracturing effects in his 2014 book, The Revolt of the Public. Gurri’s analysis focused on the authority-subverting effects of information’s exponential growth, beginning with the internet in the 1990s. Writing nearly a decade ago, Gurri could already see the power of social media as a universal solvent, breaking down bonds and weakening institutions everywhere it reached. He noted that distributed networks “can protest and overthrow, but never govern.” He described the nihilism of the many protest movements of 2011 that organized mostly online and that, like Occupy Wall Street, demanded the destruction of existing institutions without offering an alternative vision of the future or an organization that could bring it about.

Gurri is no fan of elites or of centralized authority, but he notes a constructive feature of the pre-digital era: a single “mass audience,” all consuming the same content, as if they were all looking into the same gigantic mirror at the reflection of their own society. In a comment to Vox that recalls the first post-Babel diaspora, he said:

The digital revolution has shattered that mirror, and now the public inhabits those broken pieces of glass. So the public isn’t one thing; it’s highly fragmented, and it’s basically mutually hostile. It’s mostly people yelling at each other and living in bubbles of one sort or another.

Mark Zuckerberg may not have wished for any of that. But by rewiring everything in a headlong rush for growth—with a naive conception of human psychology, little understanding of the intricacy of institutions, and no concern for external costs imposed on society—Facebook, Twitter, YouTube, and a few other large platforms unwittingly dissolved the mortar of trust, belief in institutions, and shared stories that had held a large and diverse secular democracy together.

I think we can date the fall of the tower to the years between 2011 (Gurri’s focal year of “nihilistic” protests) and 2015, a year marked by the “great awokening” on the left and the ascendancy of Donald Trump on the right. Trump did not destroy the tower; he merely exploited its fall. He was the first politician to master the new dynamics of the post-Babel era, in which outrage is the key to virality, stage performance crushes competence, Twitter can overpower all the newspapers in the country, and stories cannot be shared (or at least trusted) across more than a few adjacent fragments—so truth cannot achieve widespread adherence.

The many analysts, including me, who had argued that Trump could not win the general election were relying on pre-Babel intuitions, which said that scandals such as the Access Hollywood tape (in which Trump boasted about committing sexual assault) are fatal to a presidential campaign. But after Babel, nothing really means anything anymore––at least not in a way that is durable and on which people widely agree.

Politics After Babel

“Politics is the art of the possible,” the German statesman Otto von Bismarck said in 1867. In a post-Babel democracy, not much may be possible.

Of course, the American culture war and the decline of cross-party cooperation predate social media’s arrival. The mid-20th century was a time of unusually low polarization in Congress, which began reverting to historical levels in the 1970s and ’80s. The ideological distance between the two parties began increasing faster in the 1990s. Fox News and the 1994 “Republican Revolution” converted the GOP into a more combative party. For example, House Speaker Newt Gingrich discouraged new Republican members of Congress from moving their families to Washington, D.C., where they were likely to form social ties with Democrats and their families.

So cross-party relationships were already strained before 2009. But the enhanced virality of social media thereafter made it more hazardous to be seen fraternizing with the enemy or even failing to attack the enemy with sufficient vigor. On the right, the term RINO (Republican in Name Only) was superseded in 2015 by the more contemptuous term cuckservative, popularized on Twitter by Trump supporters. On the left, social media launched callout culture in the years after 2012, with transformative effects on university life and later on politics and culture throughout the English-speaking world.

From the September 2015 issue: The coddling of the American mind

What changed in the 2010s? Let’s revisit that Twitter engineer’s metaphor of handing a loaded gun to a 4-year-old. A mean tweet doesn’t kill anyone; it is an attempt to shame or punish someone publicly while broadcasting one’s own virtue, brilliance, or tribal loyalties. It’s more a dart than a bullet, causing pain but no fatalities. Even so, from 2009 to 2012, Facebook and Twitter passed out roughly 1 billion dart guns globally. We’ve been shooting one another ever since.

Social media has given voice to some people who had little previously, and it has made it easier to hold powerful people accountable for their misdeeds, not just in politics but in business, the arts, academia, and elsewhere. Sexual harassers could have been called out in anonymous blog posts before Twitter, but it’s hard to imagine that the #MeToo movement would have been nearly so successful without the viral enhancement that the major platforms offered. However, the warped “accountability” of social media has also brought injustice—and political dysfunction—in three ways.

First, the dart guns of social media give more power to trolls and provocateurs while silencing good citizens. Research by the political scientists Alexander Bor and Michael Bang Petersen found that a small subset of people on social-media platforms are highly concerned with gaining status and are willing to use aggression to do so. They admit that in their online discussions they often curse, make fun of their opponents, and get blocked by other users or reported for inappropriate comments. Across eight studies, Bor and Petersen found that being online did not make most people more aggressive or hostile; rather, it allowed a small number of aggressive people to attack a much larger set of victims. Even a small number of jerks were able to dominate discussion forums, Bor and Petersen found, because nonjerks are easily turned off from online discussions of politics. Additional research finds that women and Black people are harassed disproportionately, so the digital public square is less welcoming to their voices.

Second, the dart guns of social media give more power and voice to the political extremes while reducing the power and voice of the moderate majority. The “Hidden Tribes” study, by the pro-democracy group More in Common, surveyed 8,000 Americans in 2017 and 2018 and identified seven groups that shared beliefs and behaviors. The one furthest to the right, known as the “devoted conservatives,” comprised 6 percent of the U.S. population. The group furthest to the left, the “progressive activists,” comprised 8 percent of the population. The progressive activists were by far the most prolific group on social media: 70 percent had shared political content over the previous year. The devoted conservatives followed, at 56 percent.

These two extreme groups are similar in surprising ways. They are the whitest and richest of the seven groups, which suggests that America is being torn apart by a battle between two subsets of the elite who are not representative of the broader society. What’s more, they are the two groups that show the greatest homogeneity in their moral and political attitudes. This uniformity of opinion, the study’s authors speculate, is likely a result of thought-policing on social media: “Those who express sympathy for the views of opposing groups may experience backlash from their own cohort.” In other words, political extremists don’t just shoot darts at their enemies; they spend a lot of their ammunition targeting dissenters or nuanced thinkers on their own team. In this way, social media makes a political system based on compromise grind to a halt.

From the October 2021 issue: Anne Applebaum on how mob justice is trampling democratic discourse

Finally, by giving everyone a dart gun, social media deputizes everyone to administer justice with no due process. Platforms like Twitter devolve into the Wild West, with no accountability for vigilantes. A successful attack attracts a barrage of likes and follow-on strikes. Enhanced-virality platforms thereby facilitate massive collective punishment for small or imagined offenses, with real-world consequences, including innocent people losing their jobs and being shamed into suicide. When our public square is governed by mob dynamics unrestrained by due process, we don’t get justice and inclusion; we get a society that ignores context, proportionality, mercy, and truth.

Structural Stupidity

Since the tower fell, debates of all kinds have grown more and more confused. The most pervasive obstacle to good thinking is confirmation bias, which refers to the human tendency to search only for evidence that confirms our preferred beliefs. Even before the advent of social media, search engines were supercharging confirmation bias, making it far easier for people to find evidence for absurd beliefs and conspiracy theories, such as that the Earth is flat and that the U.S. government staged the 9/11 attacks. But social media made things much worse.

From the September 2018 issue: The cognitive biases tricking your brain

The most reliable cure for confirmation bias is interaction with people who don’t share your beliefs. They confront you with counterevidence and counterargument. John Stuart Mill said, “He who knows only his own side of the case, knows little of that,” and he urged us to seek out conflicting views “from persons who actually believe them.” People who think differently and are willing to speak up if they disagree with you make you smarter, almost as if they are extensions of your own brain. People who try to silence or intimidate their critics make themselves stupider, almost as if they are shooting darts into their own brain.

In his book The Constitution of Knowledge, Jonathan Rauch describes the historical breakthrough in which Western societies developed an “epistemic operating system”—that is, a set of institutions for generating knowledge from the interactions of biased and cognitively flawed individuals. English law developed the adversarial system so that biased advocates could present both sides of a case to an impartial jury. Newspapers full of lies evolved into professional journalistic enterprises, with norms that required seeking out multiple sides of a story, followed by editorial review, followed by fact-checking. Universities evolved from cloistered medieval institutions into research powerhouses, creating a structure in which scholars put forth evidence-backed claims with the knowledge that other scholars around the world would be motivated to gain prestige by finding contrary evidence.

Part of America’s greatness in the 20th century came from having developed the most capable, vibrant, and productive network of knowledge-producing institutions in all of human history, linking together the world’s best universities, private companies that turned scientific advances into life-changing consumer products, and government agencies that supported scientific research and led the collaboration that put people on the moon.

But this arrangement, Rauch notes, “is not self-maintaining; it relies on an array of sometimes delicate social settings and understandings, and those need to be understood, affirmed, and protected.” So what happens when an institution is not well maintained and internal disagreement ceases, either because its people have become ideologically uniform or because they have become afraid to dissent?

This, I believe, is what happened to many of America’s key institutions in the mid-to-late 2010s. They got stupider en masse because social media instilled in their members a chronic fear of getting darted. The shift was most pronounced in universities, scholarly associations, creative industries, and political organizations at every level (national, state, and local), and it was so pervasive that it established new behavioral norms backed by new policies seemingly overnight. The new omnipresence of enhanced-virality social media meant that a single word uttered by a professor, leader, or journalist, even if spoken with positive intent, could lead to a social-media firestorm, triggering an immediate dismissal or a drawn-out investigation by the institution. Participants in our key institutions began self-censoring to an unhealthy degree, holding back critiques of policies and ideas—even those presented in class by their students—that they believed to be ill-supported or wrong.

But when an institution punishes internal dissent, it shoots darts into its own brain.

The stupefying process plays out differently on the right and the left because their activist wings subscribe to different narratives with different sacred values. The “Hidden Tribes” study tells us that the “devoted conservatives” score highest on beliefs related to authoritarianism. They share a narrative in which America is eternally under threat from enemies outside and subversives within; they see life as a battle between patriots and traitors. According to the political scientist Karen Stenner, whose work the “Hidden Tribes” study drew upon, they are psychologically different from the larger group of “traditional conservatives” (19 percent of the population), who emphasize order, decorum, and slow rather than radical change.

Only within the devoted conservatives’ narratives do Donald Trump’s speeches make sense, from his campaign’s ominous opening diatribe about Mexican “rapists” to his warning on January 6, 2021: “If you don’t fight like hell, you’re not going to have a country anymore.”

The traditional punishment for treason is death, hence the battle cry on January 6: “Hang Mike Pence.” Right-wing death threats, many delivered by anonymous accounts, are proving effective in cowing traditional conservatives, for example in driving out local election officials who failed to “stop the steal.” The wave of threats delivered to dissenting Republican members of Congress has similarly pushed many of the remaining moderates to quit or go silent, giving us a party ever more divorced from the conservative tradition, constitutional responsibility, and reality. We now have a Republican Party that describes a violent assault on the U.S. Capitol as “legitimate political discourse,” supported—or at least not contradicted—by an array of right-wing think tanks and media organizations.

The stupidity on the right is most visible in the many conspiracy theories spreading across right-wing media and now into Congress. “Pizzagate,” QAnon, the belief that vaccines contain microchips, the conviction that Donald Trump won reelection—it’s hard to imagine any of these ideas or belief systems reaching the levels that they have without Facebook and Twitter.

The Democrats have also been hit hard by structural stupidity, though in a different way. In the Democratic Party, the struggle between the progressive wing and the more moderate factions is open and ongoing, and often the moderates win. The problem is that the left controls the commanding heights of the culture: universities, news organizations, Hollywood, art museums, advertising, much of Silicon Valley, and the teachers’ unions and teaching colleges that shape K–12 education. And in many of those institutions, dissent has been stifled: When everyone was issued a dart gun in the early 2010s, many left-leaning institutions began shooting themselves in the brain. And unfortunately, those were the brains that inform, instruct, and entertain most of the country.

Liberals in the late 20th century shared a belief that the sociologist Christian Smith called the “liberal progress” narrative, in which America used to be horrifically unjust and repressive, but, thanks to the struggles of activists and heroes, has made (and continues to make) progress toward realizing the noble promise of its founding. This story easily supports liberal patriotism, and it was the animating narrative of Barack Obama’s presidency. It is also the view of the “traditional liberals” in the “Hidden Tribes” study (11 percent of the population), who have strong humanitarian values, are older than average, and are largely the people leading America’s cultural and intellectual institutions.

But when the newly viralized social-media platforms gave everyone a dart gun, it was younger progressive activists who did the most shooting, and they aimed a disproportionate number of their darts at these older liberal leaders. Confused and fearful, the leaders rarely challenged the activists or their nonliberal narrative in which life at every institution is an eternal battle among identity groups over a zero-sum pie, and the people on top got there by oppressing the people on the bottom. This new narrative is rigidly egalitarian––focused on equality of outcomes, not of rights or opportunities. It is unconcerned with individual rights.

The universal charge against people who disagree with this narrative is not “traitor”; it is “racist,” “transphobe,” “Karen,” or some related scarlet letter marking the perpetrator as one who hates or harms a marginalized group. The punishment that feels right for such crimes is not execution; it is public shaming and social death.

You can see the stupefaction process most clearly when a person on the left merely points to research that questions or contradicts a favored belief among progressive activists. Someone on Twitter will find a way to associate the dissenter with racism, and others will pile on. For example, in the first week of protests after the killing of George Floyd, some of which included violence, the progressive policy analyst David Shor, then employed by Civis Analytics, tweeted a link to a study showing that violent protests back in the 1960s led to electoral setbacks for the Democrats in nearby counties. Shor was clearly trying to be helpful, but in the ensuing outrage he was accused of “anti-Blackness” and was soon dismissed from his job. (Civis Analytics has denied that the tweet led to Shor’s firing.)

The Shor case became famous, but anyone on Twitter had already seen dozens of examples teaching the basic lesson: Don’t question your own side’s beliefs, policies, or actions. And when traditional liberals go silent, as so many did in the summer of 2020, the progressive activists’ more radical narrative takes over as the governing narrative of an organization. This is why so many epistemic institutions seemed to “go woke” in rapid succession that year and the next, beginning with a wave of controversies and resignations at The New York Times and other newspapers, and continuing on to social-justice pronouncements by groups of doctors and medical associations (one publication by the American Medical Association and the Association of American Medical Colleges, for instance, advised medical professionals to refer to neighborhoods and communities as “oppressed” or “systematically divested” instead of “vulnerable” or “poor”), and the hurried transformation of curricula at New York City’s most expensive private schools.

Tragically, we see stupefaction playing out on both sides in the COVID wars. The right has been so committed to minimizing the risks of COVID that it has turned the disease into one that preferentially kills Republicans. The progressive left is so committed to maximizing the dangers of COVID that it often embraces an equally maximalist, one-size-fits-all strategy for vaccines, masks, and social distancing—even as they pertain to children. Such policies are not as deadly as spreading fears and lies about vaccines, but many of them have been devastating for the mental health and education of children, who desperately need to play with one another and go to school; we have little clear evidence that school closures and masks for young children reduce deaths from COVID. Most notably for the story I’m telling here, progressive parents who argued against school closures were frequently savaged on social media and met with the ubiquitous leftist accusations of racism and white supremacy. Others in blue cities learned to keep quiet.

American politics is getting ever more ridiculous and dysfunctional not because Americans are getting less intelligent. The problem is structural. Thanks to enhanced-virality social media, dissent is punished within many of our institutions, which means that bad ideas get elevated into official policy.

It’s Going to Get Much Worse

In a 2018 interview, Steve Bannon, the former adviser to Donald Trump, said that the way to deal with the media is “to flood the zone with shit.” He was describing the “firehose of falsehood” tactic pioneered by Russian disinformation programs to keep Americans confused, disoriented, and angry. But back then, in 2018, there was an upper limit to the amount of shit available, because all of it had to be created by a person (other than some low-quality stuff produced by bots).

Now, however, artificial intelligence is close to enabling the limitless spread of highly believable disinformation. The AI program GPT-3 is already so good that you can give it a topic and a tone and it will spit out as many essays as you like, typically with perfect grammar and a surprising level of coherence. In a year or two, when the program is upgraded to GPT-4, it will become far more capable. In a 2020 essay titled “The Supply of Disinformation Will Soon Be Infinite,” Renée DiResta, the research manager at the Stanford Internet Observatory, explained that spreading falsehoods—whether through text, images, or deep-fake videos—will quickly become inconceivably easy. (She co-wrote the essay with GPT-3.)

American factions won’t be the only ones using AI and social media to generate attack content; our adversaries will too. In a haunting 2018 essay titled “The Digital Maginot Line,” DiResta described the state of affairs bluntly. “We are immersed in an evolving, ongoing conflict: an Information World War in which state actors, terrorists, and ideological extremists leverage the social infrastructure underpinning everyday life to sow discord and erode shared reality,” she wrote. The Soviets used to have to send over agents or cultivate Americans willing to do their bidding. But social media made it cheap and easy for Russia’s Internet Research Agency to invent fake events or distort real ones to stoke rage on both the left and the right, often over race. Later research showed that an intensive campaign began on Twitter in 2013 but soon spread to Facebook, Instagram, and YouTube, among other platforms. One of the major goals was to polarize the American public and spread distrust—to split us apart at the exact weak point that Madison had identified.

We now know that it’s not just the Russians attacking American democracy. Before the 2019 protests in Hong Kong, China had mostly focused on domestic platforms such as WeChat. But now China is discovering how much it can do with Twitter and Facebook, for so little money, in its escalating conflict with the U.S. Given China’s own advances in AI, we can expect it to become more skillful over the next few years at further dividing America and further uniting China.

In the 20th century, America’s shared identity as the country leading the fight to make the world safe for democracy was a strong force that helped keep the culture and the polity together. In the 21st century, America’s tech companies have rewired the world and created products that now appear to be corrosive to democracy, obstacles to shared understanding, and destroyers of the modern tower.

Democracy After Babel

We can never return to the way things were in the pre-digital age. The norms, institutions, and forms of political participation that developed during the long era of mass communication are not going to work well now that technology has made everything so much faster and more multidirectional, and when bypassing professional gatekeepers is so easy. And yet American democracy is now operating outside the bounds of sustainability. If we do not make major changes soon, then our institutions, our political system, and our society may collapse during the next major war, pandemic, financial meltdown, or constitutional crisis.

What changes are needed? Redesigning democracy for the digital age is far beyond my abilities, but I can suggest three categories of reforms––three goals that must be achieved if democracy is to remain viable in the post-Babel era. We must harden democratic institutions so that they can withstand chronic anger and mistrust, reform social media so that it becomes less socially corrosive, and better prepare the next generation for democratic citizenship in this new age.

Harden Democratic Institutions

Political polarization is likely to increase for the foreseeable future. Thus, whatever else we do, we must reform key institutions so that they can continue to function even if levels of anger, misinformation, and violence increase far above those we have today.

For instance, the legislative branch was designed to require compromise, yet Congress, social media, and partisan cable news channels have co-evolved such that any legislator who reaches across the aisle may face outrage within hours from the extreme wing of her party, damaging her fundraising prospects and raising her risk of being primaried in the next election cycle.

Reforms should reduce the outsize influence of angry extremists and make legislators more responsive to the average voter in their district. One example of such a reform is to end closed party primaries, replacing them with a single, nonpartisan, open primary from which the top several candidates advance to a general election that also uses ranked-choice voting. A version of this voting system has already been implemented in Alaska, and it seems to have given Senator Lisa Murkowski more latitude to oppose former President Trump, whose favored candidate would be a threat to Murkowski in a closed Republican primary but is not in an open one.
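To make the mechanics concrete, here is a minimal sketch, in Python, of the instant-runoff tallying behind a ranked-choice general election. It is my illustration, not any state's actual counting procedure: the ballots and candidate names are hypothetical, and real systems add rules for ties and exhausted ballots.

```python
# A hypothetical instant-runoff tally; real election rules add tie-breaking and audit steps.
from collections import Counter

def instant_runoff(ballots):
    """Each ballot ranks candidates from most to least preferred.
    Repeatedly eliminate the last-place candidate until someone has a majority."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot toward its highest-ranked candidate still in the race.
        tallies = Counter(
            next(c for c in ballot if c in remaining)
            for ballot in ballots
            if any(c in remaining for c in ballot)
        )
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total or len(tallies) == 1:
            return leader
        # Drop candidates with no support, then eliminate the last-place one.
        remaining = set(tallies)
        remaining.discard(min(tallies, key=tallies.get))

# Five hypothetical ballots: "B" is eliminated first, and its vote transfers to "C".
print(instant_runoff([["A", "B"], ["A", "C"], ["B", "C"], ["C", "B"], ["C", "B"]]))
# -> C
```

The point of the exercise is the incentive it creates: because second- and third-choice rankings can decide the outcome, candidates gain from appealing beyond their angriest base.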

A second way to harden democratic institutions is to reduce the power of either political party to game the system in its favor, for example by drawing its preferred electoral districts or selecting the officials who will supervise elections. These jobs should all be done in a nonpartisan way. Research on procedural justice shows that when people perceive that a process is fair, they are more likely to accept the legitimacy of a decision that goes against their interests. Just think of the damage already done to the Supreme Court’s legitimacy by the Senate’s Republican leadership when it blocked consideration of Merrick Garland for a seat that opened up nine months before the 2016 election, and then rushed through the appointment of Amy Coney Barrett in 2020. A widely discussed reform would end this political gamesmanship by having justices serve staggered 18-year terms so that each president makes one appointment every two years.

Reform Social Media

A democracy cannot survive if its public squares are places where people fear speaking up and where no stable consensus can be reached. Social media’s empowerment of the far left, the far right, domestic trolls, and foreign agents is creating a system that looks less like democracy and more like rule by the most aggressive.

But it is within our power to reduce social media’s ability to dissolve trust and foment structural stupidity. Reforms should limit the platforms’ amplification of the aggressive fringes while giving more voice to what More in Common calls “the exhausted majority.”

Those who oppose regulation of social media generally focus on the legitimate concern that government-mandated content restrictions will, in practice, devolve into censorship. But the main problem with social media is not that some people post fake or toxic stuff; it’s that fake and outrage-inducing content can now attain a level of reach and influence that was not possible before 2009. The Facebook whistleblower Frances Haugen advocates for simple changes to the architecture of the platforms, rather than for massive and ultimately futile efforts to police all content. For example, she has suggested modifying the “Share” function on Facebook so that after any content has been shared twice, the third person in the chain must take the time to copy and paste the content into a new post. Reforms like this are not censorship; they are viewpoint-neutral and content-neutral, and they work equally well in all languages. They don’t stop anyone from saying anything; they just slow the spread of content that is, on average, less likely to be true.

Perhaps the biggest single change that would reduce the toxicity of existing platforms would be user verification as a precondition for gaining the algorithmic amplification that social media offers.


Banks and other industries have “know your customer” rules so that they can’t do business with anonymous clients laundering money from criminal enterprises. Large social-media platforms should be required to do the same. That does not mean users would have to post under their real names; they could still use a pseudonym. It just means that before a platform spreads your words to millions of people, it has an obligation to verify (perhaps through a third party or nonprofit) that you are a real human being, in a particular country, and are old enough to be using the platform. This one change would wipe out most of the hundreds of millions of bots and fake accounts that currently pollute the major platforms. It would also likely reduce the frequency of death threats, rape threats, racist nastiness, and trolling more generally. Research shows that antisocial behavior becomes more common online when people feel that their identity is unknown and untraceable.
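In software terms, this is a gate on amplification, not on speech. The following sketch is purely illustrative, with field and function names that are my own assumptions: unverified accounts can still post under a pseudonym, but only verified ones are eligible for algorithmic boosting.

```python
# Purely illustrative; the account fields and function names are assumptions, not a real API.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str            # may be a pseudonym; real names are not required
    verified_human: bool   # confirmed a real person, e.g. by a third party or nonprofit
    verified_age: bool     # confirmed old enough to use the platform

def eligible_for_amplification(account: Account) -> bool:
    """Anyone can post; only verified accounts get algorithmic reach."""
    return account.verified_human and account.verified_age

print(eligible_for_amplification(Account("pseudonymous_critic", True, True)))   # True
print(eligible_for_amplification(Account("bot_account_4529", False, False)))    # False
```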

In any case, the growing evidence that social media is damaging democracy is sufficient to warrant greater oversight by a regulatory body, such as the Federal Communications Commission or the Federal Trade Commission. One of the first orders of business should be compelling the platforms to share their data and their algorithms with academic researchers.

Prepare the Next Generation

The members of Gen Z––those born in and after 1997––bear none of the blame for the mess we are in, but they are going to inherit it, and the preliminary signs are that older generations have prevented them from learning how to handle it.

Childhood has become more tightly circumscribed in recent generations––with less opportunity for free, unstructured play; less unsupervised time outside; more time online. Whatever else the effects of these shifts, they have likely impeded the development of abilities needed for effective self-governance for many young adults. Unsupervised free play is nature’s way of teaching young mammals the skills they’ll need as adults, which for humans include the ability to cooperate, make and enforce rules, compromise, adjudicate conflicts, and accept defeat. A brilliant 2015 essay by the economist Steven Horwitz argued that free play prepares children for the “art of association” that Alexis de Tocqueville said was the key to the vibrancy of American democracy; he also argued that its loss posed “a serious threat to liberal societies.” A generation prevented from learning these social skills, Horwitz warned, would habitually appeal to authorities to resolve disputes and would suffer from a “coarsening of social interaction” that would “create a world of more conflict and violence.”

And while social media has eroded the art of association throughout society, it may be leaving its deepest and most enduring marks on adolescents. A surge in rates of anxiety, depression, and self-harm among American teens began suddenly in the early 2010s. (The same thing happened to Canadian and British teens, at the same time.) The cause is not known, but the timing points to social media as a substantial contributor—the surge began just as the large majority of American teens became daily users of the major platforms. Correlational and experimental studies back up the connection to depression and anxiety, as do reports from young people themselves, and from Facebook’s own research, as reported by The Wall Street Journal.

Depression makes people less likely to want to engage with new people, ideas, and experiences. Anxiety makes new things seem more threatening. As these conditions have risen and as the lessons on nuanced social behavior learned through free play have been delayed, tolerance for diverse viewpoints and the ability to work out disputes have diminished among many young people. For example, university communities that could tolerate a range of speakers as recently as 2010 arguably began to lose that ability in subsequent years, as Gen Z began to arrive on campus. Attempts to disinvite visiting speakers rose. Students did not just say that they disagreed with visiting speakers; some said that those lectures would be dangerous, emotionally devastating, a form of violence. Because rates of teen depression and anxiety have continued to rise into the 2020s, we should expect these views to continue in the generations to follow, and indeed to become more severe.

The most important change we can make to reduce the damaging effects of social media on children is to delay entry until they have passed through puberty. Congress should update the Children’s Online Privacy Protection Act, which unwisely set the age of so-called internet adulthood (the age at which companies can collect personal information from children without parental consent) at 13 back in 1998, while making little provision for effective enforcement. The age should be raised to at least 16, and companies should be held responsible for enforcing it.

More generally, to prepare the members of the next generation for post-Babel democracy, perhaps the most important thing we can do is let them out to play. Stop starving children of the experiences they most need to become good citizens: free play in mixed-age groups of children with minimal adult supervision. Every state should follow the lead of Utah, Oklahoma, and Texas and pass a version of the Free-Range Parenting Law that helps assure parents that they will not be investigated for neglect if their 8- or 9-year-old children are spotted playing in a park. With such laws in place, schools, educators, and public-health authorities should then encourage parents to let their kids walk to school and play in groups outside, just as more kids used to do.

Hope After Babel

The story I have told is bleak, and there is little evidence to suggest that America will return to some semblance of normalcy and stability in the next five or 10 years. Which side is going to become conciliatory? What is the likelihood that Congress will enact major reforms that strengthen democratic institutions or detoxify social media?

Yet when we look away from our dysfunctional federal government, disconnect from social media, and talk with our neighbors directly, things seem more hopeful. Most Americans in the More in Common report are members of the “exhausted majority,” which is tired of the fighting and is willing to listen to the other side and compromise. Most Americans now see that social media is having a negative impact on the country, and are becoming more aware of its damaging effects on children.

Will we do anything about it?

When Tocqueville toured the United States in the 1830s, he was impressed by the American habit of forming voluntary associations to fix local problems, rather than waiting for kings or nobles to act, as Europeans would do. That habit is still with us today. In recent years, Americans have started hundreds of groups and organizations dedicated to building trust and friendship across the political divide, including BridgeUSA, Braver Angels (on whose board I serve), and many others listed at BridgeAlliance.us. We cannot expect Congress and the tech companies to save us. We must change ourselves and our communities.

What would it be like to live in Babel in the days after its destruction? We know. It is a time of confusion and loss. But it is also a time to reflect, listen, and build.


This article appears in the May 2022 print edition with the headline “After Babel.”

Thursday assorted links

1 Share

1. Claims about overnight excess returns.  Big if true.

2. Thread about the Malaysian economy.

3. Lithuania and nested games — Bloomberg column.

4. How often do bystanders stop attacks? (NYT)

5. What does research on a windfall profits tax for oil tell us?  Recommended, brutal but not unexpected.  #TheGreatForgetting

The post Thursday assorted links appeared first on Marginal REVOLUTION.

Hindu Nationalists Try to Demonize St. Devasahayam

1 Share
Photo (National Catholic Register): Devotees of St. Devasahayam pray June 5 at the spot, now a shrine, where the Indian saint is said to have knelt and prayed before his execution.

In the wake of the May canonization, fundamentalist networks are continuing their campaign of online defamation, as they have with other revered Christians.

Honoring Albert Camus

1 Share
We are approaching the sixtieth anniversary of the death of Albert Camus, commentator on human absurdity.