And such responses are precisely what bring in audiences and revenue. “The business models that run the social-media industrial complex have a lot to do with the outcomes we’re seeing,” Aral says. “It’s an attention economy, and businesses want you engaged. How do they get engagement? Well, they give you little dopamine hits, and … get you riled up.”
The political implications are sobering. During the 2016 US presidential campaign, Russia spread false information to at least 126 million people on Facebook and another 20 million on Instagram. “I think we need to be a lot more vigilant than we are,” says Aral.
To that end, he favors automated and user-generated labeling of false news, and measures to minimize the ad revenue that content creators can collect from misinformation. He believes federal privacy measures are potentially useful and calls for data portability and interoperability, so consumers “could freely switch from one network to another.” He does not endorse breaking up Facebook, suggesting instead that the social-media economy needs structural reform.
But without change, he adds, Facebook and the others risk civic backlash. “If you get me angry and riled up, I might click more in the short term, but I might also grow really tired and annoyed by how this is making my life miserable, and I might turn you off entirely,” he says. Still, bad outcomes are not inevitable—for the companies or for society.
“Technology is what we make it,” he says, “and we are abdicating our responsibility to steer technology toward good and away from bad. That is the path I try to illuminate in this book.”
Send book news to
MIT News, 1 Main Street, 13th Floor
Cambridge, MA 02142