– Sep 21, 2021 6:32 pm UTC
Facebook had it rough last week. Leaked documents—many leaked documents—formed the backbone of a string of reports published in The Wall Street Journal. Together, the stories paint the picture of a company barely in control of its own creation. The revelations run the gamut: Facebook had created special rules for VIPs that largely exempted 5.8 million users from moderation, forced troll farm content on 40 percent of America, created toxic conditions for teen girls, ignored cartels and human traffickers, and even undermined CEO Mark Zuckerberg’s own desire to promote vaccination against COVID.
Now, Facebook wants you to know it’s sorry and that it’s trying to do better.

The change, Facebook said, was the integration of safety and security into product development. The press release doesn’t say when the change was made, and a Facebook spokesperson couldn’t confirm for Ars when integrity became more embedded in the product teams. But the press release does say the company’s Facebook Horizon VR efforts benefitted from this process. Those were released to beta only last year.
The release would appear to confirm that, prior to the development of Horizon, safety and security were sideshows, considered only after features had been defined and code had been written. Or maybe problems weren't addressed until even later, when users encountered them. Regardless of when it happened, it's a stunning revelation for a multibillion-dollar company that counts 2 billion people as users.
Facebook isn’t the first company to have a cavalier approach to security, and as such, it didn’t have to make the same mistakes. Early in Facebook’s history, all it had to do was look as far as one of its major shareholders, Microsoft, which had bought special stock in the startup in 2007.
In the late 1990s and early 2000s, Microsoft had its own issues with security, producing versions of Windows and Internet Information Server that were riddled with security holes. The company began to fix things after Bill Gates made security the company's top priority in his 2002 "Trustworthy Computing" memo. One result of that push was the Microsoft Security Development Lifecycle, which implores managers to "make security everyone's business." Microsoft began publishing books about its approach in the mid-2000s, and it's hard to imagine that Facebook's engineers were unaware of it.
But a security-first development program must have come with costs that Facebook was unwilling to bear—namely, growth. Time and again, the company has been confronted with choices about whether to address a safety or security problem or to prioritize growth. It has ignored privacy concerns by allowing business partners to access users' personal data. It killed a project to use artificial intelligence to tackle disinformation on the platform. Its focus on Groups a few years ago enabled "super-inviters" to recruit hundreds of people to the "Stop the Steal" group that ultimately helped foment the January 6 insurrection at the US Capitol. In each case, the company chose to pursue growth first and deal with the consequences later.
That mindset appears to have been baked into the company from the beginning, when Zuckerberg took an investment from Peter Thiel and copied the “blitzscaling” strategy that Thiel and others used at PayPal.

"The big picture is that several mid-level VPs and Directors invested and built big quantitative social science teams on the belief that knowing what was wrong would lead to positive change. Those teams have run into the power of the Growth and unified Policy teams," Alex Stamos, Facebook's former chief security officer, tweeted this week. "Turns out the knowledge isn't helpful when the top execs haven't changed the way products are measured and employees are compensated."
Even today, there doesn’t appear to be one person who is responsible for safety and security at the company. “Our integrity work is made up of many different teams, so hard to say [if there is] one leader, but Guy Rosen is VP of Integrity,” a Facebook spokesperson told Ars. Perhaps it’s telling that Rosen doesn’t appear on Facebook’s list of top management.
For now, Facebook doesn’t seem to have much incentive to change. Its stock price is up more than 50 percent over the last year, and shareholders don’t have much leverage given the outsize power of Zuckerberg’s voting shares. Growth at all costs will probably continue. Until, of course, the safety and security problems become so large that they start harming growth and retention. Given Facebook’s statement today, it’s not clear whether the company is there yet. If that moment arrives—and if Microsoft’s transition is anything to go by—it will be years before an embrace of safety and security affects users in a meaningful way.