
Is President Trump Above The Law? Possibly


President Trump, no stranger to litigation, is now facing the dual threat of criminal and civil legal action. As special counsel Robert Mueller marches forward with his investigation into Russian interference in the 2016 election, evidence is mounting for the possibility of obstruction of justice charges. And Trump is already being sued in state court for defamation by one of the women who came forward during the presidential campaign with claims of past sexual misconduct.

The stakes are high, for both Trump and his critics. With a Republican-controlled Congress that shows little appetite for impeachment, legal action could be a powerful tool for undermining Trump’s presidency — and, if there’s evidence that he committed a crime, the most effective means of holding him accountable. But while both the lawsuit and the possibility of criminal charges have the potential to severely damage or even derail his presidency, it’s not clear how much Trump should be worried about this multi-pronged threat while he’s in the White House.

It is undisputed, according to legal experts, that litigation over obstruction of justice or defamation could proceed after Trump leaves office. But the question of whether the president can be sued or prosecuted while in office is murkier.

In 1997, the Supreme Court ruled that Bill Clinton could face a civil lawsuit in federal court for actions outside the scope of his official duties. But the opinion pointedly noted that it was not addressing whether similar suits could proceed in state courts, which each have their own judges, juries and procedures. And although the Constitution provides instructions for how a president can be removed from office, it’s silent on whether the commander-in-chief can be indicted or criminally prosecuted.

Trump’s lawyers are contending that, as president, he is immune both from civil lawsuits in state courts and from criminal charges like obstruction of justice because he holds the highest office in the land. And they could well be right. The courts have not ruled definitively on either issue, and it’s possible they would exempt him when presented with a chance to weigh in.

That’s one of several factors that could ultimately help Trump:

The courts could rule that presidents need to be impeached and removed from office before they can be prosecuted.

There are two schools of thought on why the Constitution doesn’t address the question of whether a president can be indicted. The first is that impeachment — rather than criminal prosecution — was seen as the appropriate course of action when there was evidence that the president might have broken the law. This interpretation can be traced as far back as the writings of Alexander Hamilton, who appeared to assume that the president would first be impeached and removed from office, then prosecuted for any crimes related to impeachment.

In this view, postponing prosecution until the president is out of office does not mean he is above the law; it simply avoids the impracticalities of bringing a sitting president to trial.

“The case for impeachment is that a criminal prosecution of the president will inevitably have political overtones that would muddy the course of justice,” said Brian Kalt, a professor at Michigan State University College of Law. “So the idea is that you first put the president through a special political process — which is impeachment — and then he can go through the criminal justice system when he’s no longer in this special position of power.” If Congress chooses not to act even in the face of evidence that the president committed a crime, Kalt said, there’s another solution: The people can choose not to re-elect him.

Others have argued, however, that indictment wasn’t mentioned because it was obvious to the framers of the Constitution that criminal prosecution and impeachment were remedies for different kinds of misconduct.

“Presidents can behave in ways that make them unfit to be president, even if that action isn’t criminal, and that’s what impeachment is designed to deal with,” said Eric Freedman, a professor at Hofstra University’s Deane School of Law. “And there are criminal activities that are bad and deserve criminal action but aren’t necessarily impeachable. They’re essentially on different tracks.” A traffic violation might be an example of a criminal offense that isn’t impeachable, while ordering the firing of a special prosecutor might be impeachable but not criminal.

The ability to sue the president in state court is still an open question.

The debate over whether the president can be sued in state courts, on the other hand, is already in motion, and here Trump looks more vulnerable. In the case involving Clinton, the Supreme Court rejected his argument that a federal lawsuit would be an unconstitutional burden on his ability to perform his duties as president. “If properly managed by the District Court, it appears to us highly unlikely to occupy any substantial amount of petitioner’s time,” Justice John Paul Stevens wrote of the case.

But there is some chance Trump could temporarily escape the legal process if the high court went his way in this case. Trump’s lawyers are arguing now that allowing state courts to rule in lawsuits involving the president would violate the supremacy clause of the Constitution, which says that state law is always subordinate to federal law.

University of Pennsylvania Law School professor Stephen Burbank thinks the distinction that the president’s attorneys are making between lawsuits in state and federal courts isn’t very convincing. “They’re appealing to a precedent that state courts can’t exercise too much control over the president,” he said. “But as long as the state courts are deferential to the president’s schedule, it’s hard to see how requiring a deposition or even testimony would fundamentally interfere with the president’s duties.”

Freedman also thinks it’s clear that presidents can be sued in state court while they’re in office, in part because such cases are likely to have less of an impact on a presidency than a criminal indictment would. “Being sued is, on the whole, perceived to be less invasive and serious than being criminally prosecuted,” Freedman said. For her part, Summer Zervos, the woman suing Trump for defamation, and her lawyers presumably think they have a better shot at winning in state court, since they chose to file there rather than in federal court.

Mueller could force the issue, but he might not want to.

The dispute over whether a president can be put on trial was nearly settled in 1974, when special prosecutor Leon Jaworski wrestled with whether to indict President Richard Nixon for his role in the Watergate break-in. The Supreme Court ruled that Nixon did not have the power to block Jaworski’s subpoena for the infamous Watergate tapes, but Jaworski ultimately decided not to push for an indictment. Nixon resigned before he could be impeached — at which point he became vulnerable to prosecution, but he was pardoned by his successor, Gerald Ford.

The question of whether the president can be indicted re-emerged in the 1990s, when Clinton was investigated by independent counsel Kenneth Starr. In a 1998 memo, Starr concluded that he did have the authority to indict Clinton, but, like Jaworski, he decided instead to refer the case to Congress for impeachment.

The fact that ultimately neither Jaworski nor Starr chose to indict the president he was investigating illustrates the practical challenges faced by Mueller. Even if indicting Trump turns out to be legal, Mueller would likely be politically lambasted for taking the fate of the presidency into his own hands.

“Indicting Trump for something like obstruction of justice might prompt the courts to resolve whether it’s actually possible to indict the president, but it could also derail the investigation,” Kalt said. “Deciding to file criminal charges against Trump could easily be perceived as a politically motivated overreach — Mueller saying he gets to decide who’s the president.”

A better strategy might be to copy Jaworski and name Trump as an unindicted co-conspirator, Freedman said. In this scenario, Trump would not be formally charged, but any relevant evidence against him would be admissible in the trials of the people who were indicted, allowing it to be publicly disseminated and scrutinized.

It may be wisest, though, for Mueller to present his findings to Congress — and the people — and let them act first, rather than trying to settle a constitutional debate that’s been percolating since the 18th century. That’s one major reason why the question of whether the president can be indicted or prosecuted may not be answered anytime soon.

Another is that the special counsel is technically bound by a Department of Justice legal opinion that states that the president cannot be indicted or prosecuted. There’s debate about whether this opinion would really preclude Mueller from indicting Trump, but its existence provides an additional incentive for the special counsel to proceed cautiously, since a violation of DOJ rules would be grounds for ending his investigation. Mueller would be free of these limitations, according to Freedman, if Congress passed a statute giving him more protections — similar to a previous law that placed the special counsel under the authority of a three-judge panel, rather than DOJ — but that seems unlikely to occur.

All of this indicates that, even if we don’t know for sure whether Trump can be criminally prosecuted, he probably doesn’t have much to worry about for now — at least unless Mueller decides to take a big risk by indicting him.


How Wolff Got His Story Is As Important As What He Wrote


There has been a raging debate about whether or not all of the material in Michael Wolff’s book measures up to journalistic standards. I’m going to put that question aside for a moment because I’m more interested in what his access says about the Trump administration.

One of the White House’s initial charges against the book was that Wolff didn’t have the access he claims to have had. Yesterday Sebastian Gorka pretty much drove a stake through that argument by writing that he had been told to speak to Wolff for the book.

The author himself told Chuck Todd that he didn’t have an agenda when he first visited the White House, but he also admitted to Savannah Guthrie that he would do anything to get the story. We don’t know what he told the president or anyone in the administration about his intentions, but we do know that within days of the inauguration, he wrote about how the media was losing the war with Trump and told Brian Stelter that they were having a nervous breakdown over Trump and should instead be covering him like they would any other new president. I’m sure that was music to the president’s ears. Around the same time, Wolff wrote a rather glowing profile of Kellyanne Conway, just as she was making a name for herself on television as the one who defended Trump’s “alternative facts.”

Regardless of what Wolff claims about his own motives for doing all of that, it sent a clear message to the White House that he had the one thing Trump looks for in allies: loyalty to Trump. So it is no surprise that he was given access. He had done what was necessary to ingratiate himself with the president. Obviously, things didn’t work out the way the White House had planned.

I am once again reminded of what psychiatrists told Richard Greene about interacting with people who have Narcissistic Personality Disorder (NPD).

There are only two ways to deal with someone with NPD, and they are both dangerous. There is no healthy way of interacting with someone with this affliction. If you criticize them they will lash out at you and if they have a great deal of power, that can be consequential. If you compliment them it only acts to increase the delusional and grandiose reality the sufferer has created, causing him to be even more reliant on constant and endless compliments and unwavering support.

Obviously, Wolff chose the second option and it gave him the kind of access he needed to write his book. That is a fact. The question for journalists is whether that is an acceptable strategy to use with this president. Drew Magary answers in the affirmative.

[Wolff] did it by sleazily ingratiating himself with the White House, gaining access, hosting weird private dinners, and then taking full advantage of the administration’s basic lack of knowledge about how reporting works. Some of the officials Wolff got on tape claim to be unaware that they were on the record. Wolff denies this, but he’s very much up front in the book’s intro about the fact that he was able to exploit the incredible “lack of experience” on display here. In other words, Wolff got his book by playing a bunch of naive dopes.

Thank God for that. Wolff has spent this week thoroughly exploiting Trump and his minions the same way they’ve exploited the cluelessness of others. And he pulled it off because, at long last, there was a reporter out there willing to toss decorum aside and burn bridges the same way Trump does.

I’m not so sure. I never react positively to advice that suggests we should join opponents in the gutter just because that is where they dwell. But it’s hard to argue with the outcomes Wolff achieved.

The more important takeaway is what this says about how the current occupant of the White House is so easily played. We’ve seen foreign leaders in countries from China to Saudi Arabia do the same kind of ingratiating with Trump, and he positively eats it up. There is no mystery to how they can go about playing this president to get exactly what they want. All it takes is tapping into that need for ego validation, and Trump is willing to give away the store in return.

But it’s not just foreign leaders who are singing this tune. Wolff himself writes that Trump hated Paul Ryan until the Speaker came to the White House to grovel and kiss ass. CNN reports that this is exactly what is happening with Republican Senators like Lindsey Graham. Their motives are no different than those of the leadership in China and Saudi Arabia: to play an unfit president on the assumption that it will get them what they want (i.e., tax cuts, entitlement cuts, military spending).

In the end, Wolff used the access he got via this game in order to expose the truth about this administration. That isn’t what foreign leaders or Republicans are after. They’re playing him to get what they want. The fact that this president makes himself vulnerable to that kind of tactic is yet another example of why he is unfit to serve.


How to Download Fire and Fury: Inside the Trump White House Now as a Free Audiobook


Despite cease-and-desist demands issued by the president's lawyers, Michael Wolff's Fire and Fury: Inside the Trump White House is now out, and it's the #1 bestselling book on Amazon. If you want a print copy, you'll have to wait 2-4 weeks. But there are some more immediate options: you can instantly snag a copy in Kindle format ($14.99), or download it as an audiobook essentially for free.

If you start a 30-day free trial with Audible.com, you can download two free audiobooks of your choice. At the end of the 30 days, you can decide whether or not to become an Audible subscriber. (I definitely recommend the service and use it every day.) No matter what you decide, you get to keep the two free audiobooks. Fire and Fury: Inside the Trump White House can be one of them. It runs 12 hours.

To sign up for Audible's free trial program, follow the prompts and instructions on the sign-up page.

NB: Audible is an Amazon.com subsidiary, and we're a member of their affiliate program. Also, this post is not an endorsement of the book. (We haven't read it yet.) It's simply an FYI on how you can "read" a bestselling book that's in short supply.

Follow Open Culture on Facebook and Twitter and share intelligent media with your friends. Or better yet, sign up for our daily email and get a daily dose of Open Culture in your inbox. 

If you'd like to support Open Culture and our mission, please consider making a donation to our site. It's hard to rely 100% on ads, and your contributions will help us provide the best free cultural and educational materials.

Related Content:

900 Free Audio Books: Download Great Books for Free

Gonzo Illustrator Ralph Steadman Draws the American Presidents, from Nixon to Trump

A 1958 TV Show Had an Unsavory Character Named “Trump” Who Promised to Build a Wall & Save the World



Nebraska State Senator proposes constitutional amendment to allow corporations to create tiny, sovereign nations with no laws, taxes or rules


Nebraska State Senator Paul Schumacher [R-22] [(402) 471-2715] has proposed an amendment to the state constitution that would create 36-square-mile regions in the state where corporations would enjoy up to 99 years of sovereignty, with "no city or state taxes and no local or state regulations."


The Core of Trump’s Genius

For more than a year now, I’ve been hearing from people in the inner circles of official Washington...

How to Fix Facebook—Before It Fixes Us


In early 2006, I got a call from Chris Kelly, then the chief privacy officer at Facebook, asking if I would be willing to meet with his boss, Mark Zuckerberg. I had been a technology investor for more than two decades, but the meeting was unlike any I had ever had. Mark was only twenty-two. He was facing a difficult decision, Chris said, and wanted advice from an experienced person with no stake in the outcome.

When we met, I began by letting Mark know the perspective I was coming from. Soon, I predicted, he would get a billion-dollar offer to buy Facebook from either Microsoft or Yahoo, and everyone, from the company’s board to the executive staff to Mark’s parents, would advise him to take it. I told Mark that he should turn down any acquisition offer. He had an opportunity to create a uniquely great company if he remained true to his vision. At two years old, Facebook was still years away from its first dollar of profit. It was still mostly limited to students and lacked most of the features we take for granted today. But I was convinced that Mark had created a game-changing platform that would eventually be bigger than Google was at the time. Facebook wasn’t the first social network, but it was the first to combine true identity with scalable technology. I told Mark the market was much bigger than just young people; the real value would come when busy adults, parents and grandparents, joined the network and used it to keep in touch with people they didn’t get to see often.

My little speech only took a few minutes. What ensued was the most painful silence of my professional career. It felt like an hour. Finally, Mark revealed why he had asked to meet with me: Yahoo had made that billion-dollar offer, and everyone was telling him to take it.

It only took a few minutes to help him figure out how to get out of the deal. So began a three-year mentoring relationship. In 2007, Mark offered me a choice between investing and joining the board of Facebook. As a professional investor, I chose the former. We spoke often about a range of issues, culminating in my suggestion that he hire Sheryl Sandberg as chief operating officer, and then my help in recruiting her. (Sheryl had introduced me to Bono in 2000; a few years later, he and I formed Elevation Partners, a private equity firm.) My role as a mentor ended prior to the Facebook IPO, when board members like Marc Andreessen and Peter Thiel took on that role.

In my thirty-five-year career in technology investing, I have never made a bigger contribution to a company’s success than I made at Facebook. It was my proudest accomplishment. I admired Mark and Sheryl enormously. Not surprisingly, Facebook became my favorite app. I checked it constantly, and I became an expert in using the platform by marketing my rock band, Moonalice, through a Facebook page. As the administrator of that page, I learned to maximize the organic reach of my posts and use small amounts of advertising dollars to extend and target that reach. It required an ability to adapt, because Facebook kept changing the rules. By successfully adapting to each change, we made our page among the highest-engagement fan pages on the platform.

My familiarity with building organic engagement put me in a position to notice that something strange was going on in February 2016. The Democratic primary was getting under way in New Hampshire, and I started to notice a flood of viciously misogynistic anti-Clinton memes originating from Facebook groups supporting Bernie Sanders. I knew how to build engagement organically on Facebook. This was not organic. It appeared to be well organized, with an advertising budget. But surely the Sanders campaign wasn’t stupid enough to be pushing the memes themselves. I didn’t know what was going on, but I worried that Facebook was being used in ways that the founders did not intend.

A month later I noticed an unrelated but equally disturbing news item. A consulting firm was revealed to be scraping data about people interested in the Black Lives Matter protest movement and selling it to police departments. Only after that news came out did Facebook announce that it would cut off the company’s access to the information. That got my attention. Here was a bad actor violating Facebook’s terms of service, doing a lot of harm, and then being slapped on the wrist. Facebook wasn’t paying attention until after the damage was done. I made a note to myself to learn more.

Meanwhile, the flood of anti-Clinton memes continued all spring. I still didn’t understand what was driving it, except that the memes were viral to a degree that didn’t seem to be organic. And, as it turned out, something equally strange was happening across the Atlantic.

When citizens of the United Kingdom voted to leave the European Union in June 2016, most observers were stunned. The polls had predicted a victory for the “Remain” campaign. And common sense made it hard to believe that Britons would do something so obviously contrary to their self-interest. But neither common sense nor the polling data fully accounted for a crucial factor: the new power of social platforms to amplify negative messages.

Facebook, Google, and other internet platforms make their money from advertising. As with all ad-supported businesses, that means advertisers are the true customers, while audience members are the product. Until the past decade, media platforms were locked into a one-size-fits-all broadcast model. Success with advertisers depended on producing content that would appeal to the largest possible audience. Compelling content was essential, because audiences could choose from a variety of distribution mediums, none of which could expect to hold any individual consumer’s attention for more than a few hours. TVs weren’t mobile. Computers were mobile, but awkward. Newspapers and books were mobile and not awkward, but relatively cerebral. Movie theaters were fun, but inconvenient.

When their business was limited to personal computers, the internet platforms were at a disadvantage. Their proprietary content couldn’t compete with traditional media, and their delivery medium, the PC, was generally only usable at a desk. Their one advantage—a wealth of personal data—was not enough to overcome the disadvantage in content. As a result, web platforms had to underprice their advertising.

Smartphones changed the advertising game completely. It took only a few years for billions of people to have an all-purpose content delivery system easily accessible sixteen hours or more a day. This turned media into a battle to hold users’ attention as long as possible. And it left Facebook and Google with a prohibitive advantage over traditional media: with their vast reservoirs of real-time data on two billion individuals, they could personalize the content seen by every user. That made it much easier to monopolize user attention on smartphones and made the platforms uniquely attractive to advertisers. Why pay a newspaper in the hopes of catching the attention of a certain portion of its audience, when you can pay Facebook to reach exactly those people and no one else?

Whenever you log into Facebook, there are millions of posts the platform could show you. The key to its business model is the use of algorithms, driven by individual user data, to show you stuff you’re more likely to react to. Wikipedia defines an algorithm as “a set of rules that precisely defines a sequence of operations.” Algorithms appear value neutral, but the platforms’ algorithms are actually designed with a specific value in mind: maximum share of attention, which optimizes profits. They do this by sucking up and analyzing your data, using it to predict what will cause you to react most strongly, and then giving you more of that.
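The loop described above can be sketched as a toy ranking function. Everything here is a hypothetical illustration: the post fields, the hand-set affinities, and the outrage multiplier are invented stand-ins, not Facebook's actual signals or model.

```python
# Toy sketch of an attention-maximizing feed ranker.
# All fields and weights are invented illustrations; a real system
# learns these from vast per-user behavioral data.

def predicted_reaction(post, user_profile):
    """Predict how strongly a user will react to a post."""
    score = sum(affinity
                for topic, affinity in user_profile.items()
                if topic in post["topics"])
    # Hand-coded boost standing in for the observed tendency of
    # fear/anger content to generate more engagement and sharing.
    return score * (1.0 + post["outrage_level"])

def rank_feed(posts, user_profile, k=3):
    """Show the k posts the user is predicted to react to most."""
    return sorted(posts,
                  key=lambda p: predicted_reaction(p, user_profile),
                  reverse=True)[:k]

user = {"politics": 0.9, "music": 0.3}          # invented affinities
posts = [
    {"id": 1, "topics": {"music"},    "outrage_level": 0.0},
    {"id": 2, "topics": {"politics"}, "outrage_level": 0.8},
    {"id": 3, "topics": {"politics"}, "outrage_level": 0.1},
]
print([p["id"] for p in rank_feed(posts, user)])  # → [2, 3, 1]
```

Note that the angriest political post ranks ahead even of a calmer post on the same topic: the objective rewards predicted reaction, not quality.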

Algorithms that maximize attention give an advantage to negative messages. People tend to react more to inputs that land low on the brainstem. Fear and anger produce a lot more engagement and sharing than joy. The result is that the algorithms favor sensational content over substance. Of course, this has always been true for media; hence the old news adage “If it bleeds, it leads.” But for mass media, this was constrained by one-size-fits-all content and by the limitations of delivery platforms. Not so for internet platforms on smartphones. They have created billions of individual channels, each of which can be pushed further into negativity and extremism without the risk of alienating other audience members. To the contrary: the platforms help people self-segregate into like-minded filter bubbles, reducing the risk of exposure to challenging ideas.

It took Brexit for me to begin to see the danger of this dynamic. I’m no expert on British politics, but it seemed likely that Facebook had a big impact on the vote because one side’s message was perfect for the algorithms and the other’s wasn’t. The “Leave” campaign made an absurd promise: that the savings from leaving the European Union would fund a big improvement in the National Health Service. It also exploited xenophobia by casting Brexit as the best way to protect English culture and jobs from immigrants. It was too-good-to-be-true nonsense mixed with fearmongering.

Meanwhile, the Remain campaign was making an appeal to reason. Sharing would have turbocharged Leave’s crude, emotional message far more than Remain’s. I did not see it at the time, but the users most likely to respond to Leave’s messages were probably less wealthy and therefore cheaper for the advertiser to target: the price of Facebook (and Google) ads is determined by auction, and the cost of targeting more upscale consumers gets bid up by actual businesses trying to sell them things. As a consequence, Facebook was a much cheaper and more effective platform for Leave in terms of cost per user reached. And filter bubbles would ensure that people on the Leave side would rarely have their questionable beliefs challenged. Facebook’s model may have had the power to reshape an entire continent.
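The pricing dynamic described in this paragraph can be illustrated with a minimal second-price auction, the general mechanism behind these ad marketplaces. The bid figures below are invented for illustration, not real Facebook prices.

```python
# Minimal sketch of second-price auction pricing: the winner pays
# roughly the runner-up's bid, so audiences that few commercial
# advertisers compete for clear at much lower prices.

def clearing_price(bids):
    """Price paid by the winning bidder (ignoring the small
    increment real auctions add to the second-highest bid)."""
    ordered = sorted(bids, reverse=True)
    return ordered[1] if len(ordered) > 1 else ordered[0]

# Invented CPM bids (dollars per thousand impressions):
upscale_bids = [4.00, 3.80, 3.50, 3.20]  # many retailers competing
downscale_bids = [0.60, 0.40]            # little commercial competition

print(clearing_price(upscale_bids))    # → 3.8
print(clearing_price(downscale_bids))  # → 0.4
```

Under this mechanism, an advertiser targeting the less-contested audience reaches many times more users per dollar, which is the cost asymmetry the paragraph describes.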

But there was one major element to the story that I was still missing.

Shortly after the Brexit vote, I reached out to journalists to validate my concerns about Facebook. At this point, all I had was a suspicion of two things: bad actors were exploiting an unguarded platform; and Facebook’s algorithms may have had a decisive impact on Brexit by favoring negative messages. My Rolodex was a bit dusty, so I emailed my friends Kara Swisher and Walt Mossberg at Recode, the leading tech industry news blog. Unfortunately, they didn’t reply. I tried again in August, and nothing happened.

Meanwhile, the press revealed that the Russians were behind the server hack at the Democratic National Committee and that Trump’s campaign manager had ties to Russian oligarchs close to Vladimir Putin. This would turn out to be the missing piece of my story. As the summer went on, I began noticing more and more examples of troubling things happening on Facebook that might have been prevented had the company accepted responsibility for the actions of third parties—such as financial institutions using Facebook tools to discriminate based on race and religion. In late September, Walt Mossberg finally responded to my email and suggested I write an op-ed describing my concerns. I focused entirely on nonpolitical examples of harm, such as discrimination in housing advertisements, suggesting that Facebook had an obligation to ensure that its platform not be abused. Like most people, I assumed that Clinton would win the election, and I didn’t want my concerns to be dismissed as inconsequential if she did.

My wife recommended that I send what I wrote to Mark Zuckerberg and Sheryl Sandberg before publishing in Recode. Mark and Sheryl were my friends, and my goal was to make them aware of the problems so they could fix them. I certainly wasn’t trying to take down a company in which I still hold equity. I sent them the op-ed on October 30. They each responded the next day. The gist of their messages was the same: We appreciate you reaching out; we think you’re misinterpreting the news; we’re doing great things that you can’t see. Then they connected me to Dan Rose, a longtime Facebook executive with whom I had an excellent relationship. Dan is a great listener and a patient man, but he was unwilling to accept that there might be a systemic issue. Instead, he asserted that Facebook was not a media company, and therefore was not responsible for the actions of third parties.

In the hope that Facebook would respond to my goodwill with a serious effort to solve the problems, I told Dan that I would not publish the op-ed. Then came the U.S. election. The next day, I lost it. I told Dan there was a flaw in Facebook’s business model. The platform was being exploited by a range of bad actors, including supporters of extremism, yet management claimed the company was not responsible. Facebook’s users, I warned, might not always agree. The brand was at risk of becoming toxic. Over the course of many conversations, I urged Dan to protect the platform and its users.

The last conversation we had was in early February 2017. By then there was increasing evidence that the Russians had used a variety of methods to interfere in our election. I formed a simple hypothesis: the Russians likely orchestrated some of the manipulation on Facebook that I had observed back in 2016. That’s when I started looking for allies.

On April 11, I cohosted a technology-oriented show on Bloomberg TV. One of the guests was Tristan Harris, formerly the design ethicist at Google. Tristan had just appeared on 60 Minutes to discuss the public health threat from social networks like Facebook. An expert in persuasive technology, he described the techniques that tech platforms use to create addiction and the ways they exploit that addiction to increase profits. He called it “brain hacking.”

In February 2016, I started to notice a flood of viciously misogynistic anti-Clinton memes originating from Facebook groups supporting Bernie Sanders. I knew how to build engagement organically on Facebook. This was not organic.

The most important tool used by Facebook and Google to hold user attention is filter bubbles. The use of algorithms to give consumers “what they want” leads to an unending stream of posts that confirm each user’s existing beliefs. On Facebook, it’s your news feed, while on Google it’s your individually customized search results. The result is that everyone sees a different version of the internet tailored to create the illusion that everyone else agrees with them. Continuous reinforcement of existing beliefs tends to entrench those beliefs more deeply, while also making them more extreme and resistant to contrary facts. Facebook takes the concept one step further with its “groups” feature, which encourages like-minded users to congregate around shared interests or beliefs. While this ostensibly provides a benefit to users, the larger benefit goes to advertisers, who can target audiences even more effectively.
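The confirmation-driven ranking described above can be illustrated with a toy sketch. This is purely illustrative, assuming a made-up post format and belief scores; it is not Facebook's or Google's actual ranking code, only a minimal model of how scoring content by agreement with a user's inferred beliefs produces a feed that keeps confirming them:

```python
# Toy sketch of a belief-confirming feed ranker (illustrative only;
# not any real platform's algorithm). Posts and users carry stance
# scores per topic, from -1.0 (against) to +1.0 (for).

def rank_feed(posts, user_beliefs):
    """Return posts ordered most-agreeable-first for this user."""
    def agreement(post):
        # Sum the alignment between the post's stance on each topic
        # and the user's current stance on that topic.
        return sum(post["stances"].get(topic, 0.0) * stance
                   for topic, stance in user_beliefs.items())
    return sorted(posts, key=agreement, reverse=True)

posts = [
    {"id": "a", "stances": {"topic_x": +1.0}},   # confirms the belief
    {"id": "b", "stances": {"topic_x": -1.0}},   # challenges the belief
    {"id": "c", "stances": {}},                  # neutral
]
user = {"topic_x": +1.0}

feed = rank_feed(posts, user)
print([p["id"] for p in feed])  # confirming post ranks first, challenging post last
```

Even in this stripped-down form, the dynamic is visible: content that challenges the user's stance is pushed to the bottom, so the user rarely sees it, and each confirming post the user engages with reinforces the inferred belief for the next ranking pass.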

After talking to Tristan, I realized that the problems I had been seeing couldn’t be solved simply by, say, Facebook hiring staff to monitor the content on the site. The problems were inherent in the attention-based, algorithm-driven business model. And what I suspected was Russia’s meddling in 2016 was only a prelude to what we’d see in 2018 and beyond. The level of political discourse, already in the gutter, was going to get even worse.

I asked Tristan if he needed a wingman. We agreed to work together to try to trigger a national conversation about the role of internet platform monopolies in our society, economy, and politics. We recognized that our effort would likely be quixotic, but the fact that Tristan had been on 60 Minutes gave us hope.

Our journey began with a trip to New York City in May, where we spoke with journalists and had a meeting at the ACLU. Tristan found an ally in Arianna Huffington, who introduced him to people like Bill Maher, who invited Tristan to be on his show. A friend introduced me over email to a congressional staffer who offered to arrange a meeting with his boss, a key member of one of the intelligence committees. We were just starting, but we had already found an audience for Tristan’s message.

In July, we went to Washington, D.C., where we met with two members of Congress. They were interested in Tristan’s public health argument as it applied to two issues: Russia’s election meddling, and the giant platforms’ growing monopoly power. That was an eye-opener. If election manipulation and monopoly were what Congress cared about, we would help them understand how internet platforms related to those issues. My past experience as a congressional aide, my long career in investing, and my personal role at Facebook gave me credibility in those meetings, complementing Tristan’s domain expertise.

With respect to the election meddling, we shared a few hypotheses based on our knowledge of how Facebook works. We started with a question: Why was Congress focused exclusively on collusion between Russia and the Trump campaign in 2016? The Russian interference, we reasoned, probably began long before the presidential election campaign itself. We hypothesized that those early efforts likely involved amplifying polarizing issues, such as immigration, white supremacy, gun rights, and secession. (We already knew that the California secession site had been hosted in Russia.) We suggested that Trump had been nominated because he alone among Republicans based his campaign on the kinds of themes the Russians chose for their interference.

We theorized that the Russians had identified a set of users susceptible to their message, used Facebook’s advertising tools to find users with similar profiles, and used ads to persuade those people to join groups dedicated to controversial issues. Facebook’s algorithms would have favored Trump’s crude message and the anti-Clinton conspiracy theories that thrilled his supporters, with the likely consequence that Trump and his backers paid less than Clinton for Facebook advertising per person reached. The ads were less important, though, than what came next: once users were in groups, the Russians could have used fake American troll accounts and computerized “bots” to share incendiary messages and organize events. Trolls and bots impersonating Americans would have created the illusion of greater support for radical ideas than actually existed. Real users would “like” posts shared by trolls and bots and share them on their own news feeds, so that small investments in advertising and memes posted to Facebook groups could reach tens of millions of people. A similar strategy prevailed on other platforms, including Twitter. Both techniques, bots and trolls, take time and money to develop, but the payoff would have been huge.

Our final hypothesis was that 2016 was just the beginning. Without immediate and aggressive action from Washington, bad actors of all kinds would be able to use Facebook and other platforms to manipulate the American electorate in future elections.

These were just hypotheses, but the people we met in Washington heard us out. Thanks to the hard work of journalists and investigators, virtually all of these hypotheses would be confirmed over the ensuing six weeks. Almost every day brought new revelations of how Facebook, Twitter, Google, and other platforms had been manipulated by the Russians.

We now know, for instance, that the Russians indeed exploited topics like Black Lives Matter and white nativism to promote fear and distrust, and that this had the benefit of laying the groundwork for the most divisive presidential candidate in history, Donald Trump. The Russians appear to have invested heavily in weakening the candidacy of Hillary Clinton during the Democratic primary by promoting emotionally charged content to supporters of Bernie Sanders and Jill Stein, as well as to likely Clinton supporters who might be discouraged from voting. Once the nominations were set, the Russians continued to undermine Clinton with social media targeted at likely Democratic voters. We also have evidence now that Russia used its social media tactics to manipulate the Brexit vote. A team of researchers reported in November, for instance, that more than 150,000 Russian-language Twitter accounts posted pro-Leave messages in the run-up to the referendum.

The week before our return visit to Washington in mid-September, we woke up to some surprising news. The group that had been helping us in Washington, the Open Markets team at the think tank New America, had been advocating forcefully for anti-monopoly regulation of internet platforms, including Google. It turns out that Eric Schmidt, an executive at Alphabet, Google’s parent company, is a major New America donor. The think tank cut Open Markets loose. The story line basically read, “Anti-monopoly group fired by liberal think tank due to pressure from monopolist.” (New America disputes this interpretation, maintaining that the group was let go because of a lack of collegiality on the part of its leader, Barry Lynn, who writes often for this magazine.) Getting fired was the best possible evidence of the need for their work, and funders immediately put the team back in business as the Open Markets Institute. Tristan and I joined their advisory board.

Our second trip to Capitol Hill was surreal. This time, we had three jam-packed days of meetings. Everyone we met was already focused on our issues and looking for guidance about how to proceed. We brought with us a new member of the team, Renee DiResta, an expert in how conspiracy theories spread on the internet. Renee described how bad actors plant a rumor on sites like 4chan and Reddit, leverage the disenchanted people on those sites to create buzz, build phony news sites with “press” versions of the rumor, push the story onto Twitter to attract the real media, then blow up the story for the masses on Facebook. It was a sophisticated hacker technique, but not an expensive one. We hypothesized that the Russians were able to manipulate tens of millions of American voters for less than it would cost to buy an F-35 fighter jet.

In Washington, we learned we could help policymakers and their staff members understand the inner workings of Facebook, Google, and Twitter. They needed to get up to speed quickly, and our team was happy to help.

Tristan and I had begun in April with very low expectations. By the end of September, a conversation on the dangers of internet platform monopolies was in full swing. We were only a small part of what made the conversation happen, but it felt good.

Facebook and Google are the most powerful companies in the global economy. Part of their appeal to shareholders is that their gigantic advertising businesses operate with almost no human intervention. Algorithms can be beautiful in mathematical terms, but they are only as good as the people who create them. In the case of Facebook and Google, the algorithms have flaws that are increasingly obvious and dangerous.

Thanks to the U.S. government’s laissez-faire approach to regulation, the internet platforms were able to pursue business strategies that would not have been allowed in prior decades. No one stopped them from using free products to centralize the internet and then replace its core functions. No one stopped them from siphoning off the profits of content creators. No one stopped them from gathering data on every aspect of every user’s internet life. No one stopped them from amassing market share not seen since the days of Standard Oil. No one stopped them from running massive social and psychological experiments on their users. No one demanded that they police their platforms. It has been a sweet deal.

Facebook and Google are now so large that traditional tools of regulation may no longer be effective. The European Union challenged Google’s shopping price comparison engine on antitrust grounds, citing unfair use of Google’s search and AdWords data. The harm was clear: most of Google’s European competitors in the category suffered crippling losses. The most successful survivor lost 80 percent of its market share in one year. The EU won a record $2.7 billion judgment—which Google is appealing. Google investors shrugged at the judgment, and, as far as I can tell, the company has not altered its behavior. The largest antitrust fine in EU history bounced off Google like a spitball off a battleship.

It reads like the plot of a sci-fi novel: a technology celebrated for bringing people together is exploited by a hostile power to drive people apart, undermine democracy, and create misery. This is precisely what happened in the United States during the 2016 election. We had constructed a modern Maginot Line—half the world’s defense spending and cyber-hardened financial centers, all built to ward off attacks from abroad—never imagining that an enemy could infect the minds of our citizens through inventions of our own making, at minimal cost. Not only was the attack an overwhelming success, but it was also a persistent one, as the political party that benefited refuses to acknowledge reality. The attacks continue every day, posing an existential threat to our democratic processes and independence.

We still don’t know the exact degree of collusion between the Russians and the Trump campaign. But the debate over collusion, while important, risks missing what should be an obvious point: Facebook, Google, Twitter, and other platforms were manipulated by the Russians to shift outcomes in Brexit and the U.S. presidential election, and unless major changes are made, they will be manipulated again. Next time, there is no telling who the manipulators will be.

Awareness of the role of Facebook, Google, and others in Russia’s interference in the 2016 election has increased dramatically in recent months, thanks in large part to congressional hearings on October 31 and November 1. This has led to calls for regulation, starting with the introduction of the Honest Ads Act, sponsored by Senators Mark Warner, Amy Klobuchar, and John McCain, which attempts to extend the current regulation of political ads on broadcast networks to online platforms. Facebook and Google responded by reiterating their opposition to government regulation, insisting that it would kill innovation and hurt the country’s global competitiveness, and that self-regulation would produce better results.

But we’ve seen where self-regulation leads, and it isn’t pretty. Unfortunately, there is no regulatory silver bullet. The scope of the problem requires a multi-pronged approach.

First, we must address the resistance to facts created by filter bubbles. Polls suggest that about a third of Americans believe that Russian interference is fake news, despite unanimous agreement to the contrary by the country’s intelligence agencies. Helping those people accept the truth is a priority. I recommend that Facebook, Google, Twitter, and others be required to contact each person touched by Russian content with a personal message that says, “You, and we, were manipulated by the Russians. This really happened, and here is the evidence.” The message would include every Russian message the user received.

This idea, which originated with my colleague Tristan Harris, is based on experience with cults. When you want to deprogram a cult member, it is really important that the call to action come from another member of the cult, ideally the leader. The platforms will claim this is too onerous. Facebook has indicated that up to 126 million Americans were touched by the Russian manipulation on its core platform and another 20 million on Instagram, which it owns. Together those numbers exceed the 137 million Americans who voted in 2016. What Facebook has offered is a portal buried within its Help Center where curious users will be able to find out if they were touched by Russian manipulation through a handful of Facebook groups created by a single troll farm. This falls far short of what is necessary to prevent manipulation in 2018 and beyond. There’s no doubt that the platforms have the technological capacity to reach out to every affected person. No matter the cost, platform companies must absorb it as the price for their carelessness in allowing the manipulation.

Second, the chief executive officers of Facebook, Google, Twitter, and others—not just their lawyers—must testify before congressional committees in open session. As Senator John Kennedy, a Louisiana Republican, demonstrated in the October 31 Senate Judiciary hearing, the general counsel of Facebook in particular did not provide satisfactory answers. This is important not just for the public, but also for another crucial constituency: the employees who keep the tech giants running. While many of the folks who run Silicon Valley are extreme libertarians, the people who work there tend to be idealists. They want to believe what they’re doing is good. Forcing tech CEOs like Mark Zuckerberg to justify the unjustifiable, in public—without the shield of spokespeople or PR spin—would go a long way to puncturing their carefully preserved cults of personality in the eyes of their employees.

These two remedies would only be a first step, of course. We also need regulatory fixes. Here are a few ideas.

First, it’s essential to ban digital bots that impersonate humans. They distort the “public square” in a way that was never possible in history, no matter how many anonymous leaflets you printed. At a minimum, the law could require explicit labeling of all bots, the ability for users to block them, and liability on the part of platform vendors for the harm bots cause.

Second, the platforms should not be allowed to make any acquisitions until they have addressed the damage caused to date, taken steps to prevent harm in the future, and demonstrated that such acquisitions will not result in diminished competition. An underappreciated aspect of the platforms’ growth is their pattern of gobbling up smaller firms—in Facebook’s case, that includes Instagram and WhatsApp; in Google’s, it includes YouTube, Google Maps, AdSense, and many others—and using them to extend their monopoly power.

This is important, because the internet has lost something very valuable. The early internet was designed to be decentralized. It treated all content and all content owners equally. That equality had value in society, as it kept the playing field level and encouraged new entrants. But decentralization had a cost: no one had an incentive to make internet tools easy to use. Frustrated by those tools, users embraced easy-to-use alternatives from Facebook and Google. This allowed the platforms to centralize the internet, inserting themselves between users and content, effectively imposing a tax on both sides. This is a great business model for Facebook and Google—and convenient in the short term for customers—but we are drowning in evidence that there are costs that society may not be able to afford.

Third, the platforms must be transparent about who is behind political and issues-based communication. The Honest Ads Act is a good start, but does not go far enough for two reasons: advertising was a relatively small part of the Russian manipulation; and issues-based advertising played a much larger role than candidate-oriented ads. Transparency with respect to those who sponsor political advertising of all kinds is a step toward rebuilding trust in our political institutions.

Fourth, the platforms must be more transparent about their algorithms. Users deserve to know why they see what they see in their news feeds and search results. If Facebook and Google had to be up-front about the reason you’re seeing conspiracy theories—namely, that it’s good for business—they would be far less likely to stick to that tactic. Allowing third parties to audit the algorithms would go even further toward maintaining transparency. Facebook and Google make millions of editorial choices every hour and must accept responsibility for the consequences of those choices. Consumers should also be able to see what attributes are causing advertisers to target them.

Facebook, Google, and other social media platforms make their money from advertising. As with all ad-supported businesses, that means advertisers are the true customers, while audience members are the product.

Fifth, the platforms should be required to have a more equitable contractual relationship with users. Facebook, Google, and others have asserted unprecedented rights with respect to end-user license agreements (EULAs), the contracts that specify the relationship between platform and user. When you load a new operating system or PC application, you’re confronted with a contract—the EULA—and the requirement that you accept its terms before completing installation. If you don’t want to upgrade, you can continue to use the old version for some time, often years. Not so with internet platforms like Facebook or Google. There, your use of the product comes with implicit acceptance of the latest EULA, which can change at any time. If there are terms you choose not to accept, your only alternative is to abandon use of the product. For Facebook, where users have contributed 100 percent of the content, this non-option is particularly problematic.

All software platforms should be required to offer a legitimate opt-out, one that enables users to stick with the prior version if they do not like the new EULA. “Forking” platforms between old and new versions would have several benefits: increased consumer choice, greater transparency on the EULA, and more care in the rollout of new functionality, among others. It would limit the risk that platforms would run massive social experiments on millions—or billions—of users without appropriate prior notification. Maintaining more than one version of their services would be expensive for Facebook, Google, and the rest, but in software that has always been one of the costs of success. Why should this generation get a pass?

Sixth, we need a limit on the commercial exploitation of consumer data by internet platforms. Customers understand that their “free” use of platforms like Facebook and Google gives the platforms license to exploit personal data. The problem is that platforms are using that data in ways consumers do not understand, and might not accept if they did. For example, Google bought a huge trove of credit card data earlier this year. Facebook uses image-recognition software and third-party tags to identify users in contexts without their involvement and where they might prefer to be anonymous. Not only do the platforms use your data on their own sites, but they also lease it to third parties to use all over the internet. And they will use that data forever, unless someone tells them to stop.

There should be a statute of limitations on the use of consumer data by a platform and its customers. Perhaps that limit should be ninety days, perhaps a year. But at some point, users must have the right to renegotiate the terms of how their data is used.

Seventh, consumers, not the platforms, should own their own data. In the case of Facebook, this includes posts, friends, and events—in short, the entire social graph. Users created this data, so they should have the right to export it to other social networks. Given inertia and the convenience of Facebook, I wouldn’t expect this reform to trigger a mass flight of users. Instead, the likely outcome would be an explosion of innovation and entrepreneurship. Facebook is so powerful that most new entrants would avoid head-on competition in favor of creating sustainable differentiation. Start-ups and established players would build new products that incorporate people’s existing social graphs, forcing Facebook to compete again. It would be analogous to the regulation of the AT&T monopoly’s long-distance business, which led to lower prices and better service for consumers.

Eighth, and finally, we should consider that the time has come to revive the country’s traditional approach to monopoly. Since the Reagan era, antitrust law has operated under the principle that monopoly is not a problem so long as it doesn’t result in higher prices for consumers. Under that framework, Facebook and Google have been allowed to dominate several industries—not just search and social media but also email, video, photos, and digital ad sales, among others—increasing their monopolies by buying potential rivals like YouTube and Instagram. While superficially appealing, this approach ignores costs that don’t show up in a price tag. Addiction to Facebook, YouTube, and other platforms has a cost. Election manipulation has a cost. Reduced innovation and shrinkage of the entrepreneurial economy have a cost. All of these costs are evident today. We can quantify them well enough to appreciate that the costs to consumers of concentration on the internet are unacceptably high.

Increasing awareness of the threat posed by platform monopolies creates an opportunity to reframe the discussion about concentration of market power. Limiting the power of Facebook and Google not only won’t harm America, it will almost certainly unleash levels of creativity and innovation that have not been seen in the technology industry since the early days of, well, Facebook and Google.

Before you dismiss regulation as impossible in the current economic environment, consider this. Eight months ago, when Tristan Harris and I joined forces, hardly anyone was talking about the issues I described above. Now lots of people are talking, including policymakers. Given all the other issues facing the country, it’s hard to be optimistic that we will solve the problems on the internet, but that’s no excuse for inaction. There’s far too much at stake.
