Facebook CEO Mark Zuckerberg. (photo: Drew Angerer/Getty Images)

Facebook's Broken Vows
By Jill Lepore, The New Yorker
26 July 21
How the company’s pledge to bring the world together wound up pulling us apart.
Facebook has a save-the-world mission statement—“to give people the power to build community and bring the world closer together”—that sounds like a better fit for a church, and not some little wood-steepled, white-clapboarded, side-of-the-road number but a castle-in-a-parking-lot megachurch, a big-as-a-city-block cathedral, or, honestly, the Vatican. Mark Zuckerberg, Facebook’s C.E.O., announced this mission the summer after the 2016 U.S. Presidential election, replacing the company’s earlier and no less lofty purpose: “to give people the power to share and make the world more open and connected.” Both versions, like most mission statements, are baloney.
The word “mission” comes from the Latin for “send.” In English, historically, a mission is Christian, and means sending the Holy Spirit out into the world to spread the Word of God: a mission involves saving souls. In the seventeenth century, when “mission” first conveyed something secular, it meant diplomacy: emissaries undertake missions. Scientific and military missions—and the expression “mission accomplished”—date to about the First World War. In 1962, J.F.K. called going to the moon an “untried mission.” “Mission statements” date to the Vietnam War, when the Joint Chiefs of Staff began drafting ever-changing objectives for a war known for its purposelessness. (The TV show “Mission: Impossible” débuted in 1966.) After 1973, and at the urging of the management guru Peter Drucker, businesses started writing mission statements as part of the process of “strategic planning,” another expression Drucker borrowed from the military. Before long, as higher education was becoming corporatized, mission statements crept into university life. “We are on the verge of mission madness,” the Chronicle of Higher Education reported in 1979. A decade later, a management journal announced, “Developing a mission statement is an important first step in the strategic planning process.” But by the nineteen-nineties corporate mission statements had moved from the realm of strategic planning to public relations. That’s a big part of why they’re bullshit. One study from 2002 reported that most managers don’t believe their own companies’ mission statements. Research surveys suggest a rule of thumb: the more ethically dubious the business, the more grandiose and sanctimonious its mission statement.
Facebook’s stated mission amounts to the salvation of humanity. In truth, the purpose of Facebook, a multinational corporation with headquarters in California, is to make money for its investors. Facebook is an advertising agency: it collects data and sells ads. Founded in 2004, it now has a market value of close to a trillion dollars. Since 2006, with the launch of its News Feed, Facebook has also been a media company, one that now employs fifteen thousand “content moderators.” (In the U.S., about a third of the population routinely get their news from Facebook. In other parts of the world, as many as two-thirds do.) Since 2016, Facebook has become interested in election integrity here and elsewhere; the company has thirty-five thousand security specialists in total, many of whom function almost like a U.N. team of elections observers. But its early mantra, “Company over country,” still resonates. The company is, in important respects, larger than any country. Facebook possesses the personal data of more than a quarter of the world’s people, 2.8 billion out of 7.9 billion, and governs the flow of information among them. The number of Facebook users is about the size of the populations of China and India combined. In some corners of the globe, including more than half of African nations, Facebook provides free basic data services, positioning itself as a privately owned utility.
“An Ugly Truth: Inside Facebook’s Battle for Domination” (Harper), by Sheera Frenkel and Cecilia Kang, takes its title from a memo written by a Facebook executive in 2016 and leaked to BuzzFeed News. Andrew Bosworth, who created Facebook’s News Feed, apparently wrote the memo in response to employees’ repeated pleas for a change in the service, which, during the U.S. Presidential election that year, they knew to be prioritizing fake news, like a story that Hillary Clinton was in a coma. Some employees suspected that a lot of these stories were being posted by fake users, and even by foreign actors (which was later discovered to be the case). Bosworth wrote:
So we connect more people. That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools. And still we connect people. The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good. . . . That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in.
Bosworth argued that his memo was meant to provoke debate, not to be taken literally, but plainly it spoke to views held within the company. That’s the downside of a delusional sense of mission: the loss of all ethical bearings.
“An Ugly Truth” is the result of fifteen years of reporting. Frenkel and Kang, award-winning journalists for the Times, conducted interviews with more than four hundred people, mostly Facebook employees, past and present, for more than a thousand hours. Many people who spoke with them were violating nondisclosure agreements. Frenkel and Kang relied, too, on a very leaky spigot of “never-reported emails, memos, and white papers involving or approved by top executives.” They did speak to Facebook’s chief operating officer, Sheryl Sandberg, off the record, but Zuckerberg, who had coöperated with a 2020 book, “Facebook: The Inside Story” (Blue Rider), by the Wired editor Steven Levy, declined to talk to them.
Zuckerberg started the company in 2004, when he was a Harvard sophomore, with this mission statement: “Thefacebook is an online directory that connects people through social networks at colleges.” The record of an online chat is a good reminder that he was, at the time, a teen-ager:
ZUCK: i have over 4000 emails, pictures, addresses, sns
FRIEND: what?! how’d you manage that one?
ZUCK: people just submitted it
ZUCK: i don’t know why
ZUCK: they “trust me”
ZUCK: dumb fucks
Zuckerberg dropped out of college, moved to California, and raised a great deal of venture capital. The network got better, and bigger. Zuckerberg would end meetings by pumping his fist and shouting, “Domination!” New features were rolled out as fast as possible, for the sake of fuelling growth. “Fuck it, ship it” became a company catchphrase. Facebook announced a new mission in 2006, the year it introduced the News Feed: “Facebook is a social utility that connects you with the people around you.” Growth in the number of users mattered, but so did another measurement: the amount of time a user spent on the site. The point of the News Feed was to drive that second metric.
“Facebook was the world’s biggest testing lab, with a quarter of the planet’s population as its test subjects,” Frenkel and Kang write. Zuckerberg was particularly obsessed with regular surveys that asked users whether Facebook is “good for the world” (a tally abbreviated as GFW). When Facebook implemented such changes as demoting lies in the News Feed, the GFW went up, but the time users spent on Facebook went down. Zuckerberg decided to reverse the changes.
Meanwhile, he talked, more and more, about his sense of mission, each new user another saved soul. He toured the world promoting the idea. “For almost ten years, Facebook has been on a mission to make the world more open and connected,” Zuckerberg wrote in 2013, in a Facebook post called “Is Connectivity a Human Right?” It reads something like a papal encyclical. Zuckerberg was abroad when Sandberg, newly appointed Facebook’s chief operating officer—a protégée of Lawrence Summers’s and a former Google vice-president—established an ambitious growth model. But, Frenkel and Kang argue, “as Facebook entered new nations, no one was charged with monitoring the rollouts with an eye toward the complex political and cultural dynamics within those countries. No one was considering how the platform might be abused in a nation like Myanmar, or asking if they had enough content moderators to review the hundreds of new languages in which Facebook users across the planet would be posting.” Facebook, inadvertently, inflamed the conflict; its algorithms reward emotion, the more heated the better. Eventually, the United Nations concluded that social media played a “determining role” in the genocide and humanitarian crisis in Myanmar—with some twenty-four thousand Rohingya being killed, and seven hundred thousand becoming refugees. “We need to do more,” Zuckerberg and Sandberg would say, again, and again, and again. “We need to do better.”
In 2015, by which time anyone paying attention could see that the News Feed was wreaking havoc on journalism, especially local news reporting, a new hire named Andrew Anker proposed adding a paywall option to a feature called “Instant Articles.” “That meant that in order to keep viewing stories on a publication, readers would have to be subscribers,” Levy writes. “Publishers had been begging for something like that to monetize their stories on Facebook.” But, Levy reports, when Anker pitched the idea to Zuckerberg, the C.E.O. cut him off. “Facebook’s mission is to make the world more open and connected,” Zuckerberg said. “I don’t understand how subscription would make the world either more open or connected.”
By the next year, more than half of all Americans were getting their news from social media. During the 2016 Presidential election, many were wildly misinformed. Russian hackers set up hundreds of fake Facebook accounts. They bought political ads. “I don’t want anyone to use our tools to undermine democracy,” Zuckerberg said. “That’s not what we stand for.” But, as Frenkel and Kang observe, “Trump and the Russian hackers had separately come to the same conclusion: they could exploit Facebook’s algorithms to work in their favor.” It didn’t matter if a user, or a post, or an article approved or disapproved of something Trump said or did; reacting to it, in any way, elevated its ranking, and the more intense the reaction, the higher the ranking. Trump became inescapable. The News Feed became a Trump Feed.
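To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python of engagement-weighted ranking of the kind the passage describes. The reaction types, weights, and function names are assumptions invented for this example, not Facebook's actual system; the point is simply that every reaction, approving or angry, lifts a post, and the more intense reactions lift it most.

# Illustrative sketch only: a toy engagement-weighted ranker of the kind the
# passage describes. The reaction types and weights are hypothetical
# assumptions, not Facebook's actual News Feed code.

REACTION_WEIGHTS = {
    "like": 1.0,
    "love": 2.0,
    "angry": 2.0,    # outrage counts as much as affection
    "comment": 3.0,
    "share": 4.0,
}

def engagement_score(post: dict) -> float:
    """Sum weighted reaction counts; accuracy never enters the calculation."""
    return sum(REACTION_WEIGHTS.get(kind, 0.0) * count
               for kind, count in post.get("reactions", {}).items())

def rank_feed(posts: list[dict]) -> list[dict]:
    """Order posts by engagement alone, most engaging first."""
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = [
        {"id": "sober-report", "reactions": {"like": 120, "comment": 10}},
        {"id": "outrage-bait", "reactions": {"angry": 300, "comment": 90, "share": 40}},
    ]
    print([p["id"] for p in rank_feed(feed)])  # the angrier post ranks first

Under any weighting of this shape, a post that provokes is promoted, whether or not it is true.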
In 2017, Zuckerberg went on a listening tour of the United States. “My work is about connecting the world and giving everyone a voice,” he announced, messianically. “I want to personally hear more of those voices this year.” He gave motivational speeches. “We have to build a world where every single person has a sense of purpose and community—that’s how we’ll bring the world closer together,” he told a crowd of Facebook-group administrators. “I know we can do this!” And he came up with a new mission statement.
“An Ugly Truth” is a work of muckraking, a form of investigative journalism perfected by Ida Tarbell in a series of essays published in McClure’s between 1902 and 1904 about John D. Rockefeller’s company, Standard Oil. When Samuel McClure decided to assign a big piece on monopolies, Tarbell suggested the sugar trust, but, as Steve Weinberg reported in his 2008 book, “Taking on the Trust,” McClure wanted her to write about Standard Oil. That was partly because it was such a good story, and partly because of Tarbell’s family history: she’d grown up near an oil field, and Rockefeller had more or less put her father out of business.
Standard Oil, founded in 1870, had, like Facebook, faced scrutiny of its business practices from the start. In 1872 and 1876, it had been the subject of congressional hearings; in 1879, Rockefeller was called to hearings before committees in Pennsylvania, New York, and Ohio; Standard Oil executives were repeatedly summoned by the Interstate Commerce Commission after its establishment, in 1887; the company was investigated by Congress again in 1888, and by Ohio for more than a decade, and was the subject of a vast number of private suits. Earlier reporters had tried to get the goods, too. In 1881, the Chicago Tribune reporter Henry Demarest Lloyd wrote an article for The Atlantic called “The Story of a Great Monopoly.” Lloyd accused the oil trust of bribing politicians, having, for instance, “done everything with the Pennsylvania legislature except refine it.” He concluded: “America has the proud satisfaction of having furnished the world with the greatest, wisest and meanest monopoly known to history.”
Lloyd wrote something between an essay and a polemic. Tarbell took a different tack, drawing on research skills she’d acquired as a biographer of Lincoln. “Neither Standard Oil and Rockefeller nor any powerful American institution had ever encountered a journalist like Tarbell,” Weinberg writes. She also, in something of a first, revealed her sources to readers, explaining that she had gone to state and federal legislatures and courthouses and got the records of all those lawsuits and investigations and even all those private lawsuits, “the testimony of which,” she wrote, “is still in manuscript in the files of the courts where the suits were tried.” She dug up old newspaper stories (quite difficult to obtain in those days) and wrote to Standard Oil’s competitors, asking them to send any correspondence that might cast light on Rockefeller’s anti-competitive practices. She tried, too, to talk to executives at Standard Oil, but, she wrote, “I had been met with that formulated chatter used by those who have accepted a creed.” Finally, she found a source inside the company, Henry Rogers, who had known of her father. As Stephanie Gorton writes in her recent book, “Citizen Reporters,” Tarbell “went to the Standard Oil offices at 26 Broadway regularly for two years. Each time, she entered the imposing colonnaded building and was immediately whisked by an assistant from the lobby via a circuitous and private route to Rogers’s office, kept out of sight from Standard Oil employees who might recognize her, and spoken to by no one but Rogers and his secretary.” Because McClure’s published the work serially, the evidence kept coming; even as Tarbell was writing, disgruntled competitors and employees went on sending her letters and memos. As the Boston Globe put it, she was “writing unfinished history.”
On the subject of John D. Rockefeller, Tarbell proved scathing. “ ‘The most important man in the world,’ a great and serious newspaper passionately devoted to democracy calls him, and unquestionably this is the popular measure of him,” she wrote. “His importance lies not so much in the fact that he is the richest individual in the world. . . . It lies in the fact that his wealth, and the power springing from it, appeal to the most universal and powerful passion in this country—the passion for money.” In sum, “our national life is on every side distinctly poorer, uglier, meaner for the kind of influence he exercises.”
On reading the series, Lloyd wrote to her, “When you get through with ‘Johnnie,’ I don’t think there will be very much left of him except something resembling one of his own grease spots.” Critics accused Tarbell of being mean-spirited. A review in The Nation claimed, “To stir up envy, to arouse prejudice, to inflame passion, to appeal to ignorance, to magnify evils, to charge corruption—these seem to be the methods in favor with too many writers who profess a desire to reform society.” In 1906, Theodore Roosevelt coined the term “muckraking” as a slur. “There is in America today a distinct prejudice in favor of those who make the accusations,” Walter Lippmann observed, of Tarbell’s form of journalism, admitting that “if business and politics really served American need, you could never induce people to believe so many accusations against them.” Few could dispute Tarbell’s evidence, especially after she published the series of articles as a book of four hundred and six pages, with thirty-six appendices stretching across a hundred and forty pages.
Tarbell hadn’t enjoyed taking down Standard Oil. “It was just one of those things that had to be done,” she wrote. “I trust that it has not been useless.” It had not been useless. In 1911, the U.S. Supreme Court ordered the dissolution of Standard Oil.
The year McClure’s published the final installment of Tarbell’s series, Rockefeller’s son, John, Jr., on the threshold of inheriting one of the world’s greatest fortunes, suffered a nervous breakdown. Shortly before the breakup of his father’s company, Rockefeller, Jr., a devout and earnest Christian, stepped away from any role in Standard Oil or its successor firms; he turned his attention to philanthropy, guided, in part, by Ivy Lee, his father’s public-relations manager. In 1920, at Madison Avenue Baptist Church, before an audience of twelve hundred clergymen, he announced that he had found a new calling, as a booster and chief underwriter of a utopian, ecumenical Protestant organization called the Interchurch World Movement. “When a vast multitude of people come together earnestly and prayerfully,” he told the crowd, “there must be developed an outpouring of spiritual power such as this land has never before known.” In a letter to his father, asking him for tens of millions of dollars to give to the cause, the younger Rockefeller wrote, “I do not think we can overestimate the importance of this Movement. As I see it, it is capable of having a much more far-reaching influence than the League of Nations in bringing about peace, contentment, goodwill, and prosperity among the people of the earth.” The Interchurch World Movement, in short, aimed to give people the power to build community and bring the world closer together. It failed. Rockefeller repurposed its funds for Christian missions.
“Our mission is to give people the power to build community and bring the world closer together” is a statement to be found in Facebook’s Terms of Service; everyone who uses Facebook implicitly consents to this mission. During the years of the company’s ascent, the world has witnessed a loneliness epidemic, the growth of political extremism and political violence, widening political polarization, the rise of authoritarianism, the decline of democracy, a catastrophic crisis in journalism, and an unprecedented rise in propaganda, fake news, and misinformation. By no means is Facebook responsible for these calamities, but evidence implicates the company as a contributor to each of them. In July, President Biden said that misinformation about covid-19 on Facebook “is killing people.”
Collecting data and selling ads does not build community, and it turns out that bringing people closer together, at least in the way Facebook does it, makes it easier for them to hurt one another. Facebook wouldn’t be so successful if people didn’t love using it, sharing family photographs, joining groups, reading curated news, and even running small businesses. But studies have consistently shown that the more time people spend on Facebook the worse their mental health becomes; Facebooking is also correlated with increased sedentariness, a diminishment of meaningful face-to-face relationships, and a decline in real-world social activities. Efforts to call Zuckerberg and Sandberg to account and get the company to stop doing harm have nearly all ended in failure. Employees and executives have tried in vain to change the company’s policies and, especially, its algorithms. Congress has held hearings. Trustbusters have tried to break the company up. Regulators have attempted to impose rules on it. And journalists have written exposés. But, given how profoundly Facebook itself has undermined journalism, it’s hard to see how Frenkel and Kang’s work, or anyone else’s, could have a Tarbell-size effect.
“If what you care about is democracy and elections,” Mark Zuckerberg said in 2019, “then you want a company like us to be able to invest billions of dollars a year, like we are, in building really advanced tools to fight election interference.” During the next year’s Presidential election, Frenkel and Kang report, “Trump was the single-largest spender on political ads on Facebook.” His Facebook page was busier than those of the major networks, BuzzFeed, the Washington Post, and the New York Times taken together. Over the protests of many Facebook employees, Zuckerberg had adopted, and stuck to, a policy of not subjecting any political advertisements to fact-checking. Refusing to be “an arbiter of truth,” Facebook instead established itself as a disseminator of misinformation.
On January 27, 2021, three weeks after the insurrection at the U.S. Capitol, Zuckerberg, having suspended Trump’s account, renewed Facebook’s commitments: “We’re going to continue to focus on helping millions more people participate in healthy communities, and we’re going to focus even more on being a force for bringing people closer together.” Neither a record-setting five-billion-dollar penalty for privacy violations nor the latest antitrust efforts have managed to check one of the world’s most dangerous monopolies. Billions of people remain, instead, in the tightfisted, mechanical grip of its soul-saving mission.

Former Clinton labor secretary Robert Reich. (photo: Steve Russell/Toronto Star)

Why Isn't Joe Biden Doing All He Can to Protect American Democracy?
By Robert Reich, Guardian UK
26 July 21
Both parties are beholden to an anti-democratic coalition. This is stopping real change
You’d think Biden and the Democratic party leadership would do everything in their power to stop Republicans from undermining democracy.
So far this year, the Republican party has passed roughly 30 laws in states across the country that will make voting harder, especially in Black and Latino communities. With Trump’s baseless claim that the 2020 election was stolen, Republicans are stoking white people’s fears that a growing non-white population is usurping their dominance.
Yet while Biden and Democratic leaders are openly negotiating with holdout senators for Biden’s stimulus and infrastructure proposals, they aren’t exerting similar pressure when it comes to voting rights and elections. In fact, Biden now says he won’t take on the filibuster, which stands firmly in the way.
What gives? Part of the explanation, I think, lies with an outside group that has almost as much influence on the Democratic party as on the Republican, and which isn’t particularly enthusiastic about election reform: the moneyed interests bankrolling both parties.
A more robust democracy would make it harder for the wealthy to keep their taxes low and profits high. So at the same time white supremacists have been whipping up white fears about non-whites usurping their dominance, America’s wealthy have been spending vast sums on campaign donations and lobbyists to prevent a majority from usurping their money.
They’re now whipping up resistance among congressional Democrats to Biden’s plan to tax capital gains at 39.6% – up from 20% – for those earning more than $1m, and they’re on the way to restoring the federal tax deduction for state and local taxes, of which they’re the biggest beneficiaries.
In recent years these wealth supremacists, as they might be called, have quietly joined white supremacists to become a powerful anti-democracy coalition. Some have backed white supremacists’ efforts to divide poor and working-class whites from poor and working-class Black and brown people, so they don’t look upward and see where most of the economic gains have been going and don’t join together to demand a fair share of those gains.
Similarly, white supremacists have quietly depended on wealth supremacists to donate to lawmakers who limit voting rights, so people of color continue to be second-class citizens. It’s no accident that six months after the insurrection, dozens of giant corporations that promised not to fund members of Congress who refused to certify Biden as president are now back funding them and their anti-voting rights agenda.
Donald Trump was put into office by this anti-democracy coalition. According to Forbes, 9% of America’s billionaires, together worth a combined $210bn, pitched in to cover the costs of Trump’s 2020 campaign. During his presidency Trump gave both parts of the coalition what they wanted most: tax cuts and regulatory rollbacks for the wealth supremacists; legitimacy for the white supremacists.
The coalition is now the core of the Republican party, which stands for little more than voter suppression based on Trump’s big lie that the 2020 election was stolen, and tax cuts for the wealthy and their corporations.
Meanwhile, as wealth supremacists have accumulated a larger share of the nation’s income and wealth than at any time in more than a century, they’ve used a portion of that wealth to bribe lawmakers not to raise their taxes. It was recently reported that several American billionaires have paid only minimal or no federal income tax at all.
Tragically, the supreme court is supporting both the white supremacists and wealth supremacists. Since Chief Justice John Roberts and Justice Samuel Alito joined in 2005 and 2006, respectively, the court has been whittling away voting rights while enlarging the rights of the wealthy to shower money on lawmakers. The conservative majority has been literally making it easier to buy elections and harder to vote in them.
The Democrats’ proposed For the People Act admirably takes on both parts of the coalition. It sets minimum national standards for voting, and it seeks to get big money out of politics through public financing of election campaigns.
Yet this comprehensiveness may explain why the Act is now stalled in the Senate. Biden and Democratic leaders are firmly against white supremacists but are not impervious to the wishes of wealth supremacists. After all, to win elections they need likely Democrats to vote but also need big money to finance their campaigns.
Some progressives have suggested a carve-out to the filibuster solely for voting rights. This might constrain the white supremacists but would do nothing to protect American democracy from the wealth supremacists.
If democracy is to be preserved, both parts of the anti-democracy coalition must be stopped.

Merrick Garland. (photo: AP)

The Loopholes May Be Smaller in the Justice Department's New Media Rules, but They're Still There
By James Risen, The Washington Post
26 July 21
The news media is lavishing praise on the new guidelines issued by Attorney General Merrick Garland to limit when prosecutors go after journalists’ phone and email records. The guidelines replace rules set out in January 2015 by then-Attorney General Eric Holder, designed to restrict the ability of prosecutors to seize phone records and other data from reporters when prosecutors were seeking to identify their sources in leak investigations. The Holder revisions followed an outcry from the news media after disclosures that the Justice Department had secretly obtained the phone records of Associated Press reporters in one leak investigation and labeled a Fox News reporter a “co-conspirator” in another.
When the 2015 revisions were announced, Holder was praised for taking action to protect reporters from government intrusion. But it turned out that the loopholes in Holder’s guidelines were big enough to drive a Mack truck through — as President Donald Trump’s Justice Department did.
In 2017, Jeff Sessions, Trump’s first attorney general, said he was reviewing Holder’s guidelines to see whether they needed to be changed to make it easier for prosecutors to crack down on leaks. But in the end, the Trump Justice Department never bothered to revise the guidelines. They were still in effect when the department secretly obtained the phone records of reporters at The Post, the New York Times and CNN last year. Holder’s guidelines proved to be little more than a speed bump on the path to conducting secret surveillance of journalists.
Garland’s guidelines came in response to reports uncovering those efforts and the Biden administration’s initial move to continue pursuing the Trump-era subpoenas.
News media executives are right to praise Garland for getting rid of the ineffective Holder rules. But as someone who came close to being jailed in a seven-year battle with the Bush and the Obama administrations over whether I would be forced to testify about my confidential sources in a leak prosecution, I think it is too early to sing the praises of Garland’s new rules. Federal prosecutors are quite capable of finding new ways to undercut press freedom.
On paper, Garland’s guidelines look much better than Holder’s version, with wording that seems to include a more categorical prohibition on going after reporters.
Perhaps the most significant change is that Garland has dropped a “balancing test,” in which prosecutors could weigh supposed national security interests against the rights of a journalist in deciding whether to subpoena reporters or their communications. “It is really a substantial rethinking of the DOJ-press relationship,” said Bruce Brown, executive director of the Reporters Committee for Freedom of the Press.
But Garland’s guidelines still have loopholes, just smaller and less obvious. They allow the targeting of reporters and their communications when “necessary to prevent an imminent risk of death or serious bodily harm” from terrorism, as well as attacks on “critical infrastructure.” I clearly remember the years after 9/11 when the government kept the American public in fear over supposed plots to blow up the Brooklyn Bridge and other components of “critical infrastructure.” Will these guidelines wilt in the face of the next national security crisis?
More ominously, the guidelines seem to be in direct conflict with the Biden administration’s own efforts to prosecute Julian Assange, the WikiLeaks founder indicted by the Trump Justice Department under the Espionage Act.
The Garland guidelines ban the use of subpoenas against journalists even when they have “possessed or published” classified information. But those are precisely the grounds upon which the Justice Department is seeking to prosecute Assange. The 2019 indictment of Assange charged that he “was complicit” in “unlawfully obtaining and disclosing classified documents related to the national defense.” Assange was accused of obtaining classified documents from former Army intelligence analyst Chelsea Manning and then publishing those documents on WikiLeaks.
In January, a British judge blocked Assange’s extradition, but rather than drop the Trump-era case, the Biden Justice Department appealed.
Most depressing to me is that Garland’s new guidelines have received plaudits even as the Justice Department continues to prosecute and imprison journalists’ sources. The department is seeking a nine-year prison sentence for Daniel Hale, a former Air Force analyst who allegedly leaked information to the Intercept about targeted drone killings by the United States in the war on terrorism. That would be the longest prison sentence ever in a case involving a leak to the press.
Constant leak prosecutions are now accepted as a fact of life by the news media, when the truth is they almost never occurred before 9/11. Genuine reform of the way the Justice Department deals with the press will require Garland to return to the era before our endless wars and stop prosecuting whistleblowers who help reporters do their jobs.

American author of science fiction Isaac Asimov. (photo: Getty Images/Salon)

Artificial Intelligence Wants You (and Your Job)
By John Feffer, TomDispatch
26 July 21
[As many of you may remember, Dispatch Books has long been publishing John Feffer’s Splinterlands trilogy, his dystopian novels that foresaw so much that’s since engulfed us. His first volume, Splinterlands, was published in 2016, Frostlands in 2018, and the final must-read book, Songlands, is now out. Of it, Adam Hochschild has written: “An intriguing conclusion to a worthy trilogy. Feffer leaps far into the future in this book, but his view of it is enriched by a quirky, sensitive understanding of our world as it is — both its dangers and its possibilities.” Make sure, at the very least, to order yourself a copy. Any of you who might, however, like to support TomDispatch in return for your own signed, personalized Songlands, should go to our donation page and contribute at least $100 (or, if you live outside the U.S.A., $125) and it’ll be yours. Truly, you won’t regret it. In fact, given the ever-hotter world we find ourselves in, it couldn’t be a more appropriate book to read! Tom]
In my younger years, I had significant experience with futuristic worlds, sometimes of the grimmest sort. After all, I went to the moon with Jules Verne; saw London being destroyed with H.G. Wells; met my first robot with Isaac Asimov; faced the apocalyptic world of those aggressively poisonous plants, the Triffids, with John Wyndham; and met Big Brother with George Orwell. Yet, from pandemics to climate change, social media to the robotization of the planet that TomDispatch regular John Feffer describes today, nothing that I read once upon a time, no matter how futuristic, no matter how strange or apocalyptic, prepared me for the everyday world I now find myself in at age 77.
Back in the days of the pen and manual typewriter (remember, I’ve been an editor most of my life), if you had told me that, were I someday to mistakenly spell “life” as “kife,” the spell-check program on my computer (yes, an actual computer!) would promptly underline it in red to let me know that I had goofed, I would never have believed you. I, edited incessantly by a machine? Not on your life, or perhaps I should say: not until it became part of my seldom-thought-about everyday life. Nor, of course, could you have convinced me that someday I would be able to carry my total communications system in my pocket and more or less talk to anyone I know anywhere, anytime. Had you suggested that, then, I would undoubtedly have laughed you out of the room.
And yet here I am, living in an online world I barely grasp in a version of everyday life that’s left more youthful thoughts about the future in the dust. And now, Feffer has the nerve to fill me in on a future world to be in which, functionally, a robot may be carrying the equivalent of me around in its pocket or simply leave beings like me in a ditch somewhere along the way. Apocalypse then? I shudder to think. Read his piece and see if you don’t shudder, too. Tom
-Tom Engelhardt, TomDispatch
Artificial Intelligence Wants You (and Your Job)
We’d Better Control Machines Before They Control Us
My wife and I were recently driving in Virginia, amazed yet again that the GPS technology on our phones could guide us through a thicket of highways, around road accidents, and toward our precise destination. The artificial intelligence (AI) behind the soothing voice telling us where to turn has replaced passenger-seat navigators, maps, even traffic updates on the radio. How on earth did we survive before this technology arrived in our lives? We survived, of course, but were quite literally lost some of the time.
My reverie was interrupted by a toll booth. It was empty, as were all the other booths at this particular toll plaza. Most cars zipped through with E-Z passes, as one automated device seamlessly communicated with another. Unfortunately, our rental car didn’t have one.
So I prepared to pay by credit card, but the booth lacked a credit-card reader.
Okay, I thought, as I pulled out my wallet, I’ll use cash to cover the $3.25.
As it happened, that booth took only coins and who drives around with 13 quarters in his or her pocket?
I would have liked to ask someone that very question, but I was, of course, surrounded by mute machines. So, I simply drove through the electronic stile, preparing myself for the bill that would arrive in the mail once that plaza’s automated system photographed and traced our license plate.
In a thoroughly mundane fashion, I’d just experienced the age-old conflict between the limiting and liberating sides of technology. The arrowhead that can get you food for dinner might ultimately end up lodged in your own skull. The car that transports you to a beachside holiday contributes to the rising tides — by way of carbon emissions and elevated temperatures — that may someday wash away that very coastal gem of a place. The laptop computer that plugs you into the cyberworld also serves as the conduit through which hackers can steal your identity and zero out your bank account.
In the previous century, technology reached a true watershed moment when humans, harnessing the power of the atom, also acquired the capacity to destroy the entire planet. Now, thanks to AI, technology is hurtling us toward a new inflection point.
Science-fiction writers and technologists have long worried about a future in which robots, achieving sentience, take over the planet. The creation of a machine with human-like intelligence that could someday fool us into believing it’s one of us has often been described, with no small measure of trepidation, as the “singularity.” Respectable scientists like Stephen Hawking have argued that such a singularity will, in fact, mark the “end of the human race.”
This will not be some impossibly remote event like the sun blowing up in a supernova several billion years from now. According to one poll, AI researchers reckon that there’s at least a 50-50 chance that the singularity will occur by 2050. In other words, if pessimists like Hawking are right, it’s odds on that robots will dispatch humanity before the climate crisis does.
Neither the artificial intelligence that powers GPS nor the kind that controlled that frustrating toll plaza has yet attained anything like human-level intelligence — not even close. But in many ways, such dumb robots are already taking over the world. Automation is currently displacing millions of workers, including those former tollbooth operators. “Smart” machines like unmanned aerial vehicles have become an indispensable part of waging war. AI systems are increasingly being deployed to monitor our every move on the Internet, through our phones, and whenever we venture into public space. Algorithms are replacing teaching assistants in the classroom and influencing sentencing in courtrooms. Some of the loneliest among us have already become dependent on robot pets.
As AI capabilities continue to improve, the inescapable political question will become: to what extent can such technologies be curbed and regulated? Yes, the nuclear genie is out of the bottle as are other technologies — biological and chemical — capable of causing mass destruction of a kind previously unimaginable on this planet. With AI, however, that day of singularity is still in the future, even if a rapidly approaching one. It should still be possible, at least theoretically, to control such an outcome before there’s nothing to do but play the whack-a-mole game of non-proliferation after the fact.
As long as humans continue to behave badly on a global scale — war, genocide, planet-threatening carbon emissions — it’s difficult to imagine that anything we create, however intelligent, will act differently. And yet we continue to dream that some deus in machina, a god in the machine, could appear as if by magic to save us from ourselves.
Taming AI?
In the early 1940s, science fiction writer Isaac Asimov formulated his famed three laws of robotics: that robots were not to harm humans, directly or indirectly; that they must obey our commands (unless doing so violates the first law); and that they must safeguard their own existence (unless self-preservation contravenes the first two laws).
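As a way of seeing the precedence Asimov built in, here is a small, hypothetical Python sketch that treats the three laws as an ordered veto chain. The Action fields and function names are invented for illustration; the real laws are far richer than three booleans, but the ordering, each law yielding to the ones above it, is their heart.

# Illustrative sketch only: Asimov's three laws as an ordered veto chain.
# The Action model below is a hypothetical simplification made up for this
# example; what matters is that each law yields to the ones above it.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # through action or inaction
    ordered_by_human: bool = False
    endangers_robot: bool = False

def permitted(action: Action) -> bool:
    # First Law: never allow a human to come to harm. Nothing overrides this.
    if action.harms_human:
        return False
    # Second Law: obey human orders, unless obeying would violate the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: protect the robot's own existence, unless that conflicts with
    # the First or Second Law (both already handled above).
    return not action.endangers_robot

# An order that harms a human is refused; an order that merely endangers the
# robot is obeyed, because the Second Law outranks the Third.
print(permitted(Action(harms_human=True, ordered_by_human=True)))      # False
print(permitted(Action(ordered_by_human=True, endangers_robot=True)))  # True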
Any number of writers have attempted to update Asimov. The latest is legal scholar Frank Pasquale, who has devised four laws to replace Asimov’s three. Since he’s a lawyer not a futurist, Pasquale is more concerned with controlling the robots of today than hypothesizing about the machines of tomorrow. He argues that robots and AI should help professionals, not replace them; that they should not counterfeit humans; that they should never become part of any kind of arms race; and that their creators, controllers, and owners should always be transparent.
Pasquale’s “laws,” however, run counter to the artificial-intelligence trends of our moment. The prevailing AI ethos mirrors what could be considered the prime directive of Silicon Valley: move fast and break things. This philosophy of disruption demands, above all, that technology continuously drive down labor costs and regularly render itself obsolescent.
In the global economy, AI indeed helps certain professionals — like Facebook’s Mark Zuckerberg and Amazon’s Jeff Bezos, who just happen to be among the richest people on the planet — but it’s also replacing millions of us. In the military sphere, automation is driving boots off the ground and eyes into the sky in a coming robotic world of war. And whether it’s Siri, the bots that guide increasingly frustrated callers through automated phone trees, or the AI that checks out Facebook posts, the aim has been to counterfeit human beings — “machines like me,” as Ian McEwan called them in his 2019 novel of that title — while concealing the strings that connect the creation to its creator.
Pasquale wants to apply the brakes on a train that has not only left the station but no longer is under the control of the engine driver. It’s not difficult to imagine where such a runaway phenomenon could end up and techno-pessimists have taken a perverse delight in describing the resulting cataclysm. In his book Superintelligence, for instance, Nick Bostrom writes about a sandstorm of self-replicating nanorobots that chokes every living thing on the planet — the so-called grey goo problem — and an AI that seizes power by “hijacking political processes.”
Since they would be interested only in self-preservation and replication, not protecting humanity or following its orders, such sentient machines would clearly tear up Asimov’s rulebook. Futurists have leapt into the breach. For instance, Ray Kurzweil, who predicted in his 2005 book The Singularity Is Near that a robot would attain sentience by about 2045, has proposed a “ban on self-replicating physical entities that contain their own codes for self-replication.” Elon Musk, another billionaire industrialist who’s no enemy of innovation, has called AI humanity’s “biggest existential threat” and has come out in favor of a ban on future killer robots.
To prevent the various worst-case scenarios, the European Union has proposed to control AI according to degree of risk. Some products that fall in the EU’s “high risk” category would have to get a kind of Good Housekeeping seal of approval (the Conformité Européenne). AI systems “considered a clear threat to the safety, livelihoods, and rights of people,” on the other hand, would be subject to an outright ban. Such clear-and-present dangers would include, for instance, biometric identification that captures personal data by such means as facial recognition, as well as versions of China’s social credit system where AI helps track individuals and evaluate their overall trustworthiness.
Techno-optimists have predictably lambasted what they consider European overreach. Such controls on AI, they believe, will put a damper on R&D and, if the United States follows suit, allow China to secure an insuperable technological edge in the field. “If the member states of the EU — and their allies across the Atlantic — are serious about competing with China and retaining their power status (as well as the quality of life they provide to their citizens),” writes entrepreneur Sid Mohasseb in Newsweek, “they need to call for a redraft of these regulations, with growth and competition being seen as at least as important as regulation and safety.”
Mohasseb’s concerns are, however, misleading. The regulators he fears so much are, in fact, now playing a game of catch-up. In the economy and on the battlefield, to take just two spheres of human activity, AI has already become indispensable.
The Automation of Globalization
The ongoing Covid-19 pandemic has exposed the fragility of global supply chains. The world economy nearly ground to a halt in 2020 for one major reason: the health of human workers. The spread of infection, the risk of contagion, and the efforts to contain the pandemic all removed workers from the labor force, sometimes temporarily, sometimes permanently. Factories shut down, gaps widened in transportation networks, and shops lost business to online sellers.
A desire to cut labor costs, a major contributor to a product’s price tag, has driven corporations to look for cheaper workers overseas. For such cost-cutters, eliminating workers altogether is an even more beguiling prospect. Well before the pandemic hit, corporations had begun to turn to automation. By 2030, up to 45 million U.S. workers will be displaced by robots. The World Bank estimates that they will eventually replace an astounding 85% of the jobs in Ethiopia, 77% in China, and 72% in Thailand.
The pandemic not only accelerated this trend, but increased economic inequality as well because, at least for now, robots tend to replace the least skilled workers. In a survey conducted by the World Economic Forum, 43% of businesses indicated that they would reduce their workforces through the increased use of technology. “Since the pandemic hit,” reports NBC News,
“food manufacturers ramped up their automation, allowing facilities to maintain output while social distancing. Factories digitized controls on their machines so they could be remotely operated by workers working from home or another location. New sensors were installed that can flag, or predict, failures, allowing teams of inspectors operating on a schedule to be reduced to an as-needed maintenance crew.”
In an ideal world, robots and AI would increasingly take on all the dirty, dangerous, and demeaning jobs globally, freeing humans to do more interesting work. In the real world, however, automation is often making jobs dirtier and more dangerous by, for instance, speeding up the work done by the remaining human labor force. Meanwhile, robots are beginning to encroach on what’s usually thought of as the more interesting kinds of work done by, for example, architects and product designers.
In some cases, AI has even replaced managers. A contract driver for Amazon, Stephen Normandin, discovered that the AI system that monitored his efficiency as a deliveryman also used an automated email to fire him when it decided he wasn’t up to snuff. Jeff Bezos may be stepping down as chief executive of Amazon, but robots are quickly climbing its corporate ladder and could prove at least as ruthless as he’s been, if not more so.
Mobilizing against such a robot replacement army could prove particularly difficult as corporate executives aren’t the only ones putting out the welcome mat. Since fully automated manufacturing in “dark factories” doesn’t require lighting, heating, or a workforce that commutes to the site by car, that kind of production can reduce a country’s carbon footprint — a potentially enticing factor for “green growth” advocates and politicians desperate to meet their Paris climate targets.
It’s possible that sentient robots won’t need to devise ingenious stratagems for taking over the world. Humans may prove all too willing to give semi-intelligent machines the keys to the kingdom.
The New Fog of War
The 2020 war between Armenia and Azerbaijan proved to be unlike any previous military conflict. The two countries had been fighting since the 1980s over a disputed mountain enclave, Nagorno-Karabakh. Following the collapse of the Soviet Union, Armenia proved the clear victor in the conflict that followed in the early 1990s, occupying not only the disputed territory but parts of Azerbaijan as well.
In September 2020, as tensions mounted between the two countries, Armenia was prepared to defend those occupied territories with a well-equipped army of tanks and artillery. Thanks to its fossil-fuel exports, Azerbaijan, however, had been spending considerably more than Armenia on the most modern version of military preparedness. Still, Armenian leaders often touted their army as the best in the region. Indeed, according to the 2020 Global Militarization Index, that country was second only to Israel in terms of its level of militarization.
Yet Azerbaijan was the decisive winner in the 2020 conflict, retaking possession of Nagorno-Karabakh. The reason: automation.
“Azerbaijan used its drone fleet — purchased from Israel and Turkey — to stalk and destroy Armenia’s weapons systems in Nagorno-Karabakh, shattering its defenses and enabling a swift advance,” reported the Washington Post’s Robyn Dixon. “Armenia found that air defense systems in Nagorno-Karabakh, many of them older Soviet systems, were impossible to defend against drone attacks, and losses quickly piled up.”
Armenian soldiers, notorious for their fierceness, were spooked by the semi-autonomous weapons regularly above them. “The soldiers on the ground knew they could be hit by a drone circling overhead at any time,” noted Mark Sullivan in the business magazine Fast Company. “The drones are so quiet they wouldn’t hear the whir of the propellers until it was too late. And even if the Armenians did manage to shoot down one of the drones, what had they really accomplished? They’d merely destroyed a piece of machinery that would be replaced.”
The United States pioneered the use of drones against various non-state adversaries in its war on terror in Afghanistan, Iraq, Pakistan, Somalia, and elsewhere across the Greater Middle East and Africa. But in its 2020 campaign, Azerbaijan was using the technology to defeat a modern army. Now, every military will feel compelled not only to integrate increasingly more powerful AI into its offensive capabilities, but also to defend against the new technology.
To stay ahead of the field, the United States is predictably pouring money into the latest technologies. The new Pentagon budget includes the “largest ever” request for R&D, including a down payment of nearly a billion dollars for AI. As TomDispatch regular Michael Klare has written, the Pentagon has even taken a cue from the business world by beginning to replace its war managers — generals — with a huge, interlinked network of automated systems known as the Joint All-Domain Command-and-Control (JADC2).
The result of any such handover of greater responsibility to machines will be the creation of what mathematician Cathy O’Neil calls “weapons of math destruction.” In the global economy, AI is already replacing humans up and down the chain of production. In the world of war, AI could in the end annihilate people altogether, whether thanks to human design or computer error.
After all, during the Cold War, only last-minute interventions by individuals on both sides ensured that nuclear “missile attacks” detected by Soviet and American computers — which turned out to be birds, unusual weather, or computer glitches — didn’t precipitate an all-out nuclear war. Take the human being out of the chain of command and machines could carry out such a genocide all by themselves.
And the fault, dear reader, would lie not in our robots but in ourselves.
Robots of Last Resort
In my new novel Songlands, humanity faces a terrible set of choices in 2052. Having failed to control carbon emissions for several decades, the world is at the point of no return, too late for conventional policy fixes. The only thing left is a scientific Hail Mary pass, an experiment in geoengineering that could fail or, worse, have terrible unintended consequences. The AI responsible for ensuring the success of the experiment may or may not be trustworthy. My dystopia, like so many others, is really about a narrowing of options and a whittling away of hope, which is our current trajectory.
And yet, we still have choices. We could radically shift toward clean energy and marshal resources for the whole world, not just its wealthier portions, to make the leap together. We could impose sensible regulations on artificial intelligence. We could debate the details of such programs in democratic societies and in participatory multilateral venues.
Or, throwing up our hands because of our unbridgeable political differences, we could wait for a post-Trumpian savior to bail us out. Techno-optimists hold out hope that automation will set us free and save the planet. Laissez-faire enthusiasts continue to believe that the invisible hand of the market will mysteriously direct capital toward planet-saving innovations instead of SUVs and plastic trinkets.
These are illusions. As I write in Songlands, we have always hoped for someone or something to save us: “God, a dictator, technology. For better or worse, the only answer to our cries for help is an echo.”
In the end, robots won’t save us. That’s one piece of work that can’t be outsourced or automated. It’s a job that only we ourselves can do.
Follow TomDispatch on Twitter and join us on Facebook. Check out the newest Dispatch Books, John Feffer’s new dystopian novel, Songlands (the final one in his Splinterlands series), Beverly Gologorsky’s novel Every Body Has a Story, and Tom Engelhardt’s A Nation Unmade by War, as well as Alfred McCoy’s In the Shadows of the American Century: The Rise and Decline of U.S. Global Power and John Dower’s The Violent American Century: War and Terror Since World War II.
