
McLaughlin writes: "Twitter announced on Friday that it has shut down over 125,000 user accounts for promoting violent threats or terrorist acts, mostly having to do with ISIS, in less than a year. At the same time, the company made it clear that there is no automated way of distinguishing between protected speech and what it considers violations of its rules."

Twitter. (photo: Bethany Clarke/Getty Images)


Twitter Says There's No "Magical Algorithm" to Find Terrorists

By Jenna McLaughlin, The Intercept

08 February 16

Twitter announced on Friday that it has shut down over 125,000 user accounts for promoting violent threats or terrorist acts, mostly having to do with ISIS, in less than a year.

At the same time, the company made it clear that there is no automated way of distinguishing between protected speech and what it considers violations of its rules.

“As many experts and other companies have noted, there is no ‘magic algorithm’ for identifying terrorist content on the internet, so global online platforms are forced to make challenging judgment calls based on very limited information and guidance,” the company said.

“As an open platform for expression, we have always sought to strike a balance between the enforcement of our own Twitter Rules covering prohibited behaviors, the legitimate needs of law enforcement, and the ability of users to share their views freely — including views that some people may disagree with or find offensive,” the company said.

Just last month, top national security officials parachuted into Silicon Valley to meet with technology executives and ask for technology “that could make it harder for terrorists to use the internet … or easier for us to find them when they do.”

Scientists tend to agree that this is impossible, based on the rarity of terrorist attacks and the unique, unpredictable circumstances surrounding them — though it hasn’t stopped companies like CIA-funded Palantir from trying. These efforts have been criticized because they generate too many false positives, and cast suspicion on far more innocent people than true terrorists lurking in their midst.
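A rough back-of-the-envelope calculation shows why rarity is the core problem. The numbers below are purely hypothetical assumptions for illustration, not figures reported by Twitter or The Intercept, but they make the point: even a detector that is right 99 percent of the time would flag far more innocent users than actual offenders.

    # Base-rate arithmetic in Python. All numbers are illustrative assumptions.
    users = 300_000_000            # hypothetical total accounts on a platform
    offenders = 10_000             # hypothetical accounts actually promoting terrorism
    sensitivity = 0.99             # assumed: detector catches 99% of real offenders
    false_positive_rate = 0.01     # assumed: detector wrongly flags 1% of innocent accounts

    correctly_flagged = sensitivity * offenders
    wrongly_flagged = false_positive_rate * (users - offenders)

    print(f"Real offenders flagged:    {correctly_flagged:,.0f}")
    print(f"Innocent accounts flagged: {wrongly_flagged:,.0f}")
    # Share of flagged accounts that are actually offenders (the precision):
    print(f"Precision: {correctly_flagged / (correctly_flagged + wrongly_flagged):.2%}")

With these assumed rates, roughly 3 million innocent accounts would be flagged for every 10,000 real offenders caught, which is the false-positive problem critics point to.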

Algorithms are better for exerting social control or monitoring political views than they are for predicting large-scale violence.

Twitter’s new policy instead stresses the importance of human monitoring, reports from users, and delicate decision making.

Twitter’s system doesn’t sound all that different from what Facebook does. Facebook reportedly has a team dedicated to responding to user complaints, which then looks for similar content in the network of the offending accounts.
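Very loosely sketched, that kind of complaint-driven workflow amounts to seeding a human review queue from a report and then expanding it to nearby accounts. The data structures and function below are hypothetical and do not describe any actual Facebook or Twitter system; they only illustrate the idea of routing a reported account's network to reviewers instead of relying on automated classification.

    # Minimal sketch of a complaint-driven review queue, using assumed toy data.
    from collections import deque

    # Hypothetical graph: each account maps to the accounts it interacts with.
    network = {
        "reported_account": ["contact_a", "contact_b"],
        "contact_a": ["contact_c"],
        "contact_b": [],
        "contact_c": [],
    }

    def build_review_queue(reported, network, max_depth=1):
        """Start from a user-reported account and collect nearby accounts for
        human reviewers to examine, rather than classifying them automatically."""
        queue = deque([(reported, 0)])
        seen = {reported}
        for_review = []
        while queue:
            account, depth = queue.popleft()
            for_review.append(account)
            if depth < max_depth:
                for neighbor in network.get(account, []):
                    if neighbor not in seen:
                        seen.add(neighbor)
                        queue.append((neighbor, depth + 1))
        return for_review

    print(build_review_queue("reported_account", network))
    # -> ['reported_account', 'contact_a', 'contact_b']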

Silicon Valley has pushed back on efforts made by high-ranking Sens. Dianne Feinstein, D-Calif., and Richard Burr, R-N.C., to essentially delegate to them the task of reporting signs of possible terrorist activity.

