Monday, March 12, 2012

"The Dark Side of Facebook"


That's the title of a report in the Telegraph a couple of weeks ago. I came across it today, and it confirms many of my previously expressed fears about the security risks of using Facebook. Here's an excerpt.

Some four billion pieces of content are shared [on Facebook] every day by 845 million users. And while most are harmless, it has recently come to light that the site is brimming with paedophilia, pornography, racism and violence – all moderated by outsourced, poorly vetted workers in third world countries paid just $1 an hour.

. . .

Although this invisible army of moderators receive basic training, they work from home, do not appear to undergo criminal checks, and have worrying access to users’ personal details. In a week in which there has been an outcry over Google’s privacy policies, can we expect a wider backlash over the extent to which we trust companies with our intimate information?

Last month, 21-year-old Amine Derkaoui gave an interview to Gawker, an American media outlet. Derkaoui had spent three weeks working in Morocco for oDesk, one of the outsourcing companies used by Facebook. His job, for which he claimed he was paid around $1 an hour, involved moderating photos and posts flagged as unsuitable by other users.

“It must be the worst salary paid by Facebook,” he told The Daily Telegraph this week. “And the job itself was very upsetting – no one likes to see a human cut into pieces every day.”

Derkaoui is not exaggerating. An articulate man, he described images of animal abuse, butchered bodies and videos of fights. Other moderators, mainly young, well-educated people working in Asia, Africa and Central America, have similar stories. “Paedophilia, necrophilia, beheadings, suicides, etc,” said one. “I left [because] I value my sanity.” Another compared it to working in a sewer. “All the ---- of the world flows towards you and you have to clean it up,” he said.

. . .

Neither is Facebook alone in outsourcing unpleasant work. Adam Levin, the US-based chief executive of Criterion Capital Partners and the owner of British social network Bebo, says that the process is “rampant” across Silicon Valley.

“We do it at Bebo,” he says. “Facebook has so much content flowing into its system every day that it needs hundreds of people moderating all the images and posts which are flagged. That type of workforce is best outsourced for speed, scale and cost.”

A spokesman for Twitter said that they have an internal moderation team, but refused to answer a question about outsourcing. Similarly, a Google spokesperson would not say how Google+, the search giant’s new social network, will be moderated. Neither Facebook nor oDesk were willing to comment on anything to do with outsourcing or moderation.

. . .

The biggest worry for the rest of us, however, is that the moderation process isn’t nearly secure enough. According to Derkaoui, there are no security measures on a moderator’s computer to stop them uploading obscene material themselves. Despite coming into daily contact with such material, he was never subjected to a criminal record check. Where, then, is the oversight body for these underpaid global police? Quis custodiet ipsos custodes?

. . .

... maybe disgruntled commuters, old schoolfriends and new mothers will think twice before sharing intimate information with their “friends” – only to find that two minutes later it’s being viewed by an under-vetted, unfulfilled person on a dollar an hour in an internet café in Marrakech.


There's more at the link, and in a follow-up article. A third article discusses Facebook's guidelines for whether or not to censor a posting.

There will doubtless be many who argue that this alleged 'security risk' is really of trifling concern; that Facebook's moderators will be so overwhelmed with the sheer volume of posts requiring attention that they won't be able to focus on any particular post, or picture, or user. However, I think such confidence is misplaced. For example, the moderator who gave the above interview later admitted that he'd gone back to research further information about particular users. No prizes for guessing what sort of posts 'aroused' his interest! It's creepy, to put it mildly.

I suppose the ultimate answer to this sort of security threat is not to post anything overly personal, too revealing, or that can potentially be used to blackmail, threaten or coerce you in any way. However, I'm willing to bet that most Facebook users don't take those considerations into account at all. They - like most of us - develop a blind, unthinking faith in technology, and use it without worrying about the potential consequences. The bad guys rely on that reaction, and exploit it ruthlessly.

I remain adamant that I won't use Facebook or other 'social media' at all unless I absolutely have to; and if that should become necessary, for commercial or other reasons, I'll strictly limit the information I put up there. I can only advise my readers to do likewise.

Peter

1 comment:

Anonymous said...

The most troubling aspect is that many websites and apps are now requiring you to log in via Facebook. One example is Spotify, which is similar to Grooveshark and Pandora. I'd heard good reviews and was dismayed to learn it had no traditional login feature. As a result I did not sign up, as I refuse to let fb become the ID of the internet. I would delete it as soon as I graduate, but I have many work contacts friended as well. That's when it gets complicated. As far as anything useful, I haven't seen anything remotely intelligent posted in quite some time, including the viral "Invisible Children/Kony" campaign.