Would you tell Facebook you're happy to see all the bared flesh it can show you? And that the more gratuitous violence it pumps into your News Feed the better?
Obtaining answers to where a person's line on viewing controversial types of content lies is now on Facebook's product roadmap, explicitly stated by CEO Mark Zuckerberg in a lengthy blog post last week, not-so-humbly entitled "Building a global community".
Make no mistake, this is a huge shift from the one-size-fits-all community standards Facebook has peddled for years, crashing into controversies of its own when, for example, it disappeared an iconic Vietnam war photograph of a naked child fleeing a napalm attack.
In last week's wordy essay, Zuckerberg generally tries to promote the grandiose notion that Facebook's future role is to be the glue holding the fabric of global society together, even as he fails to flag the obvious paradox: that technology which helps amplify misinformation and prejudice might not be so great for social cohesion after all. Amid all this, the Facebook CEO sketches out an impending change to community standards that will see the site actively ask users to set a personal tolerance threshold for viewing various types of less-than-vanilla content.
On this Zuckerberg writes:
"The idea is to give everyone in the community options for how they would like to set the content policy for themselves. Where is your line on nudity? On violence? On graphic content? On profanity? What you decide will be your personal settings. We will periodically ask you these questions to increase participation and so you don't need to dig around to find them. For those who don't make a decision, the default will be whatever the majority of people in your region selected, like a referendum. Of course you will always be free to update your personal settings anytime.

"With a broader range of controls, content will only be taken down if it is more objectionable than the most permissive options allow. Within that range, content should simply not be shown to anyone whose personal controls suggest they would not want to see it, or at least they should see a warning first. Although we will still block content based on standards and local laws, our hope is that this system of personal controls and democratic referenda should minimize restrictions on what we can share."
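To make the proposed mechanics concrete, here is a minimal sketch of the settings model Zuckerberg describes: each user may set a personal tolerance level per content category, and anyone who sets nothing inherits the majority ("referendum") choice for their region. This is purely illustrative; the level names, the fallback default, and all function names are assumptions, not anything Facebook has published.

```python
from collections import Counter

# Hypothetical tolerance levels, ordered least to most permissive.
LEVELS = ["hide_all", "warn_first", "show_all"]

def regional_default(region_choices):
    """Majority vote among users in a region who did set a preference."""
    if not region_choices:
        return "warn_first"  # arbitrary fallback for this sketch
    return Counter(region_choices).most_common(1)[0][0]

def effective_setting(user_choice, region_choices):
    """A user's explicit choice wins; otherwise the regional majority applies."""
    return user_choice if user_choice is not None else regional_default(region_choices)

def should_show(content_level, user_setting):
    """Show content only if the viewer's threshold is at least as permissive
    as the level the content requires."""
    return LEVELS.index(user_setting) >= LEVELS.index(content_level)
```

Note the trap this sketch makes visible: a user who never answers the settings questions is governed entirely by `regional_default`, i.e. by other people's choices, which is exactly the dynamic criticized later in this piece.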
A following paragraph caveats that Facebook's in-house AI does not currently have the ability to automatically identify every type of (potentially) problematic content. Though the engineer in Zuck is apparently keeping the flame of possibility alive by declining to state the obvious: that understanding the entire spectrum of possible human controversies would require a truly super-intelligent AI.
(Meanwhile, Facebook's in-house algorithms have shown themselves hopeless at correctly IDing some pretty bald-faced fakery. And he's leaning on third-party fact-checking organizations, who do employ actual humans to separate truth and lies, to help fight the spread of Fake News on the platform, so set your expectations accordingly.)
"It's worth noting that major advances in AI are required to understand text, photos and videos to judge whether they contain hate speech, graphic violence, sexually explicit content, and more. At our current pace of research, we hope to begin handling some of these cases in 2017, but others will not be possible for many years," is how Zuck frames Facebook's challenge here.
The problem is that this, and indeed much else in the ~5,000-word post, is mostly misdirection.
The issue is not whether Facebook will be able to do what he suggests is its ultimate AI-powered goal (i.e. scan all user-shared content for problems; categorize everything accurately across a range of measures; and then dish up exactly the stuff each user wants to see, in order to keep them fully engaged on Facebook, and save Facebook from any more content removal controversies). Rather, the point is that Facebook is going to be asking users to explicitly give it even more personal data.
Data that is necessarily highly sensitive in nature, seeing as the community governance issue he's flagging here relates to controversial content: nudity, violence, profanity, hate speech, and so on.
Yet Facebook remains an advertising business. It profiles all its users, and even tracks non-users' web browsing habits, continually harvesting digital usage signals to feed its ad targeting algorithms. So the obvious question is whether any additional data Facebook gathers from users via a content threshold setting will become another input for fleshing out its user profiles, and so for helping it target ads.
We asked Facebook whether it intends to use data provided by users responding to content settings-related questions for ad targeting purposes in future, but the company declined to comment further on Zuckerberg's post.
You might also wonder whether, given the scale of Facebook's tracking systems and machine learning algorithms, it couldn't essentially infer individuals' likely tolerance for controversial content. Why does it need to ask at all?
And isn't it also odd that Zuckerberg didn't suggest an engineering solution for managing controversial content, given, for example, he's been so intent on pursuing an engineering solution to the problem of Fake News? Why doesn't he talk about how AI might also rise to the complex challenge of figuring out personal content tastes without offending people?
"To some extent they probably can already make a very educated, very good guess at [the types of content people are okay seeing]," argues Eerke Boiten, senior lecturer in computer science at the University of Kent. "But telling Facebook explicitly what your line in the sand is on different categories of content is in itself giving Facebook a whole lot of quite high level information that they can use for profiling again.

"Not only could they derive that information from what they already have but it would also help them to fine-tune the information they already have. It works in two directions. It reinforces the profiling, and could be deduced from profiling in the first place."
"It's checking their inferred data is accurate," agrees Paul Bernal, law lecturer at the University of East Anglia. "It's almost testing their algorithms. We reckon this about you, this is what you say and this is why we've got it wrong. It can actually, effectively be improving their ability to determine information on people."
Bernal also makes the point that there could be a difference, in data protection law terms, between Facebook users directly handing over personal information about content tolerances to Facebook (i.e. when it asks them to tell it) vs such personal information being inferred by Facebook's indirect tracking of their usage of its platform.
"In data protection terms there is at least some question, if they derive information, for example sexuality, from our shopping habits, whether that brings into play all of the sensitive personal data rules. If we've given it consensually then it's clearer that they have permission. So again they may be trying to head off issues," he suggests. "I do see this as being another data-grab, and I do see this as being another way of enriching the value of their own data and testing their algorithms."
"I'm not on Facebook, and this makes it even clearer to me why I'm not on Facebook, because it seems to me in particular this is increasing risks and increasing our vulnerability at a time when we should be doing exactly the opposite," adds Bernal.
Facebook users are able to request to see some of the personal data Facebook holds on them. But, as Boiten points out, this list is by no means complete. "What they give you back is not the full information they have on you," he tells TechCrunch. "Because some of the tracking they are doing is really more sophisticated than that. I am absolutely, 100 per cent certain that they are hiding stuff in there. They don't give you the full information even if you ask for it.

"A very simple example of that is that they memorize your search history within Facebook. Even if you delete your Facebook search history it still autocompletes on the basis of your past searches. So I have no doubt whatsoever that Facebook knows more than they are letting on. There remains a complete lack of transparency."
So it at least seems fair that Facebook could take a shot at inferring users' content thresholds, based on the mass of personal data it holds on individuals. (One anecdotal example: I recall once seeing a notification float into my News Feed that a Facebook friend had liked a page called "Rough sex", which would seem to be just the sort of relevant preference signal Facebook could use to auto-determine the types of content thresholds Zuckerberg is talking about, at least for users who have shared enough such signals with it.)
But of course if it did that, Facebook would be diving headfirst into some very controversial territory. And underlining exactly how much it knows about you, dear user, which might come across as especially creepy when paired with a News Feed that's injecting graphic content into your eyeballs because it thinks that's what you want to see.
"Given the level at which they're profiling we shouldn't tell them any more," says Boiten, when asked whether people should feel okay feeding Facebook info about their line in the sand, pointing to another controversy that arose last year when it emerged Facebook's ad capabilities could be used to actively exclude or include people with specific "ethnic affinities" (aka racial profiling).
"If they make the advances in understanding of natural language content, the AI slant that Zuckerberg's [blog post] promises, probably unrealistically, but nevertheless if they get that sort of advantage, then blimey, they're going to know an awful lot more than they already do," he adds.
"You can bet that they're going to be profiling people based on their standards settings in this way," adds Bernal. "That's how it works, and then they aggregate it and they'll be using it, I bet, to target their advertising and so on. It is more total information management. The more they can get, and the more granular those personal controls get, the more information they're picking up.

"And I do think it's disingenuous in that Zuckerberg's post is not mentioning any of this."
While it's not yet clear exactly how (or when) these content settings will be implemented, the structure sketched out by Zuckerberg already looks pretty problematic, given that Facebook users who do not want to share any additional sensitive signals with the ad-targeting giant will be forced to tolerate their peers' predilections.
Which immediately puts pressure on users to confess their content likes/dislikes to Facebook in order to avoid this "hell is other people's tastes" bind, i.e. in order not to be subject to the preferences of a local median, and to avoid being tainted by association with the types of content showing up (or not showing up) in their News Feed. After all, the Facebook News Feed is inherently individual, so there's a risk of the character of the content in a user's Feed being assumed to be a reflection of their personal tastes.
So by not telling Facebook anything about your content thresholds, you're put into a default corner of telling Facebook you're okay with whatever the regional average is okay with, content-wise. And that may be the opposite of okay for you.
"I think there's another little trap here that they've done before," continues Bernal. "When you make controls granular it looks as if you're giving people control, but actually people generally get bored and don't bother changing anything. So you can say you've given people control, and now it's all much better, but in general they don't use it. The few people who do are the few people who would understand it and get round it anyway. It will be very interesting to see to what extent people actually use it."
Such a majority-rule system could also be at risk of being gamed by, let's say, mischievous 4channers banding together and working to get graphic boundaries opened up in a region where more conservative standards are the norm.
"I can see people gaming this kind of system in the way that all kinds of online polls and referenda are gamed, somebody will work out the way to get the systems set the way they want. There are all kinds of possibilities," argues Bernal. "There's also a danger of community leaders taking some degree of control; recommending people particular settings. I'm wary of Zuckerberg ending up doing this so you have standards for particular kinds of people, so you choose the standards that someone else has effectively chosen for you."
A lot will depend on the implementation of the content controls, certainly, but when you look at how easily, for example, Facebook's trending news section, not to mention its News Feed in general, has been shown to be vulnerable to manipulation (by purveyors of clickbait, Fake News etc), it suggests there could well be risks of content settings being turned on their head, and ending up causing more offense than they were trying to prevent.
Another point Bernal makes is that shifting some of the responsibility for the types of content being shown onto users implicitly shifts some of the blame away from Facebook when controversies inexorably arise. So, basically: see something you don't like in your News Feed in future? Well, that's YOUR fault now! Either you didn't set your Facebook content settings correctly. Or you didn't set any at all. Tsk!
In other words, Facebook gets to deflect objections to the type of content its algorithms are shunting into the News Feeds of users all over the world as a settings configuration issue, sidestepping having to address the more systemic and fundamental flaw embedded in the design of the Facebook product: aka the filter bubble issue.
Facebook has long been accused of encouraging a narrowing of personal perspective via its user-engagement-focused content hierarchies. And Zuckerberg's blog post has a fair amount of fuzzy thinking on filter bubbles, as you might expect from the chief of an engagement-algorithm-driven content distribution machine. But for all his talk of building global community, he offers no clear fix for how Facebook can help break users out of the AI-enabled, navel-gazing circles its business model creates.
Yet a very simple fix for this does exist, which would be to show people a chronological News Feed of friends' posts as the default, vs the current default being the algorithmically powered one. Facebook users can manually switch to a chronological feed, but the option is tricky to find, and clearly actively discouraged, as the choice gets reset back to the AI Feed either per session or very soon after. In short, the choice barely exists.
The root problem here, of course, is that Facebook's business benefits massively from algorithmically engaged users. So there's zero chance Zuck is going to announce it's abandoning such a lucrative and (thus far) scalable default. So his solitary claim in the essay to worry about fake news and filter bubbles rings very hollow indeed.
Indeed, there is also a risk that giving users controls over controversial content could exacerbate the filter bubble effect further. Because a user who can effectively dial down all controversy to zero is probably not going to be encountering news about conflict in Syria, say. It's going to be a lot easier for them to live inside a padded Facebook stream populated with cute photos of babies and kittens. News? What news? Awwww, how purdy!
And while that might make a pleasing experience for individuals who want to disengage from wider global realities, it's reductive for society as a whole if lots of people start retreating into rose-tinted filter bubbles. (Dialing up hateful content, should that also be possible via the future Facebook content filters, would obviously also likely have a deleterious and divisive societal impact.)
The point is, giving people easy opt-outs for types of content that might push them outside their comfort zone, and force them to confront unfamiliar ideas or encounter a different or difficult perspective, just offers a self-enabled filter bubble (alongside the algorithmic filter Facebook users get pushed inside when inside Facebook, thanks to its default setting).
This issue is of rising importance given how many users Facebook has, and how the massively dominant platform has been shown to be increasingly cannibalizing traditional news media; becoming a place people go to get news generally, not just to learn what their friends are up to.
And remember, all this stuff is being discussed in a post where Zuckerberg is seeking to position Facebook as the platform to glue the world together in a global community, at a fractious moment in history. Which would imply giving users the ability to access perspectives far-flung from their own, rather than helping people retreat into reductive digital comfort zones. A multitude of disconnected filter bubbles certainly does not have the ring of "global community" to me.
Another glaring omission in Zuckerberg's writing is the risk of Facebook's cache of highly personal (and likely increasingly sensitive) data being misused by overreaching governments seeking to clamp down on particular groups within society.
It's especially strange for a US CEO to stay silent on this at this point in time, given how social media searches by US customs agents have ramped up following President Trump's Executive Order on immigration last month. There have also been suggestions that foreigners wanting to enter the US could be forced to hand over their social media passwords to US border agents in future. All of which has very clear and very alarming implications for Facebook users and their Facebook data.
Yet the threat posed to Facebook users by government agencies appropriating accounts to enable highly intrusive, warrantless searches, and presumably to go on phishing expeditions for incriminating content (perhaps, in future, as a matter of course for all foreigners traveling to the US), apparently does not merit public consideration by Facebook's CEO.
Instead, Zuckerberg is calling for more user data, and for increased use of Facebook.
While clearly such calls are driven by the commercial imperatives of his business, the essay is couched as a humanitarian manifesto. So those calls seem either willfully ignorant or recklessly disingenuous.
I'll leave the last word to Bernal: "The idea that we concentrate all our stuff in one place, both in one online place (i.e. Facebook) and one physical place (i.e. our smartphones), puts us at greater risk when we have governments who are likely to take advantage of those risks. And are actually looking at doing things that will be putting us under pressure. So I think we need to be looking at diversifying, rather than looking at one particular route in."
"Anyone who's got any sense is not going to be doing anything that's even slightly risky on Facebook," he adds. "And should be looking for alternatives. Because while the border guards may know about Facebook and Twitter, they're not going to know about the more obscure systems, and they're not going to be able to get access to them. So now is actually the time for us to be saying let's do less Facebook, not more Facebook."