The Tumblr Porn Ban Is a Window into the Unsustainable Logic of Internet Capitalism

Today Tumblr announced that they are effectively banning porn and sex-related content (with an exception for most text) from their platform as of December 17th of this year. This policy comes on the heels of Apple delisting the Tumblr app from the App Store because some child pornography slipped through Tumblr’s filters. The move follows similar bans by other digital technology and financial companies over the past few years, including PayPal, Patreon, and Facebook. Although I am always in support of deplatforming pedophilic content (and content created through trafficking), these moves, which also sweep up content created by sex workers and pornographic actors, are indicative of inherent problems with large social networks themselves and of how legitimate sex content (and the people who make it) is being victimized by them. Note that I am not a sex worker, so I’m only speaking about this from my perspective as a socialist and sociologist. Please go to my Twitter, where I’ve been retweeting sex workers’ unique perspectives on this topic.

Part of my original thread on this topic from earlier today.

As I mentioned in the above Twitter thread, it makes no logical sense for Tumblr (and many of these other platforms) to ban sex content when the problem was specifically child pornography, which again we all agree should be banned from the internet. First, by banning sex content these companies are leaving advertising and payment-processing money on the table, which seems like a contradictory thing for capitalists to do. Second, they are doing this when we know good and well that child pornography and sex-trafficking content is not the same thing as general sexual content or content created by sex workers. Third, Tumblr itself said that its delisting from the App Store was related to a piece of child porn that got past its already existing child porn filters. So they have filters and NSFW tags…but somehow banning everything is the solution. It doesn’t add up, chief.


The more likely reason for this ban is twofold. One is pressure from the government and the conservative forces that pushed and passed the anti-sex-work FOSTA bill (aka H.R. 1865, the Allow States and Victims to Fight Online Sex Trafficking Act of 2017), which makes internet platforms liable for hosting content that facilitates sex work or trafficking activities. The rhetoric around those laws, which too many liberals and leftists have also bought into, has equated sex work by free individuals with sex trafficking. The result has been that the platforms sex workers depend on for their work are banning them, for fear of ending up like Backpage, which is being legally attacked over the content posted on its site. In this anti-sex-work environment it is easy for any company feeling this state-facilitated pressure to simply wipe out all sex content. The problem, as discussed at length on Twitter (I won’t be linking directly to examples because I don’t want folks to be harassed via my article), is that sex workers, especially Black, Native, and other women of color, are being cut off from safe venues in which to do their work. Because our society is very hostile to sex workers, especially women and LGBT people, it’s easy for Tumblr and other platforms to effectively throw them under the bus to protect themselves instead of fighting for the users who bring tons of value to their sites.

The second reason wholesale bans seem attractive to these platforms is their drive to maximize profits even in light of the inherent problems with their platforms. All the large platforms, Tumblr included, currently face situations where their growth has become a liability to their profitability. That liability is rooted in these platforms becoming de facto public spaces where all groups in society come to discuss issues and do business, which means they have a lot more untoward activity to deal with. The obvious solution from a user’s perspective is to prioritize the safety of users and build robust moderation and reporting systems. The problem is that doing either cuts into the profits of these companies: banning Nazis, for example, will lower ad revenue, and a well-organized moderation division costs money. I think many of these companies thought that all they needed to do to scale was add more servers, when in reality they have to govern millions of people according to their own values, those of their users, and those of the states that allow them to do business.

In light of that contradiction, they seem to be opting to argue that it’s simply too hard to moderate this much content, and to wholesale-ban inconvenient groups as they become less profitable. That’s why sex workers are attacked but Nazis aren’t: the political and economic cost of banning sex workers is much lower than the cost of banning Nazis…which says a lot about this society. What they won’t face is the possibility that their platforms are simply too large, or too badly organized at their core, for the purposes users want to put them to. Even if they recognize the fundamental flaws in their business model, they won’t admit them until they absolutely have to, because doing so would likely mean a drop in their stocks, which depend on investors believing that profits will grow indefinitely. Although Silicon Valley pushes this logic to the extreme, the response by these platforms to this problem is that of all capitalists: extract maximum profits before the market crashes, and escape before you are held responsible for the carnage you leave behind. As these platforms continue to grow, expect more of this “cut the fat” logic to be applied to communities on their sites that become economically or politically inconvenient, while the vertical monopolies these same platforms are building make the market increasingly hostile to any newcomers. And considering that we live in a white supremacist settler-colonial society, I’d put money on marginalized non-white people (especially poor, disabled, and LGBT folks) bearing the cost of this online capital expansion and increasing state control over the web.

Against Black Inclusion in Facial Recognition

By Nabil Hassein

Researchers have documented the frequent inability of facial recognition software to detect Black people’s faces due to programmers’ use of unrepresentative data to train machine learning models.1 This issue is not unique, but systemic; in a related example, automated passport photo validation has registered Asian people’s open eyes as being closed.2 Such technological biases have precedents in mediums older than software. For example, color photography was initially optimized for lighter skin tones at the expense of people with darker skin, a bias corrected mainly due to the efforts and funding of furniture manufacturers and chocolate sellers to render darker tones more easily visible in photographs — the better to sell their products.3 Groups such as the Algorithmic Justice League have made it their mission to “highlight algorithmic bias” and “develop practices for accountability during the design, development, and deployment of coded systems”.4 I support all of those goals abstractly, but at a concrete level, I question whose interests would truly be served by the deployment of automated systems capable of reliably identifying Black people.

As a longtime programmer, I know first-hand that software can be an unwelcoming medium for Black folks, not only because of racism among programmers, but also because of biases built into code, which programmers can hardly avoid as no other foundations exist to build on. It’s easy for me to understand a desire to rid software of these biases. Just last month, I wrote up a sketch of a proposal to decolonize the Pronouncing software library5 I used in a simple art project to generate rhymes modeled on those of my favorite rapper.6 So I empathized when I heard Joy Buolamwini of the Algorithmic Justice League speak about wearing a white mask so that her own highly imaginative “Aspire Mirror” project, which involves facial recognition, could perceive her existence.7 Modern technology has rendered literal Frantz Fanon’s metaphor of “Black Skin, White Masks”.8
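For readers unfamiliar with it, here is a minimal sketch of the kind of lookup Pronouncing offers (the library is built on the Carnegie Mellon Pronouncing Dictionary; the example word is illustrative, not from my project):

```python
# A minimal sketch of the rhyme lookup the Pronouncing library provides.
# Assumes the standard `pronouncing` Python package is installed.
import pronouncing

# rhymes() returns dictionary words whose pronunciations rhyme with the input.
for word in pronouncing.rhymes("liberation")[:10]:
    print(word)
```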

Facial recognition has diverse applications, but as a police and prison abolitionist, the enhancement of state-controlled surveillance cameras (including police body cameras) to automatically identify people looms much larger in my mind than any other use.9 Researchers at Georgetown University found that fully half of American adults, or over 100 million people, are registered in one or another law enforcement facial recognition database, drawing from sources such as driver’s license photos.10 Baltimore Police used the technology to identify participants in the uprising following the murder of Freddie Gray.11 The US government plans to use facial recognition to identify every airline passenger exiting the United States.12 Machine learning researchers have even reinvented the racist pseudoscience of physiognomy, in a study claiming to identify criminals with approximately 90% accuracy based on their faces alone — using data provided by police.13

I consider it obvious that most if not all data collected by police to serve their inherently racist mission will be severely biased. It is equally clear to me that no technology under police control will be used to hold police accountable or to benefit Black folks or other oppressed people. Even restricting our attention to machine learning in the so-called “justice” system, examples abound of technology used to harm us, such as racist predictive models used by the courts to determine bail and sentencing decisions — matters of freedom and captivity, life and death.14 Accordingly, I have no reason to support the development or deployment of technology which makes it easier for the state to recognize and surveil members of my community. Just the opposite: by refusing to don white masks, we may be able to gain some temporary advantages by partially obscuring ourselves from the eyes of the white supremacist state. The reality for the foreseeable future is that the people who control and deploy facial recognition technology at any consequential scale will predominantly be our oppressors. Why should we desire our faces to be legible for efficient automated processing by systems of their design? We could demand instead that police be forbidden to use such unreliable surveillance technologies. Anti-racist technologists could engage in high-tech direct action by using the limited resources at our disposal to further develop extant techniques for tricking machine learning models into misclassifications,15 or distributing anti-surveillance hardware such as glasses designed to obscure the wearer’s face from cameras.16
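To make that last suggestion concrete, here is a minimal sketch of one such extant technique, the Fast Gradient Sign Method, which perturbs an image just enough that a classifier mislabels it. This assumes PyTorch; `model`, `image`, and `label` are hypothetical stand-ins for a real classifier and its inputs, not code from any project cited above.

```python
# A minimal sketch of the Fast Gradient Sign Method (FGSM), one well-known
# technique for tricking machine learning models into misclassifications.
# Assumes PyTorch; `image` is an [N, C, H, W] tensor with values in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return a copy of `image` nudged toward misclassification."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that most increases the loss,
    # then clamp back into the valid image range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```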

This analysis clearly contradicts advocacy of “diversity and inclusion” as the universal or even typical response to bias. Among the political class, “Black faces in high places” have utterly failed to produce gains for the Black masses.17 Similarly, Black cops have shown themselves just as likely as white cops to engage in racist brutality and murder.18 Why should the inclusion of Black folks in facial recognition, or for that matter in the racist technology industry, be different? Systemic oppression cannot be addressed by a change in the complexion of the oppressor, as though a rainbow 1% and more white people crowding the prisons would mean justice. That’s not the world I want to live in. We must imagine and build a future of real freedom.

All of the arguments I’ve presented could be (and have been) applied to many domains beyond facial recognition. I continue to grapple with what that means for my own work as a technologist and a political organizer, but I am firm already in at least two conclusions. The first is that despite every disadvantage, we must reappropriate oppressive technology for emancipatory purposes. The second is that the liberation of Black folks and all oppressed peoples will never be achieved by inclusion in systems controlled by a capitalist elite which benefits from the perpetuation of racism and related oppressions. It can only be achieved by the destruction of those systems, and the construction of new technologies designed, developed, and deployed by our own communities for our own benefit. The struggle for liberation is not a struggle for diversity and inclusion — it is a struggle for decolonization, reparations, and self-determination. We can realize those aspirations only in a socialist world.

Nabil Hassein is a software developer and organizer based in Brooklyn, NY.

  1. https://www.digitaltrends.com/photography/google-apologizes-for-misidentifying-a-black-couple-as-gorillas-in-photos-app/; https://www.theguardian.com/technology/2017/may/28/joy-buolamwini-when-algorithms-are-racist-facial-recognition-bias
  2. https://www.dailydot.com/irl/richard-lee-eyes-closed-facial-recognition/
  3. https://petapixel.com/2015/09/19/heres-a-look-at-how-color-film-was-originally-biased-toward-white-people/
  4. https://www.ajlunited.org
  5. https://pronouncing.readthedocs.io/en/latest/
  6. https://nabilhassein.github.io/blog/generative-doom/
  7. https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms/
  8. Frantz Fanon: “Black Skin, White Masks”.
  9. https://theintercept.com/2017/03/22/real-time-face-recognition-threatens-to-turn-cops-body-cameras-into-surveillance-machines/
  10. https://www.law.georgetown.edu/news/press-releases/half-of-all-american-adults-are-in-a-police-face-recognition-database-new-report-finds.cfm
  11. http://www.aclunc.org/docs/20161011_geofeedia_baltimore_case_study.pdf
  12. https://www.dhs.gov/sites/default/files/publications/privacy-pia-cbp030-tvs-may2017.pdf
  13. https://www.rt.com/news/368307-facial-recognition-criminal-china/
  14. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  15. https://codewords.recurse.com/issues/five/why-do-neural-networks-think-a-panda-is-a-vulture; https://cvdazzle.com/
  16. https://blogs.wsj.com/japanrealtime/2015/08/07/eyeglasses-with-face-un-recognition-function-to-debut-in-japan/
  17. Keeanga-Yamahtta Taylor: “From #BlackLivesMatter to Black Liberation”, Chapter 3, “Black Faces in High Places”.
  18. https://mic.com/articles/118290/it-s-time-to-talk-about-the-black-police-officers-who-killed-freddie-gray