Tumblr has lost 30 percent of web traffic since December.

theverge.com


This doesn't surprise me at all. I run a reasonably popular non-porn, submissions-based blog and immediately after the ban was implemented, our numbers tanked. Submissions dropped from 25-35 per day to around 10-20, while the number of notes (likes+reblogs+replies) per post has dropped from 600-800 to 200-400.

Unfortunately, we still see about the same total number of spambots and fake blogs in our notes. So at least from my own anecdotal experience, the ban did nothing except drive away human users.

A lot of our followers asked if/when we would move to another platform, but unfortunately for them (and us), Tumblr is the only major blog platform we know of which supports a curated, moderated submissions-based blog. Reddit is the closest runner-up in that it allows submissions and can be moderated, but isn't curatable for all intents and purposes. I suspect if (when) Tumblr goes under, it's going to take my blog with it, which is disappointing.


I run a popular tech blog on Tumblr and I've noticed the same thing.

Their faulty AI censor bot also flagged 20+ of my non-porn tech posts as porn, including a photo of Mars. I sent them all in for manual review months ago; no response yet.

Conversely, I'm also still seeing some actual porn on Tumblr. It's apparently not difficult to fool the AI censor with carefully crafted images.


I had several text posts flagged by their algorithm. I still don't know how it managed to do that.

Just add a #sfw tag to your posts. Supposedly that bypasses the scanner.

This isn't production-ready yet, but there's a federated blogging platform being developed called "Plume" that might satisfy a few of the criteria you've listed in this thread: https://joinplu.me

Well, going by some of the false positive reports about Tumblr's NSFW filter here I'm not entirely sure we could call Tumblr "production ready" at the moment either, heh.

I find this very interesting. Could you elaborate a bit more on what features Tumblr has that you require?

The main thing Tumblr offers that I haven't seen anywhere else is curated submissions. Like the sister reply said, on Tumblr, you can allow anyone to submit posts to your blog (including anonymous users and users without Tumblr accounts). You can set restrictions, such as "only text posts or photo+caption submissions", and provide tags for submitters to choose from (e.g., on my blog submitters can choose the tag for the TTRPG they're using).

The key differentiator is the curation. Only those submissions which my mod team approves get posted to the blog, unlike Reddit where all submissions get posted first and then up/downvoted accordingly. I think (?) Reddit might have a way to curate, but it's not feasible at the scale of Reddit and a subreddit equivalent of the blog would quickly overwhelm the mod team and have to become uncurated. Considering the number of low-quality submissions we actively curate out, the result would be a significantly degraded experience. Which is not to say a transfer to Reddit is impossible, but it would definitely be An Effort, and one I'm not sure I'm willing to put in for what's been a fun hobby project for the last seven years.

Asks are another Tumblr feature that I haven't seen in other places. An ask is a question submitted to the blog mods, which the mods can reply to either publicly or privately, and which are visually differentiated on users' dashboards. On Reddit, a question to the mods would look exactly like any other post, and the mods' answer could easily be lost in the sea of general comments on the post.

There are also lesser features that Tumblr has that I've never seen on other platforms. Tagging is a big part of Tumblr culture and something I've had a lot of fun with on my blog. Reblogs and replies are very different forms of interaction than retweets/@'s. Fun coincidences like "dash did a thing" (where two unrelated posts show up together on a user's dashboard in a way that's amusing) don't appear to be a thing on other platforms for various reasons. The visual layout and formatting of Tumblr posts on the dashboard tends to be more readable than Reddit or Twitter.

These are all little things, but they're little things that have built up the personality of the blog over the years. I'd rather close the blog with its personality intact than watch it slowly die as yet another subreddit plagued by low-quality posts, insufficient moderation, and sameness.

(This sounds like I'm down on Reddit, which is not the case at all - I like Reddit, just not as a possible host for my blog.)


You are wrong about Reddit.

* Reddit's AutoModerator has an option where every submission and/or comment must first be approved by a moderator. AutoModerator supports relatively complex moderation rules: https://www.reddit.com/wiki/automoderator/full-documentation You can also allow only approved submitters, or design rules for who is allowed to submit.

* Questions to moderators on Reddit should be sent as messages to the moderators. They show up in a completely different place, where moderators can reply and discuss.
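The approval-required workflow described above can be sketched as an AutoModerator rule. This is a hedged example based on the documentation linked above, not a tested config; double-check the field names against that documentation before using it:

```yaml
# Hold every submission from a non-approved user for moderator review.
# "action: filter" removes the post into the modqueue until a moderator
# explicitly approves it; approved submitters ("contributors") bypass it.
type: submission
author:
    is_contributor: false
action: filter
action_reason: "Submission from non-approved user; awaiting mod review"
```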


Automod is what I was thinking of when I said I thought Reddit has some tools to do this. The problem, like I mentioned above, is one of scale. Automod is not capable of making judgments about post quality; it can't do anything to enforce a "must be something someone said" rule, for example. Nor is it capable of flagging the many, many variants of the same low-effort submission we frequently get (not without a lot of false positives, at least). That's what my blog's curation team does, and what takes the most time. We can manage the blog's former peak submission rate of 25-35 per day, but given the nature of Reddit I expect that number would skyrocket, overwhelming the mod team.

Modmail isn't public, is it? It's only viewable to the sub's mods. If it isn't public, then it is not equivalent to asks.


What you want sounds within Reddit's normal use case. If it's not a site feature, it's a feature of automod: https://www.reddit.com/r/AutoModerator/

There are a lot of subreddits configured to only allow text posts. One of the subreddits I frequent is configured to delete your post if it's an image post that doesn't have a top-level comment by the poster, as the subreddit rules require all image posts to have text descriptions (to guide conversation / mitigate low-quality posts).

I don't use Tumblr, so I don't know the full use case for tags, but from what you've described it sounds like flair: https://mods.reddithelp.com/hc/en-us/articles/360010513191-P...


For all we Tumblr users love to mock the blue hellsite, it really does do some neat and very unique things. I'm still holding out hope that it manages to stabilize somehow and stick around, because I've tried all the other social media sites and none of them has kept my interest.

Why not use a restricted subreddit and have submitters simply email links to you?

I don't want to restrict the subreddit, for one. I have around 295k followers on Tumblr; I can't hand-approve all of them. Plus, restricting the sub would keep new people from finding it.

Plus what the other commenter said about formatting and poor workflow. The mods can just barely handle the current workload of reviewing ~30-40 posts per day, choosing 20 to post, and adding tags as needed. If we had to open an email, then open a link (and hope the link was functional and not malicious), then review the submission, then copy the submission, then open the "submit a post" dialogue, then paste, then format, then post... yeah, that's way too much.

There's an IFTTT workflow for sending Tumblr posts to Reddit, but unfortunately the API it uses doesn't preserve any formatting, so you end up with a giant text blob. I don't think it could handle image posts. If that worked, I'd happily start sending things to a subreddit.


Restricted subreddit means that only approved users can post, but anyone can view. Private means that only approved users can view.

Another option is to set all posts by non-approved users to be automatically flagged (hidden to non-moderators), and moderators can un-flag the posts. Which, I think, is the workflow you're looking for? All you have to do is set the spam filter to "All" in the subreddit settings.

You still run into the problem that a lot of people might not want to use reddit, and you'll just end up fragmenting the people following your blog and lose a lot of readers.


Tumblr posts are multimedia, have tags, etc. There's no straightforward way to package all that in an email.

Also, it's just not a simple workflow to copy/paste from email. On Tumblr you just push a button and the post goes up.


Imagine if you had a popular twitter account but you were running low on content to post.

On Tumblr, you can have other people submit content for you to post on your Tumblr. You just have to approve or reject it.

They get visibility, you get a steady stream of content and your followers get a curated feed.


On top of it, they weren't entirely truthful about how the block would be implemented. A couple text-only tumblrs I followed had long been voluntarily marked adult, and the announcement made it sound like they'd be in the clear.

Not only were they not, the implementation was to simply turn on the safe-mode filter and remove the setting to turn it off, while still allowing the adult content on subscribers' dashboards. So they had no idea they were blocked from the public until someone told them, since subscribers could still interact from that one page.

I expect it to continue to drop as more realize this.


I've seen a post showing that you can change the value of the checkbox behind the UI element using "inspect element" and post the form back. The server side doesn't validate the value, so you can disable the flag.

This is probably patched now but if not it might show you how much they care about this.
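This class of bug (trusting a value that only the UI prevents you from changing) is exactly why the check has to happen server-side. A minimal, hypothetical sketch in Python; this is not Tumblr's actual code, and the function and field names are invented:

```python
# Hypothetical sketch of the bug described above: the server trusts a
# form field that the UI merely hides, so anyone who edits the form via
# "inspect element" and posts it back can flip the flag. The fix is to
# re-check authorization on the server instead of trusting the form.

def handle_settings_update(form, user_may_disable_safe_mode):
    """Apply a settings form submitted by the browser (sketch)."""
    settings = {}
    if user_may_disable_safe_mode:
        # Only honor the checkbox if this user is actually allowed to.
        settings["safe_mode"] = form.get("safe_mode") == "on"
    else:
        # Otherwise enforce the server-side default, whatever the form says.
        settings["safe_mode"] = True
    return settings

# A tampered form claiming safe mode is off gets ignored:
print(handle_settings_update({"safe_mode": "off"}, user_may_disable_safe_mode=False))
# -> {'safe_mode': True}
```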


Verizon is just a terrible company all around, and they certainly don't know how to run a big social network. Everyone was predicting this when the acquisition happened, and it's come to pass exactly as the critics said it would.

Verizon may well be OK with killing Tumblr. It's far outside their core competency, and unlike their core competency, it's hard to monetize.


I don't see how Verizon had another option?

With SESTA/FOSTA, they could be held liable for anything too sexual. I imagine Tumblr, even as an independent company, would have done the same.


The affected Tumblr communities weren’t enabling sex trafficking.

This is just puritanical American values being imposed.


> The affected Tumblr communities weren’t enabling sex trafficking

Of course they weren't, but SESTA outlawed a whole lot more than that. "Sex trafficking" was just the excuse SESTA's authors gave.


> The affected Tumblr communities weren’t enabling sex trafficking.

How is Tumblr meant to economically filter those that do from those that don't? Algorithmic detection of pornography of any sort is a much easier (cheaper) problem to solve.


You're claiming that everyone is legally being forced into ceasing distribution of pornography. Clearly this isn't happening. It's only Tumblr that is doing this, suspiciously soon after being acquired. There are no indications this would have happened otherwise.

> You're claiming that everyone is legally being forced into ceasing distribution of pornography. Clearly this isn't happening. It's only Tumblr that is doing this, suspiciously soon after being acquired

Tumblr was acquired over five years ago.

But moreover, it's not only Tumblr that has banned pornography or nudity, drastically altered their content policies, or even shut down altogether in the last twelve months since SESTA passed. They happen to be one of the biggest, but I have literally lost track of how many other sites have responded in a similarly drastic way.

SESTA includes incredibly onerous penalties, including jail time, for even accidental noncompliance. On the other hand, there is no legal penalty for being overly cautious. So unsurprisingly, most websites choose the less risky and scary option.


> Tumblr was acquired over five years ago.

And then again, less than two years ago. Which acquisition do you think GP referred to?


> And then again, less than two years ago. Which acquisition do you think GP referred to?

If you'd like to read the rest of my comment, I explained why neither acquisition is relevant.


> Sell Tumblr while it is still alive?

As soon as SESTA passed, Tumblr's days were numbered. Any potential buyer could see that as easily as Verizon could. (And even if they couldn't, Verizon would have to disclose it during due diligence or else they would be opening themselves up to a massive lawsuit).


What if they sold it to a non-American buyer, who converted Tumblr into a company with no American legal exposure?

If the right offer were made, I find it hard to believe Verizon would be unwilling to sell.

Just a month ago there was a HN thread about sex censorship killing off safe spaces for LGBTQ folks and more. https://news.ycombinator.com/item?id=19061135

Tumblr really didn't do anyone justice. They're clearly reaping what they sowed. Unfortunately, it's a lose/lose/lose situation for all.


> They're clearly reaping what they sowed.

They don't want huge traffic that they can't monetize. Unless they release revenue numbers that have also fallen by a lot, then it's hard to say what they've reaped.


For a website that clearly depends on being popular to have any hope of monetizing, a 30% traffic hit is huge, and an indication that the ban is a death sentence.

People made similar remarks as yours when Google+ imposed the real name policy. Google+ is now dead.

HN tends to think of all sorts of technical or marketing reasons for why platforms live or die; however, the best reasons for why social platforms live or die are always social ones.


Tumblr didn't do it because they wanted to. They were forced by Apple removing their app from the App Store.

Apparently they had already decided to do it before the App Store thing happened.

https://www.vox.com/the-goods/2018/12/4/18126112/tumblr-porn...

> But a former staff engineer, who recently left Tumblr and asked to remain anonymous for professional reasons, tells Vox that the NSFW ban was “in the works for about six months as an official project,” adding that it was given additional resources and named “Project X” in September, shortly before it was announced to the rest of the company at an all-hands meeting. “[The NSFW ban] was going to happen anyway,” the former engineer told me. “Verizon pushed it out the door after the child pornography thing and made the deadline sooner,” but the real problem was always that Verizon couldn’t sell ads next to porn.


Why do companies refuse to have ads on a well-known website on which some posts happen to be porn? Is it an American thing?

It’s not an American thing, most advertisers also refuse to advertise next to content that references extreme violence (think ISIS beheadings) and gambling content. I worked on the ads team at a company, and we constantly had to tweak our filters to ensure ads didn’t get served next to “adult” content.

> most advertisers also refuse to advertise next to content that references extreme violence

Apparently the Daily Mail Online is running this morning's NZ murder video on its front page. Right next to the ads.


Meanwhile I have to watch a pre-roll ad to see reportage about a terrorist attack.

> But a former staff engineer, who recently left Tumblr and asked to remain anonymous for professional reasons, tells Vox that the NSFW ban was “in the works for about six months as an official project,

Ah, so shortly after SESTA was signed into law.


Tumblr's porn ban has literally nothing at all to do with the App Store. The porn ban is because of SESTA/FOSTA.

That was (according to reports) because of the prevalence of child pornography on their platform, which they apparently didn't have the tools or staff to police. A blanket ban on pornographic content was cheaper and/or easier.

Presumably child porn was already banned on Tumblr, so how does banning all other porn as well help?

> I believe they're using some automated filtering (see e.g. https://www.wired.com/story/tumblr-porn-ai-adult-content/). I assume it's easier to train an AI to recognise pornography generally, rather than a specific kind -- and now that I think about it, the process of training an AI to recognise child porn sounds extremely unpleasant and legally dubious.

Quite the opposite - it's much easier to allow pornography and ban only child pornography than it is to create an automated system to detect pornography.

PhotoDNA makes it easy to find matches, which can then be used to uncover the networks of people posting child pornography, which are then added back to the database. It doesn't require any computer vision at all.

By contrast, banning all pornography does require computer vision of some sort, and that's much more difficult, as evidenced by how terrible the new Tumblr NSFW content detector is.
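As a rough illustration of that hash-database workflow: real deployments use a robust perceptual hash like PhotoDNA, which survives resizing and re-encoding, whereas this simplified Python sketch uses an exact hash, and the "database" values here are invented placeholders:

```python
import hashlib

# Simplified sketch of hash-matching against a known-image database.
# Real systems use a perceptual hash (PhotoDNA) so that resized or
# re-encoded copies still match; plain SHA-256 only matches exact
# bytes, but the workflow is the same and needs no computer vision.

# Placeholder database; in practice this comes from an industry source.
KNOWN_BAD_HASHES = {hashlib.sha256(b"example flagged image bytes").hexdigest()}

def scan_upload(image_bytes):
    """Return True if the upload matches a known hash and should be
    blocked, reported, and fed into follow-up network analysis."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES

print(scan_upload(b"example flagged image bytes"))  # -> True
print(scan_upload(b"some innocuous image bytes"))   # -> False
```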


Maybe I'm misunderstanding, but PhotoDNA seems to be a tool for identifying reposts and edits of already known illegal images -- those that have previously been found and had their hashes entered into the database. So if Tumblr had a problem with original content (which could include underage users posting explicit pictures of themselves, as well as the worse things we think of when we hear the phrase 'child porn'), I don't think that would help.

edit: I found this quote from an article[1] published last November: "In its updated statement, Tumblr said that while every image uploaded to the platform is “scanned against an industry database of child sexual abuse material” to filter out explicit images, a “routine audit” discovered content that was absent from the database, allowing it to slip through the filter."

So it looks like they might have already been using PhotoDNA.

[1] https://www.theverge.com/2018/11/20/18104366/tumblr-ios-app-...


> So it looks like they might have already been using PhotoDNA.

Of course they were working with NCMEC (and therefore using PhotoDNA, which is provided through NCMEC); it would have been legal suicide for them not to. They were doing this for years, before they were even acquired by Yahoo.

> Maybe I'm misunderstanding, but PhotoDNA seems to be a tool for identifying reposts and edits of already known illegal images -- those that have previously been found and had their hashes entered into the database. So if Tumblr had a problem with original content (which could include underage users posting explicit pictures of themselves, as well as the worse things we think of when we hear the phrase 'child porn'), I don't think that would help.

How many people do you think have Tumblr accounts where they post only new, never-before-seen pornography of underage children, which has never been posted on any other blog before, and never once post a single photo that has been previously identified as child pornography[0]?

Of those, how many do you think are able to ensure that none of their followers ever repost/reblog that photo on any other blog which also happens to contain at least one other photo that's been previously identified as child pornography?

Of those, how many do you think are able to ensure that none of their followers have been previously identified as highly-connected nodes in the underground networks dedicated to sharing child pornography, and therefore pretty much exclusively post child pornography or follow people who they believe are likely to post child pornography?

Of those, how many do you think are able to ensure that nobody who ever sees one of those photos ever decides to download it and upload it as an attachment to an unsent email in their Gmail drafts folder, or put it in a private Dropbox folder (shared with nobody), or send it on any of the many "cloud" services which also actively monitor for child pornography and do the same sorts of graph analysis to identify people who are using their services to store or share child pornography?

Again, once you understand how these underground networks work, and once you realize that this is mostly a problem of social graph analysis and not image recognition/classification, you realize that it's very easy to solve.

[0] Remember, not just new to Tumblr, but new to everyone who is working with NCMEC/ICMEC (which means every large and not-so-large company in the entire world that hosts user-provided content).
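The social-graph analysis described above can be sketched as a simple breadth-first walk outward from confirmed accounts. This is a toy illustration only; the account names and the distance threshold are invented:

```python
from collections import deque

# Toy sketch of the graph-analysis idea: starting from accounts already
# confirmed to have posted known-bad material, walk the follow/reblog
# graph outward and rank nearby accounts for human review.
# No image classification is required at any step.

def accounts_to_review(graph, confirmed_bad, max_distance=2):
    """graph: account -> set of connected accounts (follows/reblogs).
    Returns {account: distance} for accounts within max_distance of
    any confirmed-bad account, excluding the bad accounts themselves."""
    distances = {account: 0 for account in confirmed_bad}
    queue = deque(confirmed_bad)
    while queue:
        current = queue.popleft()
        if distances[current] == max_distance:
            continue
        for neighbor in graph.get(current, ()):
            if neighbor not in distances:
                distances[neighbor] = distances[current] + 1
                queue.append(neighbor)
    return {a: d for a, d in distances.items() if a not in confirmed_bad}

graph = {
    "bad_account": {"reblogger_1", "reblogger_2"},
    "reblogger_1": {"bad_account", "bystander"},
}
print(sorted(accounts_to_review(graph, {"bad_account"}).items()))
# -> [('bystander', 2), ('reblogger_1', 1), ('reblogger_2', 1)]
```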


You are obviously better informed than me, but if the problem is so easy to solve, and they were already using the relevant tool, why was it still a problem? Are you arguing that they failed through incompetence, or that they were lying/exaggerating as an excuse to ban all pornography for other reasons?

> Are you arguing that they failed through incompetence, or that they were lying/exaggerating as an excuse to ban all pornography for other reasons?

Yes, SESTA was the real reason that they banned pornography. SESTA imposed way too much liability on them to be able to support it.

It had nothing to do with child porn. They are still liable for child porn. They still have to work with NCMEC and go through the same process for identifying, reporting, and removing child porn. None of that has changed.


Since people don't go through a visually obvious physical change at the moment of attaining legal adulthood, banning all porn leaves fewer cracks for child porn to get through.

I'm not saying it's necessarily the right policy, but it does make a difference.


If they cared about making the filtering easy then I'm not seeing why they banned drawings too.

If you look at a lot of this ID'ing software, it grabs drawings too. Add some OK drawings to Photos on iOS and watch it add them to the Faces list.

Interesting, because it seems like it will pick up drawings for free, since it doesn't distinguish between photos and drawings (Photos on iOS, for example, will pick up line drawings of people). I actually thought it would be a harder problem to exclude the drawings, based on some of these facial-ID programs.

A lot of drawings don't look at all like photos, and they are banned too. That's definitely not for free.

I really think the child pornography angle was a convenient lie to push through the banning of porn on Tumblr. It's my view that Verizon didn't want it, so it had to go away.

Child pornography online isn't a new problem and Tumblr must have been negligent in policing it.


I don't believe it was a convenient lie. I personally reported A LOT of posts and entire blogs for very very very obviously inappropriate underage content, yet they still remained. Tumblr was quickly becoming a cesspool they couldn't clean.

Every photo site since the beginning of user uploads has had this problem. This isn't an issue if you decide to invest the resources into fixing it. That means hiring people to look at photos. It means hooking into the NCMEC / FBI's database of known hashes. It's buying (not even building) a content filter.

These things exist. Tumblr decided not to bother, which forced them to panic and nuke the site from orbit.

It’s a mess of their own making.


> Every photo site since the beginning of user uploads has had this problem. This isn't an issue if you decide to invest the resources into fixing it. That means hiring people to look at photos. It means hooking into the NCMEC / FBI's database of known hashes. It's buying (not even building) a content filter. These things exist. Tumblr decided not to bother, which forced them to panic and nuke the site from orbit.

Tumblr had an entire team that did literally all of those things, for years. They did not "decide not to bother".


Did they really bother though? I mean, I’m sure they have people employed, but something is seriously mismanaged. How did they let it get so bad? This should have never gotten to the point that another company had to say, “WTF is with all the child porn?”

Saying you have a team working on something, but not giving them appropriate resources and not taking the problem seriously, is the same as not bothering. It's the same as saying, "We take your concerns seriously," and then throwing the complaint in the trash. Their actions betray their true feelings.


Or maybe they don't want to subject their employees to having to police child pornography.

...child pornography was previously banned (duh), and it's still banned (duh).

Their employees still need to moderate their platform and ensure there's no child porn. Banning regular, legal porn has zero bearing on this.


I imagine it would take careful and meticulous observation to differentiate between legal porn and almost-legal porn. Dissolving the platform of porn entirely would streamline this process and potentially be automated. Banning regular porn has a greater than zero bearing on moderating child porn, and you're being disingenuous in pretending otherwise.

> I imagine it would take careful and meticulous observation to differentiate between legal porn and almost-legal porn. Dissolving the platform of porn entirely would streamline this process and potentially be automated. Banning regular porn has a greater than zero bearing on moderating child porn, and you're being disingenuous in pretending otherwise.

Why do people keep repeating this?

No, it's much more difficult to ban all pornography than it is to ban only child porn. One requires analyzing the content of photos and the other doesn't.