Jack Dorsey Has No Clue What He Wants

A Q&A with Twitter's CEO on right-wing extremism, Candace Owens, and what he'd do if the president called on his followers to murder journalists.

A conversation with Twitter CEO Jack Dorsey can be incredibly disorienting. Not because he’s particularly clever or thought-provoking, but because he sounds like he should be. He takes long pauses before he speaks. He furrows his brow, setting you up for a considered response from the man many have called a genius. The words themselves sound like they should probably mean something, too. Dorsey is just hard enough to follow that it’s easy to assume that any confusion is your own fault, and that if you just listen a little more or think a little harder, whatever he’s saying will finally start to make sense.

Whether Dorsey does this all deliberately or not, the reason his impassioned defenses of Twitter sound like gibberish is because they are.

Back in October, I sent a message to Dorsey to see if he’d be willing to sit down for an interview. I didn’t really expect a response, partially because he’d just finished a media tour a few months prior but mostly because my previous DMs to him looked like this:

Much to my surprise, he agreed.

Dorsey was busy in the time between my original ask and when we finally sat down together last week. Over the past few months, he's been accused of hate-mongering in India, has accidentally ignored an ongoing genocide in Myanmar and has been revealed to have consulted with far-right fringe figure Ali Akbar over the site's much-criticized decision this past August not to ban Alex Jones (Twitter did finally ban Jones a month later). We had quite a bit to discuss.

My only real goal was to get Dorsey to speak in specifics, about anything. In almost every interview he does, he’ll lament his past mistakes and talk about his various high-minded visions for improving the platform: improving conversational health, reducing echo chambers, increasing transparency and about 10 other rote, buzzy phrases.

But press him for a clear, unambiguous example of nearly anything, and Dorsey shuts down. At one point, for instance, Dorsey explained that Twitter was working toward using machine learning to spot harassment before it’s even reported. When asked how Twitter is handling the problem in the meantime, Dorsey had this to say:

Most of our priority right now in terms of health, which is the No. 1 priority of the company, is around being proactive. How do we remove the burden from the victims or bystanders from reporting in the first place? It’s way too mechanical. It’s way too much work. ... But ultimately, we want to make sure that the number of reports that we receive is trending downward. And that will be because of two reasons. One, people are seeing far less abuse or harassment or other things that are against the terms of service. Or that we’re being more proactive about it. So we want to do both. So a lot of our work is that, and then better prioritization in the meantime. A lot more transparency, clearer actions within the product.

Those are certainly words, though none of them appeared to answer my question. It took some more prodding before Dorsey finally pointed to a specific action (that has not yet been implemented but that Twitter is... thinking about? It’s unclear):

What do you mean by clearer actions within the product?

Just, you know, finding the report button isn’t the most obvious and intuitive right now. So that certainly slows things down.

But what’s the alternative to that?

Making it more obvious? I don’t ... I mean, I’m not going to ... I don’t know what it looks like right now, but we know what’s wrong with it. So, you know, that’s what we’re working on.

In other words, the most the CEO of Twitter was able to tell me about specific steps being taken to solve the rampant, site-wide harassment problem that’s plagued the platform for years is that they’re looking into maybe making the report button a little bigger, eventually.

Or consider later, when I asked whether Trump tweeting an explicit call for murder would be grounds for removal. Just as he appeared ready to answer what seemed like an easy question, he caught himself. "That would be a violent threat," he started. "We'd definitely ... You know we're in constant communication with all governments around the world. So we'd certainly talk about it."

They would certainly talk about it.

Similarly, Dorsey knows that he's supposed to say Twitter has made some mistakes in the past in terms of its priorities, but stops short of taking responsibility for the platform itself. In our conversation, I asked him about Twitter attaching "#falseflag" to news about this fall's bomb scare.

Dorsey’s initial response was “we didn’t add that,” before trying to explain it away as people “gaming the system.” Twitter’s algorithm promoting misinformation isn’t some grand manipulation of the platform, though. It’s the platform doing what it was built for. It makes sense, then, that Dorsey finds himself unable to talk about specific solutions. How can you fix something when you’re not even sure what the actual problem is?

It seems clear that Twitter’s current iteration, a machine learning-curated hell, isn’t the website Jack Dorsey wants. He just refuses to say what that website actually is.

The conversation has been lightly edited for grammar and clarity. And if you work at Twitter, please feel free to get in touch.

First, I noticed you unfollowed me on Twitter within the past couple of months or so.

I did?

Yes, you did. Was there a reason why this happened?

I, uh, was probably going through my list. I’m not sure why.

Hm.

I don’t know. I probably ... I probably went through a bunch. You know, I always clean things up.

I see. Well, I want to talk a little bit about some of the work you’ve been doing with conservatives. Do you think there’s any merit to their claim of bias against conservatives at Twitter? Basically I think it’s conservatives.

Well, I think it would be easy to believe a perspective if you only look at particular things. And you look at actions based on who you follow and whatnot.

What do you mean?

Well, I mean, people follow people that violate our terms of service and who we take action upon, and if you’re only following those people and you’re not following anyone else, you tend to see that and the perspective is strengthened. As I said in front of Congress, we do make some mistakes where our algorithms can be super aggressive. But do we build bias into our systems? No, it’s not in our policies, it’s not in our enforcement, and it’s not in our algorithms. When we discover it, we remove it. And that’s a field of research that we need to continue to invest in, bias in algorithms. So the main thing that we’re focused on is how we stay transparent with our actions and continue to be impartial — not neutral, but impartial.

Right. So it sounds like you’re saying that there isn’t any inherent bias in the platform itself against conservatives, but you’ve done a lot to reach out to conservative groups. Are you just trying to mollify them? What’s your ultimate goal?

No, not at all. We did a bunch of conversations with the folks in the media that we — that certainly I — have never talked to before. So in the past, we do tend to stick with more of the financial press or the tech press. Both of which tend to be more on the other side of the spectrum. So, our default is probably to go to that, so we don’t inherently reach out to everyone and talk to everyone. So one is having conversations, two is getting as much perspective as possible. To me, I think it’s useful to hear perspective, even if I don’t believe it, just to hear what other people are saying. I value that, so I’m not going to stop. But it’s not an agenda to mollify, it’s an agenda to listen and hear, like, what’s being said and why it’s being said.

There was a Wall Street Journal article the other day that came out about you speaking with Ali Akbar about the Alex Jones stuff. Did you consult him about it?

You know, during that time, I reached out to a bunch of people. You want to get as many thoughts as possible.

What people besides him?

I’m not going to disclose, but as many people as I know that would have opinions on the issue. And I want to make sure that I’m seeing the entire spectrum. People who I know who would probably be more favorable towards his situation, and those who are completely against it. And I want to hear everything in between. That’s how you make good decisions, ultimately.

Well, Ali Akbar’s had a series of tweets, I’m just going to read a couple excerpts for you. “Anti-white comments from Jewish anti-Trump commentator Bill Kristol.” “Jake Tapper who is a Jewish left-leaning journalist.” “The conservative Jewish publication The Daily Wire.” He has a whole series of these, and he seems like a very specific kind of figure to reach out to. Were you aware of his past comments and his tendency to identify which members of the media are Jews?

I don’t act on all of his comments. I listen, and I think that’s the most important thing. I was introduced to him by a friend, and you know, he’s got interesting points. I don’t obviously agree with most. But, I think the perspective is interesting.

But do you think that by virtue of who you are and the fact that you, Jack Dorsey, are seeking input from this person, that it elevates him or validates his views?

No, no. I mean, if I followed his direction, then certainly. But it’s just input.

Well, before you banned Alex Jones, you said, “We’re going to hold Jones to the same standard we hold every account, not taking one-off actions to make us feel good in the short term and adding fuel to new conspiracy theories.” I’m assuming those conspiracy theories were the allegations of bias against conservatives on the platform.

Well, I think the conspiracy theory was that all companies were working in concert together to deplatform.

Oh, right, so general conspiracy theories about conservatives being targeted in media.

Right, this is what I was referring to, in terms of all platforms working together. We definitely collaborate on methods, but particular actions, we don’t.

But it seems like the desire to avoid fueling these was in the back of your head when thinking about these decisions. Is that accurate?

No. What I said is that there was an active conspiracy theory around all these companies working together. We want to state that we have a Terms of Service and that we are going to follow it. Then when we find that we need to take action, we’ll take action. But there’s no decision other than making sure that we stay true to our enforcement policies.

I also wanted to ask you a little bit about the apology you made to Candace Owens a while back. You said, “Hi Candace, I want to apologize for our labeling you ‘far-right.’ Team completed a full review of how this was published and why we corrected far too late.” I think you’d be kind of hard-pressed to find anyone who would say Candace Owens isn’t far-right, and I think she would agree with that if she was being honest. But even if you dispute that, getting an apology from the CEO of Twitter for something like this seems like an extraordinary step. I’m curious why you decided to intervene in this particular instance directly.

Well, I apologized because we generally shouldn’t be categorizing people. Our curation team should not be using our descriptions to categorize people. We should be describing what happened. We should be describing the instances, but we shouldn’t be categorizing people ourselves.

But even just calling someone far-right isn’t inherently negative.

I’m not saying it’s a negative. I’m saying we shouldn’t do it, even if it was a positive, we shouldn’t do it. We need to be descriptive as part of our curation guidelines, descriptive of what happened. Like, our whole role in that is to find the interesting tweets that show a story from all perspectives. The moment that we inject any sort of categorization, we’ve lost that promise.

You don’t think even just identifying someone as a journalist or an actor, just in terms of—

That’s dIfferent from what you said.

Is it?

That’s a role.

But it’s categorizing someone.

That’s a profession. That’s the title that they’re taking on that they self-proclaimed.

But far-right commentator is her profession.

Does she self-proclaim that?

I mean, she would probably call herself a conservative commentator, but either way it’s just a difference of degree.

I don’t know. When people self-proclaim something we might be more open to using it, but generally we should avoid categorizing people because we can be descriptive of the events.

Alright, well a lot of people — myself included — were frustrated to see that because, for instance, if someone tweets out our home address or phone number, it’s a crapshoot as to whether or not Twitter is going to do anything about it.

That’s unacceptable, as well.

Right, and we’ll get emails back from Twitter support saying that it doesn’t violate the private information policy.

It should. But again, we’re not in a great state right now with our systems because they rely upon reporting. So we’re not going to take any action unless it’s reported. And then we take action, and we have a whole queue that we have to get through. We’re moving to a world that’s a lot more proactive by utilizing machine learning. But that will have errors and mistakes. So we don’t feel good about anyone being doxed, certainly. We want to catch everything as much as we can, but there are limitations to how much we can do.

But what I’m saying is, people will very publicly share these instances when they happen, of Twitter saying that their address being posted doesn’t violate the policy. And I’m sure you’ve seen some of them before. So why did Candace Owens’ outrage about being labeled far-right compel you to address that so publicly, whereas the others might not have?

Well, we make other apologies, as well. But this was ... You have to keep in mind, you know, someone doxing someone else on the platform and us missing it is a huge miss for us, and we should correct it as quickly as possible. But we took something and broadcast it to everyone, everyone on the service, in a way that was against our guidelines. So, that’s why.

While you’re working on being more proactive about curbing harassment, there’s still the instances where it is being reported and not acted upon. What happens to that in the meantime?

So, I mean, a lot of our work right now is looking at the prioritization of the queues and making sure that, No. 1, we’re protecting someone’s physical safety as much as we can and understanding the offline ramifications of using our service. So that’s work in flight. Most of our priority right now in terms of health, which is the No. 1 priority of the company, is around being proactive. How do we remove the burden from the victims or bystanders from reporting in the first place? It’s way too mechanical. It’s way too much work. If people have to report, we should see it as a failure. If they have to mute and block that’s another degree, it’s a little bit less. But ultimately, we want to make sure that the number of reports that we receive is trending downward. And that will be because of two reasons. One, people are seeing far less abuse or harassment or other things that are against the terms of service. Or that we’re being more proactive about it. So we want to do both. So a lot of our work is that, and then better prioritization in the meantime. A lot more transparency, clearer actions within the product.

What do you mean by clearer actions within the product?

Just, you know, finding the report button isn’t the most obvious and intuitive right now. So that certainly slows things down.

But what’s the alternative to that?

Making it more obvious? I don’t ... I mean, I’m not going to ... I don’t know what it looks like right now, but we know what’s wrong with it. So, you know, that’s what we’re working on.

And what do you mean exactly when you’re talking about the health of the platform?

So it’s this concept of conversational health. So it’s what’s pinned to my profile. We kicked off this initiative to first try to measure the health of conversation. And then second, as we build solutions around it, how do we tell if we’re doing the right work? Because we don’t have a lot of great metrics as to whether the things that we’re doing are working well.

But how do you qualify conversational health?

Well, it’s in the thread. but I’ll describe it as ... We can measure the level of toxicity, for instance, within a conversation. We can measure the level of perspective?

Right, but how? What makes something toxic?

We have algorithms that can determine, based on the network, based on what people are doing elsewhere, based on the number of reports, based on mutes and blocks, whether this is a conversation that you’d want to stay in or you’d want to walk away from. And that doesn’t inform any direct action, but it can inform enforcement actions and whatnot, like when a human has to actually review. So toxicity is one such metric, we call it receptivity. Like, are the members of the conversation receptive to each other? We have variety of perspective as an indicator. We have shared reality.

How do you determine someone’s perspective?

Variety of perspective.

Right.

You have to ... Like, this is all conversations.

Right, but I’m assuming that comes from which people are involved in conversations, or is that not right?

Um, potentially. Right now we're just trying to determine what the indicators are. Like, temperature on your body — that indicates whether you're sick or not, right? So if you were to apply the same concept to conversations, what are the indicators of a healthy conversation versus a toxic conversation? That's what we're trying to figure out. We did this whole thing with outside researchers and an RFP to get external help to determine these indicators. But this is all in the health thread, all the details.

In terms of Twitter itself promoting something, there have been issues in the past. Most recently, when all these prominent Democrats were receiving homemade bombs, Twitter, in the related search terms, added "#falseflag" to the bomb scare. Things like this happen somewhat—

Well, that’s a related hashtag, isn’t it?

Right.

We didn’t add that.

Well, you did, because Twitter’s algorithm picks it up. Is Twitter monitoring for when—

Yeah, we’re monitoring. We’re monitoring it. If we see something like that, we... We’ll act on it. But these are the algorithms. We need to constantly improve them and evolve them. They’re not going to be perfect, right?

Is this all just reactive? Or is there an effort when this happens to—

Oh, there’s a bunch you don’t see because we caught them. I would say probably the majority, but every now and then there’s going to be a new vector that we haven’t trained our algorithms around. So we have to be reactive in those cases.

But do you feel any sense of responsibility for amplifying this sort of misinformation? Because this isn’t just people saying something on the platform, it’s the platform elevating whatever it is. And even if you catch most of them, there’s still ones that get through and that have very real consequences.

Yeah, I mean, we feel a responsibility when people game our system and take advantage of it. So you know, this is not a ... That said, we’ll never arrive at a perfect solution where our system is un-gameable.

Right, but I don’t know if this is entirely gaming the system.

Oh, it’s gaming.

I mean, there’s lots of people who genuinely believe this, who genuinely think and want people to know that it’s false flag but aren’t necessarily trying to coordinate. And then Twitter picks it up because that’s what it’s built to do, to pick up what people are talking about.

Right, yeah, we do. And we should. We should show what people are talking about, but we need to be careful in terms of what links we make and what we surface.

I know Twitter just introduced a new tool for political ad transparency in India, but there’s still sort of this question of what Twitter will do if politicians actively misuse the platform. Has that ever happened where a politician has been removed, and does Twitter have any plans for what to do if that happens?

Um, I don’t know about the cases. We can ... we can figure that out for you. But yeah, I mean, we’re preparing for the Indian elections. It’s going to be the biggest democratic election in the world. And Twitter is heavily used by the influencers and the politicians and the government in India, so we’re very fortunate in that degree. And we want to make sure that we are doing what we can to make sure that we maintain the integrity of the conversation around the election.

Right. But what do you do when it’s the politicians that are promoting misinformation or—

We take action.

So then is there anything that, say, Donald Trump could do that would qualify as a misuse? Because I know the newsworthy aspect of it outweighs a lot of that. But is there anything that he could do that would qualify as misusing the platform, regardless of newsworthiness?

Yeah, I mean, we’ve talked about this a lot, so I’m not going to rehash it. We believe it’s important that the world sees how global leaders think and how they act. And we think the conversation that ensues around that is critical.

OK, but if Trump tweeted out asking each of his followers to murder one journalist, would you remove him?

That would be a violent threat. We’d definitely ... You know we’re in constant communication with all governments around the world. So we’d certainly talk about it.

OK, but if he did that, would that be grounds to—

I’m not going to talk about particulars. We’ve established protocol, it’s transparent. It’s out there for everyone to read. We have, independent of the U.S. president, we have conversations with all governments. It’s not just limited to this one.

All right, well, I want to move on to some of the aftermath from your trip to Myanmar. Did anyone look over those tweets before they went out or was that just from you?

That’s from me.

Were you surprised at all by how people reacted, or were you taken aback at all?

Um ... No. I mean, I think ... I wasn’t overly surprised. You know, my intention was to share my experience, period.

In one of the tweets, you said part of the meditation technique was to answer the question, “How do I stop suffering?” I’m assuming that means in terms of the individual?

Well, no, that was ... If you read the tweet, it was Buddha’s question to himself.

Right. But do you realize how that sounds to be repeating that question and talking about ending suffering as Jack Dorsey, the billionaire, while the U.N. is calling for military officials in this country to be prosecuted for genocide? I’m just wondering if you see how your role is actually larger than just yourself.

I do, but I’m not gonna change the practice because of it and what people say. Like, this is the practice that Buddha laid out, and I’m not going to change it just because I have this particular role. I’m sharing what I practiced and what I experienced.

I guess what I’m asking is more ... do you feel like you have more of a responsibility now, because of who you are, to bring up these topics because you have this huge platform and influence?

Yeah. I mean, I would love to go back and really understand that dynamic. I went for one particular reason, which was meditation. And that's what I was sharing, that one thing, right? It wasn't to represent Twitter, or—

But you do represent Twitter.

I realize that, but I’m also human. And this practice is good for me and helps me learn and grow. So that’s what I was sharing, and certainly act on all the feedback and everything that was going on. But that wasn’t the point of this particular visit.

Would you do anything differently if you were doing it again?

In terms of the practice? No.

In terms of the practice, yeah, or how you discussed it when you came back.

Yeah, I mean, not bringing it up was a miss, but I really wanted to focus the thread on what I experienced in the practice. I think I did a good job with that.

There was also another incident a few months ago in India where you got in some trouble for holding a poster that said "Smash Brahmanical Patriarchy." A lot of people see these as a sort of institutional ignorance, or that Twitter doesn't really understand the responsibility of its role in the world. How would you respond to that?

Well, I think, you know, we are always learning more about our responsibility in the world. But in that particular case, I was given a poster and then someone immediately said, "Let's take a picture."

Sure, but because of, again, who you are, are you ever more careful about any sort of photo you’re in because of how your image could be used?

Well, obviously not. I mean, what do I do, not accept anything from anyone? Not ever take pictures? I don’t know. What’s the solution?

I mean, that, I guess. But also, I know you talk a lot about trying to raise up different perspectives on the platform. How are you going to account for that?

Well, the biggest thing I think we need to combat is filter bubbles and echo chambers. So, as an example, during Brexit, if you were to follow only Boris Johnson and Nigel Farage and all these other folks, you would only see tweets about reasons to leave. If we enabled you to do something like following a hashtag, like, #voteleave, 90 percent would be reasons to leave, but 10 percent would be reasons to stay. In the current mechanics of the system, we don't allow that reality. We don't even allow different perspectives because you have to do the work to find the other accounts. So you could say, well, people could just go to the hashtag. People don't do that. It's not easy for them, and they're only going to do what's easy. But we don't make it easy. So, that is one simple thing that we could do to increase the amount and the variety of perspective. Where, it might be that they see that, they follow the #voteleave tag, and they see the reasons to stay and that further emboldens them into leaving. Or it might be the case that they say, wait a minute, why are we doing this? We don't know. But we haven't even given people a chance to decide and have that experience.

On a different note, I think you were out of the country for this, but were you made aware that Laura Loomer handcuffed herself to the Twitter building in New York?

I was.

What’d you think about her protest?

Um ... [laughter] I don’t know. I mean, she believes that we’ve done her wrong, and I respect the fight and pushing back on us.

Well, she wasn’t fighting that hard, she only handcuffed herself to one of the doors.

I don’t know the details but, um, yeah. I appreciate when people speak up when they think that, you know, someone has done them wrong. Speaking truth to power is something that has flourished on our platform, and she believes we’ve done wrong by her, and she took action. So, I respect that. I don’t agree with most of the things she says, but, you know.

And is there any situation at all in which you would decide to delete the site?

Now I remember why I unfollowed you! Because that’s all you DM me, “delete the site.”

Well, that’s ... Maybe half the time.

But how is that going to help?

That’s the question, though. Is there a situation where you would just decide that it’s better to be free of this?

Should we just delete all the negative things in the world?

Are you saying Twitter is a negative thing?

Well, that’s what you’re assuming when you say that.

Not necessarily.

What would you use if we deleted it?

I don’t know, I’d have a lot more time on my hands.

What would you do with that time?

I really can’t even begin to imagine.

I just ... I don’t think it’s constructive. I’d rather hear constructive ideas on what we could fix. We get a lot of complaints. We get a lot of issues, and they’re all coming from a good place of good intent. But we have to dig under and figure out what the patterns are that we need to prioritize and fix. Because we can only do so much at once. So when somebody constantly tells me, “delete the site,” it’s just not helpful. Whereas other folks tell me, “Hey, you know if you do this one thing you would just have a massive impact.”

Well, deleting the site would have a massive impact — but that’s fine, we can agree to disagree. I know we’re running out of time, but I just have two more questions. I know there was a report recently that you mailed some of your beard hair to Azealia Banks. Did that happen?

No.

You didn’t?

No.

That’s disappointing. And last, I’m just wondering, what use of your platform has horrified you the most, or that you didn’t expect the most?

I mean that we weren’t expecting any of the abuse and harassment, and just the ways that people have weaponized the platform. So, all that is horrible. And you know, we feel bad about that and we feel responsible about it. So that’s that’s what we intend to fix.
