2022.11.09 - Elon Musk Twitter Space Transcript

a guest
Nov 10th, 2022
547
0
Never
Not a member of Pastebin yet? Sign Up, it unlocks many cool features!
text 40.15 KB | None | 0 0
  1. Okay. Right on time. Hello, everybody. Thank you for joining us. My name is Robin Wheeler.
  2. And I have been at Twitter for over a decade on the sales team in various roles. And I
  3. am excited to be here and joined by Yoel Roth, who is the head of trust and safety, who's
  4. also been at Twitter for a very long time, and Elon Musk, our new CEO, chief twit, chief
  5. complaint officer. What else are you calling yourself today? Well, I'm on the complaint
  6. hotline. There you go. Yeah. Listening to concerns and trying to address them. And we
  7. appreciate that. Yeah, absolutely. That's exactly what today is. I wanted to kick off
  8. just by saying to all of our advertising partners, our content partners that are joining today,
  9. I want to reiterate that our commitment to all of you has not changed. Our teams are
  10. out there still in place and are committed and dedicated to providing the service that
  11. you've learned to love from this platform. So that is true. And our policies around content
  12. moderation and brand safety have not changed. That being said, there is a lot of change
  13. at Twitter. And a lot of it is very exciting. And that's why we want to have Elon here talking
  14. to all of you. He's spent, you know, the last several days talking to partners and answering
  15. their questions. And it's just really critical for him to hear directly from you. Now, I'm
  16. also today representing the advertising and partner community. So we have tons of
  17. questions that we've been gathering, that our teams are gathering, and I will be representing
  18. those. That said, we've got some speakers joining as well that I'll call upon directly
  19. from the community. Our teams are getting questions in real time as well that we will
  20. be asking. So we want this to be a very open and honest dialogue. And we're excited. So
  21. with that, I want to kick off Elon with a question. Just, you know, it's been
  22. what 14 days almost. What has been your biggest learning in that time?
  23. Well, I think the biggest thing that I've come to learn is that there's tremendous potential
  24. that's untapped for Twitter and that there are a lot of really talented people at Twitter
  25. who I think can take the company in a lot of interesting new directions. We really want
  26. to be, as I've mentioned before, the sort of digital town square that is as inclusive
  27. as possible, meaning: can we get 80 percent of humanity on Twitter and talking, and
  28. ideally in a sort of positive way? Can we exchange words instead of having violence, and maybe
  30. once in a while, people change their minds. You know, the overarching goal here is like,
  31. how can we make Twitter a force for good for civilization? And, you know, we'll just
  32. keep changing and adapting until that is the outcome achieved. You know, people
  33. should look back on Twitter or consider Twitter to be a good thing in the world. Like I said,
  34. something that, for civilization, you're glad exists. And, you know, as I said
  35. in some of my tweets, I think we want to just be in vigorous pursuit of the truth, like
  36. to be somewhat in the business of truth. Now, truth can be sometimes a nebulous concept,
  37. but we can certainly aspire towards it. And I think even if we can't get there completely,
  38. at least trying our hardest to get there is a worthwhile endeavor. So this
  39. is a big part of why I think it's important to try to get as many people as possible verified.
  40. So and then I want to kind of explain a bit about the sort of blue checkmark verification
  41. thing and why I think it is so important, in fact, necessary. So because I'm struggling
  42. with the question of how do you deal with millions of bots and sort of troll farms,
  43. including malicious actions by state actors. There's hundreds of millions of fake accounts
  44. that are created every year on Twitter. Most of them are blocked, but not all of them.
  45. The issue is that creating a fake account is just extremely cheap. It's maybe a tenth
  46. of a penny or some very small amount of money. By sort of charging $8 a month, it raises
  47. the cost of a bot or troll by a factor of somewhere between 1,000 and 10,000. But there's a detail here
  48. which I think is appreciated by very few people that's also very important, which is it's
  49. not just the money, because you could say, well, wouldn't a state actor have $8 million
  50. a month to create a million fake accounts? Well, yes, they've got the budget. But here's the
  51. problem. They don't have a million credit cards and they don't have a million phones.
  52. That's the actual kicker. There's no way to overcome that. And we will be vigorously pursuing
  53. any impersonation, any deception. Another way to think of it: the high-level principle
  54. is, is someone engaged in deception? If someone is engaged in deception, then we will suspend
  55. that account at least temporarily. Thinking of this as sort of an information problem,
  56. truth is signal and falsehood is noise. And we want to improve the signal to noise ratio
  57. as much as possible. So now there will be some bumps along the road here. But I think
  58. in the long run, this will work out extremely well.
  59. Hey, Elon, can I ask you about some bumps? Like specifically, you know, representing
  60. our advertisers and our partners, like we talked about this idea of an official label
  61. for accounts. Yes. And then I think there was a tweet today that said you killed it.
  62. Is there what's the update on that? Because I think this is definitely a concern from
  63. our partners, that, you know, there needs to be a way for them to identify their identity
  64. aside from just anyone that can pay the eight bucks. And and this is, you know, critical
  65. as they think about the future of their representation on the platform.
  66. Sure. So the problem with the official label, apart from it being an aesthetic nightmare
  67. when looking at the Twitter feed, is that it was simply another way of creating a
  68. two-class system, and it wasn't addressing the core problem: there are too many
  69. entities that would be considered official or have the sort of legacy blue
  70. checkmark. But I go back to what I said earlier, which is that we're
  71. going to be extremely vigorous about eliminating deception. So if someone tries to impersonate
  72. a brand, that account will be suspended and we'll keep the eight dollars. And they can
  73. keep doing that, and we'll just keep the eight dollars again. We'll keep the eight
  74. dollars again. Great. Do it all day long. They will stop.
  75. So the key point here is: is someone engaged in trickery? If an account is engaged in trickery,
  76. we will suspend it and they will try. Of course, they will try. But it starts to get expensive
  77. and they start to need a lot of credit cards and a lot of phones. And eventually they will
  78. stop trying. Yoel, would you like to sort of add to what we're doing here?
  79. Sure. I think the key bit is what Elon just said. We know that bad actors of all sorts
  80. of types are going to keep targeting Twitter, whether it's to try to run cryptocurrency
  81. scams or to try to spread misleading content about an election. These are the threats Twitter
  82. has had to deal with for years. But our goal is to try to change the cost-benefit
  83. calculus for some of those bad actors. And there isn't one universal solution that's
  84. going to instantly solve the problem. That's not what the changes to verification will
  85. do. But they start to add more and more costs to adversaries. They start to give us more
  86. and more information. And eventually they start to turn the tide of what the security
  87. landscape on Twitter looks like. Yeah, exactly. Okay. Oh, go ahead. No, I was just going to
  88. say, just sort of stay tuned. And we're going to react dynamically to attacks
  89. on the system. There will obviously be massive attacks. There will be attempts at impersonation
  90. and deception of various kinds. Or just, frankly, noise, which may be simply annoying. We want
  91. Twitter to be not just truthful but also interesting and entertaining. And we will stop anything
  92. that is not truthful, interesting or entertaining or at least relegate it to where you don't
  93. really see it much. And over time, and maybe not that long a time, when you look at mentions
  94. and replies and whatnot, the default will be to look at verified. You can still look
  95. at unverified just as in your Gmail or whatever. You can still look at the sort of probable
  96. spam folder. But that is you'll have your inbox of highly likely to be relevant. And
  97. then you can still look at all the others. But it will default to the highly relevant
  98. category which will be verified. Okay, this is good because you're starting to go down
  99. this path. And this is certainly the biggest topic that's top of mind for our partners.
  100. And it's this idea of content moderation. And I think everybody believes, all of our
  101. partners believe that Twitter should be a force for good. It should be a town square.
  102. All voices should be welcome. The concern is, what does that mean for content moderation
  103. for providing a safe environment? So I guess I'm going to pair these together. Like brand
  104. safety, it's critical to this industry. And it's been a core, core tenet and priority
  105. for Twitter, as displayed by our partnerships with GARM and several other bodies. How are
  106. you thinking about content moderation and brand safety?
  107. Well, thus far our moderation policies have not changed nor has the enforcement of those
  108. policies changed. It stands to reason that if somebody is advertising that they do not
  109. want super negative information right next to their ad, or content that may be inappropriate,
  110. or, if it's a sort of family brand, having not-safe-for-work content right next to it
  111. makes no sense. So we are going to work hard to make sure that there's not bad stuff right
  112. next to an ad, which really doesn't serve anyone any good. We're also working hard to
  113. improve the relevance of the ads. So an ad in the limit is information if it is highly
  114. relevant but if it is irrelevant it is noise. And if the ad is noise it does not serve the
  115. advertiser or the user. So I think brands should rest assured that Twitter is a good
  116. place to advertise. And if we see things that are creating a problem in that regard we will
  117. take action to address it.
  118. Okay. That's great. What about hate speech specifically? When you talk about bad things
  119. next to ads like...
  120. Yeah I don't think having hate speech next to an ad is great. Obviously.
  121. And I think this is what the concern is exactly of a lot of our partners.
  122. Yes. You know, not to hop too much on this sort of $8 verified thing, but the propensity
  123. of someone to engage in hate speech if they have paid $8 and are risking the suspension
  124. of their account is going to be far, far less. I mean think of it more like how much hate
  125. speech do you encounter if you go to a party or just, you know, you're at an event with
  126. people...
  127. Depends who's hosting the party. Just kidding.
  128. Well, I mean in most parties let's say. If you meet people in person how much hate speech
  129. do you actually encounter? It's quite rare. And so it's sort of the... But if someone
  130. can create, you know, a thousand, ten thousand, or a hundred thousand troll accounts that
  131. are anonymous and where there's no cost to engaging in harassment or hateful behavior,
  132. then you'll get a very small number of people that seem very loud. In general, I have a
  133. lot of faith in humanity. The vast majority of people I think are good. Not bad. They're
  134. good. But there's a small number of people who are not good. And if those small number
  135. of people who want to engage in terrible behavior are allowed to amplify their voice tremendously
  136. with fake accounts, then they will do so.
  137. So this is why I think that the only way for any social media company to solve this is
  138. with a mild paywall. Paywall for prominence. And then people will just default to looking
  139. at comments and mentions from those that are verified. And you really won't see much of
  140. the rest. So I think it's the only solution. I cannot think of any other path to having
  141. a good system.
  142. Thank you. I'm also going to ask Yoel. Yoel, you've sent a bunch of great tweets out recently.
  143. Can you just speak to where we stand with policies, but also the facts around what we're
  144. seeing on the platform today in terms of toxicity?
  145. Absolutely. So first, to again echo Elon, our policies have not changed and our enforcement
  146. continues to be focused on being as proactive as we can be to mitigate harm to the people
  147. who are using Twitter. There have definitely been a couple of areas where we have seen
  148. people test the limits of what the new Twitter is, even though we haven't really changed
  149. anything at all. One of the most notable being a spike in hateful conduct on our platform.
  150. And I'm going to be sharing another update on this in just a little while. But we've
  151. been focused on protecting the folks who are using our platform, on shutting down hateful
  152. conduct wherever it emerges on our platform. And what we've seen is not only that we've
  153. put a stop to the spike in hateful conduct, but that the level of hateful activity on
  154. the service is now about 95% lower than it was before the acquisition. The changes that
  155. we've made and the proactive enforcements that we've carried out are making Twitter
  156. safer relative to where it was before. And so my ask of everyone would be, judge us by
  157. our results. And the results, the proof that we're going to be sharing, the data that I'm
  158. going to continue to provide, shows us that we're going to keep investing in making Twitter
  159. safer for everyone every day and in delivering on that vision of creating a welcoming platform
  160. that Elon talked about.
  161. Thanks. So I want to bring up David Cohen, who is the CEO of the IAB, the Interactive
  162. Advertising Bureau, which is, as we all know, one of the governing bodies of our ad industry. He's
  163. a very trusted name and partner, given this is such an important topic. David, what can
  164. we answer for you?
  165. Appreciate that. Thank you. I would just want to start by saying that we are rooting for
  166. you and for Twitter. So at the risk of that not being obvious, this is not a softball
  167. question and it's something that we're hearing from many of our members. So I thought it
  168. would be a good time to ask. We all have a brand. There is the Elon brand and how it
  169. shows up on Twitter. And there is Twitter as a platform and business that the world
  170. and the marketing community have come to know and love, warts and all. Those two things
  171. can sometimes blur. So the question is, how should we think about the coexistence of those
  172. two distinct but obviously related perspectives?
  173. Right. Well, I think if I say that Twitter is doing something, then I mean Twitter. And
  174. if I say I, then I mean me. And if there's any confusion about the two, then I would
  175. just ask me on Twitter basically. But obviously Twitter cannot simply be some extension of
  176. me because then anyone who doesn't agree with me will be put off. So Twitter must be as
  177. a platform as neutral as possible. That doesn't mean I'm completely neutral. That would be
  178. untruthful. I am not neutral. No person is.
  179. Right.
  180. But it is important to have broad acceptance of the platform, that the platform be neutral
  181. and sort of as inclusive as possible to the widest demographic possible. That is the
  182. only path to success.
  183. Got it. Could I do a quick follow up? Is that possible?
  184. Yeah.
  185. Okay. This is a totally different question. In my experience and from what we hear from
  186. our members, 700 plus strong, brands are interested in basically five things. Scale, relevance,
  187. brand safety and suitability, ability to measure (when I put stimulus in the market,
  188. what does it do for my bottom line?), and then an impactful creative canvas. Of those five
  189. things, where do you think Twitter is today? And where are you going to spend the majority
  190. of your time in the immediate term?
  191. Well, I think we're probably not doing great on any of them. Doing okay on some. We're
  192. terrible at relevance, I think. And one of the ways we're going to address that is by
  193. integrating ads into recommended tweets. So the relevance of recommended tweets is much
  194. better than the relevance of the ads, because they're two different engines. We need to
  195. have them be the same software stack. So I've reorganized Twitter software from having three
  196. different software groups to having one. And that's occurred just in the past week. So
  197. we really need to improve the relevance of the ads. As I mentioned earlier, in the limit,
  198. if an ad is highly relevant and timely, then it's really information. It's something you
  199. might actually want to buy when you want to buy it. That's great. But if it's something
  200. you'd never want to buy, then it's annoying and it's spam. And that doesn't serve the
  201. advertiser or the user. So that's incredibly important to improve that. So that's a major
  202. priority. And I think you'll see that get way better in the coming months.
  203. Appreciate it. I'll pass it back to you, Robin.
  204. At the end of the day, at a high level, Twitter needs to be useful to advertisers in both
  205. the short-term, in driving demand, and in the long-term, hence the brand safety. But at
  206. the end of the day, short-term and long-term demand is kind of what it comes down to. So
  207. drive sales in the short-term and protect the demand in the long-term.
  208. Got it. Thank you.
  209. Well, thanks, David. And feel free to jump in if you have more questions. I would just
  210. say, like, Elon, I like hearing you say short and long-term. I think we've heard a lot about
  211. subscriptions. We know that's important to your strategy. But, like, can you say anything
  212. more about long-term and the role advertising plays within Twitter, both in the subscriptions
  213. piece as well as the non-subscriptions piece?
  214. Well, it's just this. I just mean that when I hear brand safety, what I think I'm hearing
  215. is that we need to make sure that the brand overall is protected reputationally in the
  216. long-term. So there may be something that drives short-term sales, but it's next to
  217. hateful content. And that may drive short-term sales, but it's ultimately detrimental in
  218. the long-term. So if I were to put myself in the CEO or CMO position of any advertiser,
  219. I'd say, well, I want to make sure we do drive sales in the short-term, but we're also not
  220. doing anything that damages our reputation in the long-term. So we also need to address
  221. both short and long-term factors.
  222. Great. Okay. One thing that we glossed over and didn't dive in enough on when we talked
  223. about content moderation was this idea of your content moderation council. I know you
  224. tweeted about that last week, I think. Can you say anything else about that? Where are
  225. we at with it? What is it going to look like? How will it work? I know that's top of mind
  226. for everyone.
  227. Sure. Well, I think we want to have an advisory council that represents a diverse set of viewpoints,
  228. that is representative of a wide range of viewpoints in the US and internationally.
  229. In the short term, I only got the keys to the building a week ago Friday. I'm moving
  230. pretty fast here, but it takes a moment to completely rewrite the software stack. But I can say
  231. that the rate of evolution of Twitter will be an immense step change compared to what
  232. it has been in the past. If nothing else, I am a technologist, and I can make technology
  233. go fast. That's what you'll see happen at Twitter.
  234. Yep. That's true. Actually, you tweeted earlier today something about there's going to be
  235. dumb things in the coming months. I assume that is... Well, because you're moving so fast,
  236. right?
  237. Obviously, the intent is not to do dumb things. We're not aspirationally dumb. We're aspirationally
  238. not dumb. But despite being aspirationally not dumb, we will still do dumb things. There's
  239. some element here of nothing ventured, nothing gained. If we do not try bold moves, how will
  240. we make great improvements? We have to be adventuresome here, and then I think we can
  241. make some really big leaps and have radical improvements. But these come with some risk.
  242. The key is to be extremely agile, and so if we do make a dumb move, or when we make a
  243. dumb move, because we're not going to always knock the ball out of the park, but when we
  244. make a dumb move, we correct it quickly. That's what really matters.
  245. Yeah. Well, and for the record, we're seeing record-breaking user growth on the platform
  246. since you took the keys. That's excellent. There's been a lot of conversation around
  247. Vine, and can you just talk about some of the stuff you're really excited about from
  248. a product perspective? Aside from, we've already talked about subscriptions, but what else?
  249. You've said to me to the organization before about video and all those kinds of things,
  250. so talk a little bit about that.
  251. Yeah. Video is definitely an area where Twitter has been historically weak, and it is an area
  252. that we're going to invest in tremendously. I did ask people what their interest was in Vine,
  253. not that we would want to resurrect Vine in its original state, but just, would they want
  254. a Vine-like thing, but reimagined for the future? People are very excited about that.
  255. One of the things, if somebody does become paid blue verified, is that they will be able
  256. to initially upload 10 minutes of high-def video, which will be expanding to
  257. 42 minutes soon, and then several hours as we fix a bunch of stuff on the backend servers.
  258. There are a bunch of fundamental technology architecture changes that are needed at Twitter
  259. in order to support a significant amount of video, so we've got to make those core software upgrades
  260. and server upgrades in order to support a large amount of video, but we are absolutely
  261. going to do that. It's kind of a no-brainer. We also need to enable monetization of content
  262. for creators, and if we provide creators with the ability to post what they create on our
  263. platform and to monetize it at a rate that is at least competitive with the alternatives,
  264. then of course, creators will natively post their content on Twitter. Why not? Those are
  265. kind of no-brainer moves.
  266. Also keyed off of paid verification: now we know that this is someone who has been authenticated
  267. by the sort of conventional payment system. Now we can say, okay, you've got a balance
  268. on your account. Do you want to send money to someone else within Twitter? And maybe
  269. we pre-populate their account and say, okay, we're going to give you 10 bucks, and
  270. you can send it anywhere within Twitter. Then if you want to get it out of the system, then,
  271. okay, well, now you need to send it to a bank account, so now attach an authenticated bank
  272. account to your Twitter account.
  273. Then the next step would be let's offer an extremely compelling money market account
  274. so you get extremely high yield on your balance. Then why not move cash into Twitter? Great.
  275. That sounds like a good idea. And then add debit cards, checks, and whatnot, and I think
  276. we will just basically make the system as useful as possible. And the more useful
  277. and entertaining it is, the more people will use it.
  278. That's right. Amen to that.
  279. Hey, Robin, I've got a follow-up if that's okay.
  280. Yes.
  281. So I'm getting a tsunami of tweets and texts, as you would imagine, so lots of questions
  282. out in the world. One of them, I guess the headline is, there's a challenge with some
  283. of your tweets, Elon, in that they leave a lot to interpretation. You had something
  284. around truth versus high-quality journalism and news. Can you talk a little bit about
  285. kind of how you see those two things as different or the same?
  286. Well, I do think that we should be empowering citizen journalism. If you say, how is the
  287. Western narrative defined? Right now, I think it is overly defined by a small number of
  288. major publications, and that, I think, is not as good as enabling the people to define
  289. the narrative as well. In other words, elevating citizen journalism. I mean, I think we've
  290. all seen articles in major newspapers where we know a lot about what actually happened,
  291. and we know that what actually happened is not what is represented in that article. Now,
  292. why would you think it's different for anything else?
  293. Got it. So this is not an either/or. In your mind, this is an addition. High-quality
  294. journalism has a role in the world and on Twitter, clearly.
  295. Absolutely.
  296. Okay, got it.
  297. No question. I'm not saying that we should somehow downplay the major publications or
  298. prominent journalists. I'm simply saying we should elevate the people and give voice to
  299. the people. Vox populi, Vox Dei.
  300. Understood. Thanks.
  301. What about fact checking and, you know, fighting misinformation?
  302. Yeah. So I'm super excited about the Community Notes feature, formerly known as Birdwatch.
  303. Birdwatch sounded a bit too much like, we're watching you. I'm like, no, let's just be
  304. chill, Community Notes. And actually, that was the original name of the product. It's
  305. awesome. And we're going to really go pedal to the metal on Community Notes. And the way
  306. it works, I think, is actually very exciting. In fact, Keith's not on, or maybe he is, but
  307. I highly recommend looking at the Community Notes feature. It's epic. So this is really
  308. going to help in improving the accuracy of what's said on the system.
  309. It's analogous to the way sort of PageRank works in Google, where the prominence of a
  310. web page is sort of proportionate to how much weight other prominent web pages give that
  311. web page. But it's easy. If you just search Birdwatch or Community Notes on Twitter, you'll
  312. see how it works. And I think it's a game changer, in my view.
  313. OK, great. I'm just getting a ton of questions as well, Elon. This one is specifically about
  314. the auto industry, which you happen to be a member of as well.
  315. I know a little bit about cars.
  316. Yes, I think you do. What can you share with this community that's concerned about data
  317. protection or how your alternative interests related to Tesla would kind of bleed over
  318. into this current role?
  319. Well I think it's, you know, I totally would encourage other carmakers to continue advertising
  320. on Twitter, and I would also encourage their Twitter handles to be more active, and for
  321. their CEOs and CMOs to be more active on the system. And in general, I would say for brands,
  322. I think brands should tweet more, executives should tweet more. I think that sometimes
  323. I would encourage people just to be more adventurous. That's certainly what I've done on Twitter
  324. with Tesla and myself and SpaceX, and it's worked out quite well. But I'm definitely
  325. not going to do anything which is somehow advantageous to Tesla, because that's going
  326. to totally turn off any automotive advertiser. So it has to be a level playing field, or we
  327. won't get automotive advertisers. So, yeah. I don't know what else to say, except
  328. that we're just going to try to be as fair as possible.
  329. Awesome. Oh, go ahead, David. Were you going to say something?
  330. Yeah, I got another one. I'm pretty sure I'm going to ask all these questions. It'll be
  331. the last time I'm invited onto a Spaces, but here goes nothing. The checkmark used to stand
  332. for something. Now, anyone that pays $8 a month can get the checkmark. What's the process
  333. by which accounts are verified in this new world? Well, someone has to have a phone and
  334. a credit card and $8 a month. So that's the bar. However, we will actively suspend accounts
  335. engaged in deception or trickery of any kind. So it is a leveling of the playing field here.
  336. It will be less special, obviously, to have a checkmark. But I think this is a good thing.
  337. But like I said, if there's impersonation, trickery, deception, we will actively be suspending
  338. accounts. So I think it's going to be a good world. I mean, don't we believe in one person,
  339. one vote? I think we do. So I actually just don't like the lords and peasants situation
  340. where some people have blue checkmarks and some don't. At least in the United States,
  341. we fought a war to get rid of that stuff. So anyway, this is just philosophically how
  342. I feel. And maybe this is a dumb decision, but we'll see.
  343. Got it. David, were you going to follow up and ask if brands have to pay? Because that's
  344. a different... Well, I mean, that was one or... I mean, obviously, this is a double-edged
  345. sword. It's not clearly black or white. There's clearly another side to the equation. And
  346. Elon, as you said, you're going to try it. And if it doesn't work, then you'll quickly
  347. pivot. I think that's a smart approach. But yeah, do brands have to pay? Do marketers
  348. have to pay? Well, I mean, we are trying to have an equal-treatment
  349. situation. So... So yes. Yes. I mean, if somebody's really helping and not paying, I'll pay it
  350. for them. Okay. Good. We heard that here. We heard that
  351. here. All right. We'll send you the bill. Speaking of being equal, the other question
  352. that we keep getting is, do the same rules apply to you, Elon, that apply to everyone
  353. else on the platform? Yeah, absolutely. But I think we also are going
  354. to try to be more forgiving, provided someone is not actively engaged in fraud. If somebody
  355. missteps, then I think we should maybe give them a temporary suspension, but then allow
  356. them back on the platform. But if they keep doing it deliberately, then of course they
  357. should be permanently suspended. But I think we just need to... Forgiveness is just a very
  358. important principle. And as long as an account takes corrective action and does not do bad
  359. things repeatedly, then they shouldn't be suspended permanently. But if they do bad
  360. things repeatedly and deliberately, then they should be suspended permanently.
  361. Yoel, do you want to jump in on that, speak to how we do it right now?
  362. Totally. I mean, let's take a step back in the history of trust and safety stuff on platforms.
  363. For many years, the only thing that Twitter could do was delete tweets and ban accounts.
  364. That was our only tool for content moderation. And so we did quite a bit of that. We deleted
  365. a bunch of tweets and we banned a bunch of accounts. But one of the directions that we're
  366. trying to build towards is having more tools in our toolbox to be able to reduce the harmful
  367. impacts of content without always having to go to that step of a ban.
  368. And so in the coming days and weeks, you're going to see us start to introduce some of
  369. these new concepts and frameworks for content moderation. My focus and my team's focus is
  370. how can we enable as much speech as we can while preventing the potential harmful impacts
  371. of that speech. And as Elon said, sometimes the only way to mitigate harm is to ban somebody.
  372. But we think there's a lot of other stuff that we can do from warning messages to interstitials
  373. to reducing the reach of content that we haven't fully explored in the past. And you're going
  374. to see us move quickly to build some of these new tools and to integrate that with our policy
  375. approach.
  376. Exactly. That's very well said, Yoel. And I pretty much think we want a diversity of
  377. viewpoints within Twitter. Sort of a Lincolnesque cabinet, if you will. So representing a diversity
  378. of viewpoints. And at the end of the day, the success will be if people like Twitter,
  379. they will use it and they will use it more frequently and will get more people joining.
  380. And if advertisers and brands, if companies like Twitter, they will use it and they will
  381. buy advertising. And if they don't, they won't. And so the proof is in the pudding.
  382. And I think it will be a good thing. And we're really going to agonize a lot about what is
  383. right, what should be done, what is a force for good in the long term. And sometimes we'll
  384. be wrong about that and we'll, like I said, take corrective action. But really, I think
  385. we'll see. If we're doing a good job, we'll see user growth be high. We'll see advertising
  386. interests be strong if we do a good job. And we'll see the opposite if we don't.
  387. You talked a little bit about this earlier, I think, just in terms of not wanting certain
  388. types of conduct, like being the town square and allowing voices of all shapes and sizes.
  389. How are you thinking about choice on the platform? And certain people are comfortable with certain
  390. content, others are less comfortable. Can you talk a little bit about that vision and
  391. how that's going to come to life and when you think we can see that? Because I think
  392. that's really an important point for folks to understand.
  393. Sure. There's a big difference between freedom of speech and freedom of reach. So at least
  394. in the United States, we're big believers in freedom of speech. So somebody can say
  395. all sorts of things that we don't agree with and find unsavory. Like if you just were to
  396. go to Times Square right now, there's going to be somebody saying something crazy. But
  397. we don't throw them in prison for that. But we also don't put them on a gigantic
  398. billboard in Times Square. So we have to be, I think, tolerant of views we don't agree
  399. with, but those views don't need to be amplified. There's a giant difference between freedom
  400. of speech and freedom of reach. And I mean, these are difficult moral concepts to grapple
  401. with. Like I said, we'll do our best to do the right thing here, what we think is the right thing,
  402. and adjust course if that does not seem to be working.
  403. Okay. I'm just getting more questions. So I know we're going in different directions
  404. here. But some of our retail partners, we're excited to hear you talk about commerce and
  405. everything you just outlined. Can you say how this could come to life and how it could
  406. help merchants of all sizes accelerate their business? Because that's kind of what they're
  407. hearing.
  408. Yeah. I mean, we've got a lot to do on the software side. I can't emphasize that enough.
  409. So we've got to write a lot of code here. And we've got to change a bunch of the existing
  410. code base. But we want advertising to be, like I said, as highly relevant and timely
  411. and really approach in the limit, how do we get the ad to be as close to content as possible?
  412. I mean, if you're shown an opportunity to buy something that you actually want when
  413. you want it, that's great. That's content. It's like, wow, you just served somebody's
  414. need. That's awesome.
  415. On the other hand, then the other side of the spectrum is if you show somebody a product
  416. that they would never want, then it's not helping the company who's advertising. It's
  417. not helping the user. So we're going to be super focused on how do we get it as relevant,
  418. make that ad as relevant and useful as possible. We'll also be quite rigorous or aspire to
  419. be rigorous, about not allowing products that don't work or are actually,
  420. you know, in some cases just... I mean, I bought a few products based on YouTube ads that didn't
  421. work. And I thought, damn it, YouTube should really have not allowed that ad. And on Twitter,
  422. we're going to be like, okay, we need to serve the user, we need to serve the advertiser,
  423. and when both are served, we have a good situation.
  424. So and then from a commerce standpoint, if you're able to buy things, you know, effortlessly
  425. on Twitter with one click, that's great. We don't want to make buying things inconvenient
  426. or require, you know, going through many steps. The easier it is to obtain the product or
  427. service that you want, the better it is for the user.
  428. Yeah. No one is going to argue with having a more performant product and solution and
  429. more relevant ads from our clients and partners. David, were you going to speak?
  430. Yes, I got another one. So clearly, Elon, Twitter and you are moving quickly and decisively.
  431. You mentioned a content moderation council that is going to be put together. I guess
  432. the question is, how quickly is that going to materialize? And who is it going to be comprised
  433. of? What kinds of folks do you think?
  434. Man, that's a hard one to answer. I think it'll probably take us a few months to put
  435. that together. I mean, certainly a lot of people that want to be on it. But this will
  436. be an advisory council, not a command council. It's basically so that, you know, the leaders
  437. of Twitter can hear what a lot of people have to say and just make sure that we're not sort
  438. of being numb to the pain of what people are feeling. You know, basically, are we listening
  439. carefully? But just going back to what I mentioned earlier with respect to the community notes
  440. feature in terms of accuracy and truthfulness, that's going to be very powerful. And I think
  441. it will obviate the need for a lot of the content stuff that currently is in place,
  442. I think. And look, I'm open to ideas. If you have thoughts here, that would be good to
  443. know. What do you think we should do?
  444. Yeah, we absolutely do. And we get feedback all the time, so we can absolutely do that
  445. offline.
  446. Okay, sounds good. I can just say that, like, the aspiration is very much to do the right
  447. thing. And I think the best evidence for us doing the right thing will be that more people
  448. are signing up, they're spending more time in the system, and that it's working for advertisers
  449. as well.
  450. Just a quick add-on to the content council. Will they also weigh in on account suspension?
  451. And how do you think about banning folks?
  452. I think weigh in is the correct word. You know, at the end of the day, I am the chief
  453. twit here, so the responsibility is mine. To say anything
  454. else would, I think, be disingenuous. If things go wrong, it's my fault, because the buck
  455. stops with me. But I would like to hear what people have to say, and then we'll make our
  456. decisions accordingly. And obviously, if I make decisions that people don't like, then
  457. advertisers will leave the system, and users will leave the system, and we will fail.
  458. I appreciate you saying that, actually, because I would like to know what you would have to
  459. say to the brands that are paused or holding on running right now, during this transition.
  460. Well, I understand if people want to give it a minute and kind of see how things are
  461. evolving. But really, the best way to see how things are evolving is just to use Twitter
  462. and see, well, has your experience changed? Is it better? Is it worse? As Yoel was
  463. saying, actually, we've been more rigorous about clamping down on bad content and bots
  464. and trolls, not less. So my observation of Twitter over the past few weeks is that the
  465. content is actually improving, not getting worse. I mean, actually, if there's anyone
  466. on the call who would like to speak up, if they think this is actually not the case,
  467. please say so. OK. Well, what I will say is that you have repeatedly said you want feedback
  468. and suggestions and thoughts. So this community, as you have seen, is not afraid to speak up
  469. and has plenty of suggestions and ideas and wants to be engaged with. So that's definitely
  470. going to be ongoing, as well as a firm next step.
  471. Yeah. I mean, I can't emphasize enough to advertisers, brands, and the best way to understand
  472. what's going on with Twitter is use Twitter. And if there's something that you don't like,
  473. reply to one of my tweets, and I'll do my best to respond. But I think it's actually
  474. getting better, not worse. And like I said, the proof is in the pudding: just use Twitter and see how it feels
  475. to you. I think it's actually going in a good direction. Yeah.
  476. Awesome. OK. I think we're getting close to the end here. So I'm going to give you the
  477. floor to say whatever you want, and then I can wrap it up.
  478. OK. Well, I'm probably being a bit repetitive here. But like I said, the larger goal is
  479. to do things that serve the greater interests of civilization and have Twitter ultimately
  480. be a force that is moving civilization in a positive direction where people think it's
  481. a good thing for the world. And the evidence for that will be new users signing up
  482. and more people using Twitter for longer. And then I think also, if you were to use
  483. Twitter for an hour a day, that when you look back, you don't regret the time. That's actually
  484. also kind of important. You don't want something that's hyper-addictive, but then you look
  485. back and you're like, man, I kind of regret how I spent that hour. You want to enjoy using
  486. Twitter and find it entertaining, informative, funny. And then when you look back at the
  487. time you spent on Twitter, not regret it. And then I think we will have succeeded.
  488. Thank you to the entire team at Twitter, to Yoel, to Elon, to everyone on this call, to
  489. our partners for asking the tough questions, for pushing us, and for being with us through
  490. this transition. As I said in the beginning, we're here. Our team is here and we are committed
  491. to serving you as we have. We are committed to answering your questions and to helping
  492. you feel comfortable through this transition. And we want you to continue to push us to
  493. ask questions and to trust us. And we'll just continue to communicate the best we can to
  494. further build trust in this community. So with that, thank you everyone for joining
  495. and see you out there.
  496. All right.
  497. Thank you.
  498. Bye-bye.
  499. Bye-bye.