Nov. 20, 2023

Reacts: How Firing Sam Altman Might Lead To The End Of The World - Seriously

On the 17th of November, OpenAI announced on their blog that Sam Altman had been fired.

This sent shockwaves throughout the startup ecosystem worldwide. What happened next was even more fireworks. Greg Brockman, the other co-founder, was removed as president of the board, then Mira Murati was appointed interim CEO. Greg quit, as did four other senior people. Investors, including Microsoft, were kept in the dark, and Mira Murati has since tried to bring Sam and Greg back to OpenAI.

So what the hell is going on?

Wading through the speculative craziness of the last week are Chris and Yaniv, joined by special guest and friend of the pod Emil Michael, former Chief Business Officer at Uber.

Was this about AI safety, driven by people with no skin in the game? An internal battle?

What is to become of OpenAI?

You DO NOT want to miss this episode!

Episode Links:

The Pact 

  • Honour The Startup Podcast Pact! If you have listened to TSP and gotten value from it, please:

  • Follow, rate, and review us in your listening app

  • Subscribe to the TSP Mailing List to gain access to exclusive newsletter-only content and early-access to information on upcoming episodes: https://thestartuppodcast.beehiiv.com/subscribe     

  • Follow us on YouTube

  • Give us a public shout-out on LinkedIn or anywhere you have a social media following 

Key links

Learn more about Chris and Yaniv

Transcript

Episode #90: Reacts: How Firing Sam Altman Might Lead To The End Of The World - Seriously.

Emil Michael: I would suggest that all these companies rethink: what do you have to do to succeed, given the need for money and given what you've hired people to do? Be more honest about that, to yourself, and align your board members and your investors accordingly, because obviously misalignment inside of OpenAI caused a fissure which, even if Sam came back, is going to set that company back by many months, if not a year.

Chris: Hey, I'm Chris.

Yaniv: And I'm Yaniv.

Emil Michael: Hey, I'm Emil.

Chris: And welcome to this special Reacts bonus episode of the Startup Podcast, where we want to react to one of maybe tech's biggest events of the year, I think: Sam Altman being fired from OpenAI. To help us unpack this, we've invited guest host Emil Michael to join us. Emil, of course, was the Chief Business Officer at Uber and had a front-row seat to his own dramatic CEO transitions over there, while we were both there together.

So, welcome to the show, Emil.

Emil Michael: Good to be here again on this Sunday afternoon. I'm looking at the clock, just after noon Pacific, because of all the deadlines that have been put on this drama, which we'll talk about in a bit.

Chris: That's right. It's actually worth talking about the timing, because we're still in the middle of it and it's a very dynamic situation. So by the time we publish this, it may actually be a little bit out of date, but we'll try to turn it around as fast as we can. So let's catch people up quickly on what the hell we're talking about, or at least what we know so far.

And then we can dive into the details. So on November 17th, it was announced on OpenAI's blog that Sam Altman was fired, and that Greg Brockman, the other co-founder, was removed as president of the board but kept his job, or was at least asked to keep it. Mira Murati was appointed as interim CEO. She was the CTO, I believe, and there's a CEO search underway.

Greg then quit. Four other senior people quit. Investors, including Microsoft, were apparently not informed. And now, as of our latest information, rumors are that the board has changed its mind, maybe, and asked Sam Altman to return. But as of the latest thing we've heard, there's some equivocation and hesitation going on, and they've missed some deadlines to sort out the governance changes that were his conditions for coming back.

All right, guys, we're now caught up. What the hell is going on?

Emil Michael: Well, this is pretty dramatic. Sam Altman is royalty in Silicon Valley. He is one of the most well-known, well-respected technology leaders out there. He was president of YC, and then he became the head of OpenAI, which, by the way, was about to do a round of financing at an $86 billion valuation.

And just two years ago, this thing had no value on it. So just to set the stage of how dramatic this was: no warning as far as we knew, no indications. And this board fired him and told him in a Google Meet, which is not how you handle these things. So Silicon Valley is on fire.

Brian Chesky is tweeting about how ridiculous it is. Ron Conway and all the other royalty of Silicon Valley are going crazy about this situation.

Chris: It's crazy. And you know, he's really well liked.

He's gone on a global charm offensive, acting more like a statesman than a CEO. That's actually a conversation I had with Travis, about how CEOs are suddenly expected to be statesmen.

And so he was kind of on the front foot about that: let me go out there and make everyone comfortable with who I am and what we're doing. You know, he was just in London talking about global AI safety with all the leaders of the world. He's not just some guy toiling in the boiler room.

He's the face of the revelation and revolution of AI. I mean, this is crazy.

Yaniv: Yeah, it's funny, we did an episode last week about all the announcements he made at the OpenAI Dev Day, and there were a lot of comparisons to Apple and Steve Jobs. He seemed to be taking a lot of his playbook from Apple in how he behaved.

I tweeted last night that, you know, the great man theory of history seems to have made a comeback in the last 24 hours. He was the CEO of his generation. And one way in which we did not expect him to follow in Steve Jobs' footsteps is being fired from his own company. Now, the thing is that if you look at Apple from the perspective of 2023, it seems crazy that Steve Jobs was fired.

But at the time that he was let go, Apple was actually struggling in a lot of ways, and he arguably was not running it well as a business. That is not true for Sam Altman, right? This would be like getting rid of Steve Jobs after the iPod, after the iPhone. So this is way, way crazier than what happened with Apple.

Emil Michael: And to make sense of it, like, tell me what you guys think, right? I've been following this religiously over the last day and a half. It seems like this board was made up of people who had no skin in the game. None of them were investors, not even Microsoft, which owns 49 percent of the for-profit part of OpenAI.

And that it was about AI safety, that Sam was going too fast and being too commercial, without regard for what it could do and how it could harm humanity. The doomers on the board basically said: you're out.

Chris: We have to be clear when we're talking about this: this is not a traditional Silicon Valley company, right? The board is part of a non-profit, and it was intentionally designed to be filled with people who are more academic, more altruistically minded, or more safety minded.

They were intentionally left without skin in the game so that their incentives would be aligned. They're not necessarily commercial or operationally minded people. There's been criticism that the board is too small and therefore probably too thrashy.

And their goals for the overall for-profit entity are not maximize-profit, move-fast-and-break-things, blitzscaling Silicon Valley style. Their intention is very different. Over the weekend there was a lot of speculation about why this might've happened, everything from personal problems to data leaks to his interests in other companies to having invented AGI and not telling anyone. But it seems that people are converging on this idea that it was the tribe of: we're moving too fast, this is too commercial and too Silicon Valley-like; we want this to be more academic, more safe, and more methodical. Is that your read after the events of the weekend?

Emil Michael: I talked to a lot of people in and around the circle who are interested in this, both financially and otherwise. And yes, it seems to be in part an internal battle with this person, Ilya, who was actually one of the co-founders of OpenAI. And apparently there was something with his unhappiness in the role, and how fast Sam was going, and so on.

And then Ilya had convinced at least two of the other board members, one of whom was the current CEO of Quora, and then there were two others who were most definitely academics, not commercial. And the four of them, you know, made this coup. And the idea was just, like you said: it was moving too fast, too commercially, too many side projects by Sam, which, if you look at them one by one, one could argue were supplemental to OpenAI's mission.

He's trying to find an alternative to NVIDIA, for example, not clearly trying to make money doing that, just trying to grow the marketplace. So this was part personal and part ideological, is the way I've seen it and heard about it so far.

Yaniv: So here's the interesting thing, Emil: I want to take issue with the term coup. And going back to what we were saying, this is about governance and about board structure, and I think there are some lessons here. This is not a coup, because this is the board removing the CEO, which is its prerogative.

That's one of the key roles of the board. If they believe the CEO is not the right person to be running the organization, they actually have a responsibility to remove them. And so that's what they did. This is not like some underhanded coup. It was the board doing the sort of thing that boards do.

Now, whether it was well advised to do so is a completely different question, of course. I think it comes back to what we were saying before, that the constitution of this organization is very weird. It is a not-for-profit that was founded by a number of people, including, a lot of people forget, Elon Musk, who has already been kicked out of all this, basically to progress AI with a view to AI safety. And for folks who are listening, AI safety is basically: people have watched a lot of Terminator 2, and they don't want AI to kind of take over the world, so they want to create a safe AI. And then there is a for-profit subsidiary, which is, you know, what's creating ChatGPT and all of these things. And that's what Sam was running.

And I think basically he was not playing ball with the board's view of the mission. And so I'm not exactly being the devil's advocate here, but I guess I would just point out...

Emil Michael: I totally disagree. It is a coup, and let me tell you why.

So most charters, non-profit or otherwise, have a set of rules. Those rules, at least the few I've read online, and I'm not 100 percent certain I've read all of them, require two days' written notice for a board meeting. My understanding is that that didn't happen. And then there are the softer governance requirements of a board. You're right, they technically have the right to remove the CEO, but they have to do it with full information.

And my understanding is there was no prior warning of: hey Sam, you shouldn't be doing this, let's have a discussion about it, here's the mission, you're wrong. No consultation with other stakeholders, even those that are not represented on the board. It's your responsibility as a board member to be informed and to follow the rules, and then you can execute against them.

This seems to have been rushed, not consistent with the technical rules, and with no consultation with any of the stakeholders, including employees. If all the employees leave, was that a smart decision?

Yaniv: I'm not saying they did a good job, Emil. I'm just pointing out that, you know, it was the board that removed him. It wasn't like some kind of...

Chris: You're saying it's not an external military takeover...

Emil Michael: Well, yeah. So what is a coup, if not that?

Chris: Yeah, I mean, look, we can argue semantics, right? It wasn't some faction effecting change that they were not asked or tasked to do; they were tasked in the governance model to do this thing, and they've done it poorly. I guess Emil and I, through a few degrees of separation, have a sensitivity to, let's call it, activist investors or boards who remove people who are intimately and fundamentally, inextricably linked with the value creation they're trying to take control of, and who do it in a way that seems short-sighted and hostile. When you take one of the princes of Silicon Valley, running one of the most important companies in the world, who led it to product breakthroughs, and go, nah, we don't like what you're doing, we'll remove you now because we've decided that we know better than you, it feels like a coup, even if it might not technically, perfectly align with that definition.

Yaniv: Look, we like to extract lessons for founders running their own startups here. And I think one of the lessons here is about your cap table and your governance structure, and what it means to be a founder and what it means to bring in outsiders as board members and as investors in your company.

To me, this is a story about a lot of things, but one of the most fundamental things it's about is: what is your corporate governance structure? And that might seem boring, but this happened because OpenAI is a weird organization, and we're talking about this as though obviously what it should be is a company that is maximizing progress and maximizing future profit, and that is not how the board viewed it.

And again, I don't want to defend the board, I think they've done a terrible job, but they did have a different responsibility than a typical board, and that is because of how things were created. It's worth noting, talking about skin in the game, that Sam Altman had no equity in OpenAI either, and that's also because of the bizarre structure of OpenAI.

Emil Michael: Yeah, you're totally right about the bizarreness he created; he created this nonprofit structure. I think there were clips of him saying the board can fire me at any time. So...

Chris: Point proven.

Emil Michael: Yeah, so you're right that he created it, and there was purpose in not having any of the board members hold an economic stake in it.

And that does raise a lot of interesting governance issues. I mean, the SEC requires you to have independent directors at a public company, and by definition they have no stake in the success or failure. Is that good or bad for any organization? That is a question. And in our world of small private tech companies, you almost have the opposite: you usually start with venture capitalists on your board who have an economic interest, and then over time, as you get more mature, you add these independents.

So the lessons here... it's mushy.

Chris: Yeah. I think it's important to note that it's not that you should only be paying attention to your governance if you have a weird model like this, because nobody does. You should be paying attention to your governance even if you have a standard model, right? And again, not to belabor the point, but I think, Emil, you have a front-row story about this happening at Uber, where alignments start to misalign and agendas start to shift, and you can lose control of your own company.

Emil Michael: I think what's different about this, and you guys can tell me to shut up, is that with the Uber situation, there was a lot of buildup to it in 2017. There were a lot of things the company was on the hot seat for, and Travis was on the hot seat for, and it culminated in his ouster. Here, there was no buildup, right?

And...

Chris: But isn't that no buildup that we're aware of? You know, I may be misreading the tea leaves here, but if I'm the board that's obsessed with creating safe AGI, and I see ChatGPT kick off in a big way and become this massive consumer product... They've said multiple times they thought it might be this curiosity.

They didn't expect it to blow up this way. And then I'm seeing, you know, API billing, and I'm seeing Microsoft partnerships, and I'm seeing extension stores and GPTs. I think there would have been heated debates in that boardroom for the last year, and it's just been managed quietly, under the radar.

Don't you think?

Emil Michael: It's possible, and I can't definitively say otherwise. But the rumor mill at companies of this size and importance rarely contains months and months of existential debate about the direction of an $86 billion company. But it's possible.

Yaniv: I kind of agree with Chris that what it feels like is perhaps this slow-motion, well, maybe not that slow, game of chicken that's been happening, where Sam has been moving too fast for some time. In hindsight, I wonder if some of his personality-forward approach to leading OpenAI, to being this famous CEO, that statesman, was part of his game of chicken, to say: I am creating something so successful, I dare you to stop me. I know you're not comfortable with this pace. I dare you to stop me. And then they took the dare.

Chris: You know, this is strangely more analogous to what happened to me at Uber than what happened to Travis at Uber. The idea was: I'm going to make this work. There's a lot of resistance, and it seems well meaning, but I'm going to make this work, and once it works, you have no choice but to believe me.

And so there were these cordial, thoughtful debates that Sam thought, you know, I can navigate this, I can thread this needle, no problem. But in the end, there's a shift in the mood or the personalities involved, and it's like: actually, no, you're not going to do that.

Goodbye. And so maybe it wasn't screaming matches, but deliberative arguments about how to proceed, with Sam thinking he could thread the needle, until ultimately the board made a snap decision.

Emil Michael: Yeah, but at least grant me this: it seems obvious, with the number of people who have already resigned, and remember, this is like a 500-person company, and my understanding is that another hundred people gave a deadline of 5 PM yesterday saying they would resign if Sam wasn't coming back, that these wild debates didn't thread through the organization, at least.

Chris: Yep.

Emil Michael: And no one did the math or thought about what might happen to the whole body. Because if they were worried about that, they could have done a deal with Sam, like: Sam, you're not the right guy to do this. We need to stay with our core mission. Why don't we have a transition plan? You're going to be an advisor.

We want to keep the people; you built this org, you don't want it to atrophy. There's sort of a middle ground to be had, right?

Chris: That's why it felt like he had done something that was irreconcilable, right, and it just needed to be dealt with immediately. It didn't feel like a mis... And, you know, we don't know. We should be very clear here, right? There's a lot of speculation.

It's informed speculation. Emil, you're suggesting maybe you've got some access to those circles of conversation. It seems to be a misalignment problem. And it's strange for it to be a misalignment problem at OpenAI, where they're talking about aligning AGI and AI, and they couldn't align their own intent internally.

That feels like an irony in and of itself. But if it in fact turns out to be that way, then it just shows even more that the board was really poorly executing whatever it is they thought they were executing. So that's a shame. Let's talk a little bit about the implications for OpenAI moving forward. Let's talk about both scenarios, where Sam comes back and where Sam doesn't come back, and then about the implications for startups and investors generally. The first thing I'd love to talk about is that, at least in traditional Silicon Valley companies, being founder-led is a superpower of startups.

It's something that we don't talk enough about. It's kind of assumed that a startup is founder-led, and those benefits are taken for granted in a lot of cases. But I think one of the superpowers of startups is not just that they're tech-backed and VC-backed and they move fast and break things.

It's that the founder has the ability to move those things around like a speedboat, even though there may be tens or hundreds or thousands of people, in a way that a hired CEO can't, because a hired CEO needs to build consensus and try to prove out business cases and so on. So does this affect their ability to move fast?

Emil Michael: I mean, undoubtedly. Let's take your first scenario: they fired him, he's gone, and therefore, let's say, a quarter to a half of the company goes. And their new mission, because they're having an interim CEO, is obviously going to revert toward more of this slower approach, by definition.

So that means the organization is not going to be on the trajectory it was. So if you were doing this $86 billion round, you could pull out and say: I'm not doing that round; this is a different company with a different trajectory and a different mission. And maybe they do want to actually go back to their nonprofit roots, and it may become uninvestable again.

And maybe that's what they want, but that, I think, would be the result if Sam doesn't come back.

Chris: Well, it means the go-slower tribe won as well, right? So the people who are left... Not only have they lost a bunch of smart people, the people who remain are the go-slow people, the academics. So you've lost your founder, you've lost his ability to raise capital, or at least shaken the faith of investors, and you've lost your go-fast people, excellent operators and executors.

That's a big loss for them and a big opening for competitors, right?

Yaniv: Yeah, I mean, I don't want to sound too much like a broken record here, but that might be exactly what they want, right? We're still using the language of competitors and moving fast, and, like, this is not their mission. So that's okay. But undoubtedly that's the case.

And to go on a slight tangent, and we can pull it back in: I'm fascinated by the ideology aspect of all of this. This is the second big tech trend that has been ideologically impacted, I guess you would say. I think of crypto, where there was a very libertarian ideology and a lot of folks who were doing it not for the money. Well, a lot of folks were doing it for the money, because of all the scams, but the core crypto folks had this sort of idealistic vision and this ideology, which I personally didn't agree with at all, but they had it. And it never mattered, because crypto, to be frank, has never been particularly successful.

This is what happens when an ideologically driven technology actually breaks out, becomes socially significant, and becomes a big deal. Then, for the factions, the people behind it who are not doing it out of a profit motive, and not even out of a sort of pure, hey, let's move fast and build a big company motive, that chicken starts to come home to roost. And here we have this fight between what at the moment is sometimes called the accelerationists, and then the AI safety people, or decelerationists, or doomers, or whatever you want to call them, whose basic feeling is: we are developing something incredibly powerful.

So powerful that we need to be careful with it. That's what the AI safety folks are saying. And I think Sam, and this is the core of it, probably was just like: hey, let's fucking go. Let's build something amazing as quickly as we can, in the traditional Silicon Valley sense.

Chris: I love that you've broadened this out to ideology, because I actually think this applies to everyday founders as well. You know, you'll often hear this phrase: this is technology in search of a problem. And sometimes what that means is that the founder has this ideology about the world, like: I don't like the way it currently works, or, if only the world worked the way I wanted it to, then everything would be better.

And if everyone just switched to this one-stop platform, it's all going to be okay. And they come at the problem, and they come at the startup, from an ideological worldview rather than from a solving-a-real-tactical-problem point of view. And oftentimes that's why they end up with these startups and these products or technologies that don't actually fix anyone's anything and can't find product-market fit.

So for founders listening: be careful about your own ideological bent, where you just want the world to be a little different because you say so, versus somebody experiencing real pain and real economic intermediation who needs a real disruptor or disintermediation there.

Emil Michael: Yeah. I mean, these accelerationist, decelerationist, optimist, doomer lines seem to have gotten drawn in the last six months in a really interesting way. You have, like, a16z and Marc Andreessen in the optimist camp. And ironically, Elon is worried that this, AGI, is going to end the world; Eric Schmidt warns about it.

So it's not like either side doesn't have heavyweights on it. They do. But it's interesting that it's splitting along these lines, and it seems to be very much part of this debate, and the question is whether it's going to be part of the debate going forward on AGI. Because my understanding is that Anthropic spun off because it didn't like the commercial mission of OpenAI, and they wanted to be scientists.

Now they're selling subscriptions too. So ultimately, are we going to end up at a place where all these companies just pursue what they want to pursue commercially, and the government has to step in and do the safety stuff, because no one's going to do the alignment thing? You know, it's impossible.

Chris: Yeah, it's too exciting and too impactful a technology to resist the urge to put it out in the world and make it useful, and to see people adopt it. Even if you don't care about the money side of it, you care about making an impact.

Emil Michael: They want to optimize it. They want it to do everything it can do.

Chris: Right. There was someone on stage at the Dev Day who said: now my mom knows what I do. And he was really proud of that. You know, sometimes it just comes down to: I want to work on something that people can touch and smell and feel. Some of these people have been researchers down in the boiler room who've never seen the light of day, and they're like: my shit's actually being used now.

And so that's a powerful, powerful aphrodisiac, if you will.

Yaniv: I just drew an analogy to crypto. Let me draw another analogy, because I guess I'm an analogy guy, which is to nuclear fission. The context around this accelerationist-versus-doomer debate is: we have just discovered nuclear fission, and we can create a giant bloody bomb and end civilization, or we can have an infinite source of cheap power, right?

And that's what the argument has been. And if you think about it, it is only government that can act as a brake, because if one organization is like, hey, let's slow down, then another organization is just going to pick up the mantle. Like, to bring this back on point: if Sam doesn't come back, he's going to take all his people and just start a new version of OpenAI, except one where he is the majority equity holder, and there's no weird deceleration stuff, and there is no nonprofit, and he'll just keep going. So the only people that can slow this down are government.

You know, what's interesting here is that I've been vaguely following this whole debate, and I found it a bit boring, because I'm like: yeah, people can debate stuff. But this has now broken out into the real world, and this is the first time when, I guess you'd say, the AI safety view has actually impacted our lives, potentially. But I feel like it's very much the losing side.

Chris: Well, it's not just government, it's governments, right? Because the same thing can happen not just between companies but between countries. And so, obviously, Sam and others have been calling for this from day zero: it has to be like nuclear energy, with an international body running this governance.

Otherwise someone somewhere is going to pop up, and then you still have to worry about non-state actors and states that don't care about global rules. I do think that some of this conversation is a little bit academic; there are going to be breakouts. Unlike nuclear technology, which requires uranium and massive capital outlays, here I guess you just need GPUs.

I think there are going to be pockets of unaligned interests who are going to race to the bottom and weaponize, irrespective of even global coordination around a rules-based system. And I hate to be the old guy who's nervous about the future, but this makes this old guy nervous about the future.

And that's the first time I've ever felt this way about technology.

Emil Michael: Absolutely. So you think China is going to hold back their efforts for alignment, even if there were some multilateral coalition? Hell no. It's not going to happen. So you go back to: well, we just have to outrace everybody then, so that we can hope to control the non-state actors and the non-compliers.

Yeah, it's a really hard problem. The extreme accelerationist view is that we should have no rules, and if you have no rules, then that's really scary. Sam was sort of one step back from that. He was like: no, we need government, we need rules, and we need them accepted by as many people as possible, while trying not to hurt our ability to outrun potential adversaries or non-state actors.

Chris: Well, China was invited to, attended, and ostensibly agreed to some of the outcomes that came from this UK meeting that happened just a couple of weeks ago.

Emil Michael: Yeah.

Chris: The UK AI Safety Summit.

Emil Michael: Do you believe it?

Chris: This is what I was going to say: it's almost worse. It's the old, yeah, we're definitely going to do it.

And it's like: are we really going to follow these things? Are they going to? Now you have to game-theory this thing out. I think, given everything I've just said, you do need rules. I'm just not sure if the rules are going to ultimately matter. And so you need to try, but who knows where that really gets us.

Yaniv: Again, it's a really interesting thing. I think with nuclear fission, it was actually an incredible success of global government and governance. You know, for the whole second half of the 20th century, people were constantly on the alert, not just for the Cold War and the Soviet Union, but for terrorists and rogue actors somehow coming up with a nuclear bomb, and I think we got through by the skin of our teeth, as a species.

That's not going to happen with AI. This is much harder to control. Much, much, much harder. It's software. And so, as much as I actually have a lot of sympathy for the sort of decelerationist, safety view... I mean, I watched Terminator 2 as well.

That gave me nightmares. You know, I was an impressionable teenager. But it's not possible to control this thing. The genie's out of the bottle.

Chris: Yeah, look, I think the problem with the decelerationists is they want OpenAI to decelerate, or they want the US to decelerate, or they want good actors to decelerate.

But this is a nonsense position, because nobody else, especially those with bad intent, is going to decelerate. And so it's almost like the only outcome here is to have the biggest stick, the smartest AI with the most pervasive awareness of what's going on, to create some kind of iron dome around the civilized, aligned world.

Emil Michael: Let me add one more ideological piece. I read the effective altruism tenets, and that's also sort of getting attached to the decelerationists somehow. And the only reason I can tell why they're attached is that there's some notion of AI safetyism in what is otherwise kind of a charitable model.

They're like: okay, we want to do smart, measurable charity, and also we want decelerated AI. It sort of doesn't fit together. I'm not sure why, but now EA is getting sort of attached to this deceleration thing. Do you understand it?

Yaniv: Not really. But again, I sort of read this stuff and I'm like: the whole thing feels very undergraduate. You have all of these debates, and it's all theoretical, and it's like, this is how the world should be, and so on. And yeah, that's right.

Effective altruism started as this movement of: how many lives can we save per dollar spent? And it was very smart. It's like, we should give mosquito nets to people before we dig wells, because you can save more lives with mosquito nets than with wells. Okay, great.

I don't know how it got attached to this stuff, but again, it all feels academic. Well, not even academic, undergraduate. It's a sort of intellectual parlor game that ultimately cannot have a real effect on the world. And what's shocking, actually, to bring it back to Sam Altman, what's shocking is that it's managed to break through and at least temporarily have an effect on the real world.

But even if Sam's gone, it will not have any, even medium-term, effect on the progress of AI.

Chris: Maybe not even on the progress of OpenAI, right? Microsoft has commercial agreements with OpenAI; they've invested a lot of money and have rolled out an exhaustive roadmap. Now, they may switch vendors, I guess, but there are a lot of investors and a lot of stakeholders who need to see them fulfill those commitments. Otherwise, I don't know what happens next.


Chris: But this is all the path where Sam doesn't return. Let's talk about the path where Sam maybe does return. This is also a difficult path, I think, for OpenAI, right? How do Microsoft and developers trust OpenAI, in the face of: this company could change gears at any moment, again?

And it's a really weird governance model. Now, apparently he's negotiating a change of governance as a condition for his return. So let's say that board is gone and that model is gone, but what about those other co-founders, the ones who took over as CEO and kicked him out? How do they trust each other? Are they out as well?

Emil Michael: Murati is her name, the interim CEO. At least, I saw on Twitter today that she hearted one of Sam's sort of cryptic messages, and my understanding is that she only knew the night before, so she might not be part of it. Ilya is, I think, the harder question. I think the rest of the board is wiped for sure, in almost any scenario, whether Sam comes back or not, because they've not proven to be good, even if they had the capability and the right to do what they did.

Chris: Well, they haven't proven to be, let's say, good board operators who do it in a methodical, communicative way, even if maybe they're aligned with their mission, ostensibly so. You could argue maybe they did the right thing but didn't do it well, or you could argue they did the wrong thing and didn't do it well.

Yaniv: Definitely.

Chris: In any scenario, yeah, for any scenario, they're outta there, I think. Yeah, you're right.

Emil Michael: Yes. Yeah, I think an interesting question is: should they just admit it and say, you know what, whoever donated to this charity, let's get them their money back, and let's go back to being a Delaware C Corp and do the board you want to do? Or is it going to retain this character?

And then, does it even make sense to retain that non-profit-y character? And who do you appoint to the board in that case? Does that mean no investors? Does Microsoft, a corporate engine that has a financial interest in it, get a board seat? Something has to change in the model, I think.

Chris: Yeah, I think maybe the model just needs to be a bit more fleshed out, right? You need more board members, you need savvier operators on the board, a mix of commercially savvy, scientifically savvy, ethically savvy people on the board, and you need to just compose it a little more cleverly. I think maybe what happened is that just everything blew up way faster than they expected and they didn't adjust the board appropriately.

And so I think there is a model there where they could try to retain, if not the exact structure, at least the essence of the structure. Also, I agree with you, Emil, and I think you were alluding to this: I think Ilya is maybe the only one of the co-founders who has to go. However, if you listen to Elon Musk, for example, Ilya was the key hire and the technical wizard behind the curtain.

And so that also has some serious implications, although I'm sure they've hired plenty of smart people since then.

Emil Michael: Yeah. I mean, do you keep Brutus around, Mr. Julius?


Chris: No...

Emil Michael: Do you keep them around?

Chris: No, no, I'm not saying that's a reason to keep him around. I'm saying it still has long-term implications for maybe that co-founding team and his wizardry. But he's got less visibility, and maybe he's less prominent in the public consciousness, so maybe you can get away with that, even though people see what he did as betrayal.

Emil Michael: Well, you know, look, Sam seems pretty magnanimous. He could say: hey, this was a mistake, and we're going to have a board structure that's more consistent with our mission.

Chris: And that this tension is healthy, that this...

Emil Michael: That the tension is healthy. Right.

Chris: A team of rivals, even, right? Like, I want you around to temper my commercial instinct. I think that's a really...

Yaniv: Yeah. Actually, you mentioned Sam being magnanimous. It raises a question for me, which is: who is Sam Altman? There are sort of two towering characters in tech at the moment, right? There is Elon Musk, and there's a lot of commentary about who Elon Musk is and what he's like, his strengths and his flaws, whether you like him or you don't.

So far, Sam Altman is this golden boy. He just seems like a really nice guy. He really does. But is he as he appears? And the reason I ask is that Paul Graham famously identified him, when he was basically still a kid, as a generational founder. He has to have some sort of absolute steel in him, some absolutely crazy ambition and drive.

And what does that actually look like? Is he what he looks like?

Emil Michael: You know, I've met him. I helped him sell his first company, Loopt it was called. It was, everyone's tried this, a find-your-friends-on-a-map kind of deal. It wasn't a home run, and he sold it. And this was, gosh, I want to say 2009, when I was a free agent and someone introduced me to Sam.

And I didn't have enough interaction to form a deep opinion, but he was generous, kind, smart, accepted the failure of that thing, and moved on. And every founder I've talked to who interacts with him, obviously more YC founders than not, says he's helpful. He's a founder guy. The ambition, the steel you're talking about, it's obvious to me only from the results at this point.

But everything he's done since that first Loopt company, he's sort of 10x'ed it and more. I don't know, I can't think of someone else of whom I could say, well, that would be a better leader for that kind of mission. When you see Travis, you're like: that's the guy for that mission.

No doubt about it. When you see Sam, I feel the same way.

Chris: You're right, Yaniv, that you couldn't have two more diametrically opposed personalities at the top of the tech consciousness right now. Elon seems almost trollish at this point, and doesn't give two fucks about anything, it seems.

You know, at least on Twitter. When you see him in interviews on stage, he still has that same awkward earnestness he's always had. It's like he's got Dr. Jekyll and Mr. Hyde going on. But yeah, Sam just seems thoughtful and almost, like, sweet, you know, boyish.

The way he talks is almost naive, but then when you hear what he actually has to say, it's clearly very deliberative, very thoughtful. Knowing nothing about him, I just like him. And I think that's the point, and that's one of the main reasons people are so upset and so concerned, because you want somebody who is effective at executing, effective at communicating, and seeming to have their heart in the right place in charge of a company like this. He's like this kind of miracle of the right person to be in charge of a company like this, in terms of putting everybody at ease.

Emil Michael: The other thing about this industry, which is very different, is the amount of money required, because of the cost of the GPUs. If there was not that much money required, all these companies might be a little quieter, because they wouldn't have to be out on the road. You know, Anthropic got 2 billion from Google and then billions more from Amazon.

The numbers are staggering, and therefore you have to have a public profile. You have to be out there doing stuff. Your product has to be known and cool. Your team really matters. Whereas had this been a tech thing where you don't need that much power or that much money, all this stuff might not be as public and as big in the consciousness as it is now.

Yaniv: That's a really good point. That's a really good point. And to bring it back to OpenAI: why did the nonprofit allow this for-profit entity within it? They probably see it as a bit of a cancer, right? Why did they allow that in? It's because they needed money. They needed the money. You're absolutely right.

Chris: So this is a good segue. Emil, you just started broadening it out to the other startups in the space, right? So let's talk about that. We are a podcast to help founders, investors, and operators think about their own companies, right? So let's use this as an instructive thing for anyone who might be listening. Given this shakeup, whether Sam comes back or not, what are the implications for companies building foundation models, and for companies building applications on top of these foundation models?

Emil Michael: Well, I think the clear implication for any company that's building a foundation model is that they have to go relook at their charter. I don't mean their formal charter; I mean, what are they aiming at? How are they dealing with or thinking about safety, deceleration, acceleration? And align the company, and admit whatever it is they have to do to get there. So if you're Anthropic, which left OpenAI because you thought they were going to do scary things, you have this much more guardrailed approach. But then again, now you need billions of dollars, which is potentially in conflict with that.

So I would suggest that all these companies rethink: what do you have to do to succeed, given the need for money and given what you've hired people to do? Be more honest about that, to yourself, and align your board members and your investors accordingly. It's a time to rethink all that, because obviously misalignment inside of OpenAI caused a fissure which, even if Sam came back, is going to set that company back by many months, if not a year.

Chris: But that suggests some of our worst fears from earlier in the episode, right? Which is: because of the scale of money we're talking about, you need a commensurate scale of commercial success. And Emil, you're saying you need to be honest about what it's going to take. What you're implying is that it's going to take enormous amounts of capital and enormous amounts of commercial success, which means whatever lofty goals one of these companies might have, of being aligned and being thoughtful and what have you, might quickly go out the window, if they were there in the first place in some of these later companies.

Emil Michael: I guess I'm saying that because I don't know enough to know: is there a middle ground? What is the other path here?

Chris: I don't know. I don't know that there is another path here.

Emil Michael: The other thing is, in 2021 and before, when there was a zero interest rate environment, founders got dual-class control, they got board control. They were able to get a lot more, and to have the investors compete on how well they could treat a founder. And then once the economy for startups changed in '22 and '23, investors got more power.

They got more board seats, and founders got less actual governance power. This might swing it back. As crazy as it sounds that one incident could switch it back, because it's not like the funding environment's back to where it was in '21, it's not even close, this does give you as a founder, if you are hot or your company is hot or your idea is important, a little bit more power to retain control. And retaining control has several dimensions to it, but you could do that, and you can point to the example: the most important founder in the world, everyone loves him,

and he still got thrown out of his company. And there's nothing you could tell me; the words you say don't matter. I need actual contractual promises to make sure that doesn't happen to me.

Chris: You know, I'm almost a maximalist when it comes to founder control, right? If the founder or founders created a vehicle or a bandwagon onto which other people hitched their wagon, I'm mixing metaphors now, other people jumped on board, I think it's theirs to run into the ground.

I'm being a little bit hyperbolic here, but short of complete malfeasance or what have you, I really think the founder gets to make the calls. And this is maybe personal bias from personal experience, Emil, but it's like, you know, who the hell are you to tell me how to run the company?

You jumped on board; get on or get off. You know, I understand all the commercial realities and fiduciary responsibilities and stakeholder management and all of that stuff. But when it comes down to it, if it's on the bubble, the founder, I think, gets to win. And I'm happy that that would be an outcome here, Emil, which is that founders get to negotiate more of that upfront.

I think that's better for founders and by extension, better for the ecosystem.

Yaniv: I don't totally understand why this would give founders more power per se, so let me sort of break this down. And again, this is the big lesson, right? When you bring on investors, there's a great book, Venture Deals, which I've got under my microphone now, let me bring it close to my face, and it talks about the fact that when you are raising capital, all the negotiation about what's in the shareholder agreement basically comes down to ownership and control. Those are the two things you are dealing with. And I think for a while now, the focus has been a lot more on the ownership piece than on control. Like you said, Emil, there used to be the dual-class shareholder structures and so on, and people have put that to one side.

They're like: it's a more investor-friendly environment; we just want to get a decent valuation and so on. But what this highlights, despite the fact that it's an exotic structure, is that control is important. The only way you keep full, 100 percent control is if you take no outside capital.

If you own 100 percent of the business, you control it. As soon as you take in outsiders, there will be some control provisions. And what you can do is negotiate them. So while, and I'm happy to be contradicted, I'd love to hear your thoughts, I don't see this situation giving founders more power.

What I think it might give them is more incentive to trade for those things. So when they are raising, they will not give away as much control, even if it comes at the expense of ownership or valuation.

Emil Michael: Yeah, agreed. But there's a great Naval video out there, and by the way, he had an issue with Benchmark too, surprise, surprise. And his lesson after a long time, and this is Naval, right, he's sort of another luminary, is: trade valuation for control on the margin.

Because ultimately you have to put a value on control, and if the investor wants something for that, well, don't be cheap about it. That was his point, right? Obviously that's a little bit of a generic statement, but a lot of great people think that way. It's important, it has a value, and it should be thought of just like any other term. And my recommendation to founders who watch this is: pay attention.

The details matter. It's not just dual-class shares, it's not just board seats. You could put a thing in the stock purchase agreement that says for every one board seat the investors add, I get another one, so you have an evergreen sort of board-control provision. You could put in notice provisions, so that notice of a meeting to change CEOs has to be given 30 days in advance.

There are a lot of things you could build in so that rash decisions or coups or things like that can't happen. And I think those are worth paying attention to.

Yaniv: I think the reason, Emil, it can be easy to forget about this, and look, I'm not as close to the scene as you are, but my impression is that even in the zero interest rate times, the control stuff wasn't really negotiated by most founders, especially new ones. And that's because control only matters when things start going wrong, right?

So you have this sort of blind optimism: hey, we're investors and we're founders, we're on the same side of the table, we all want the same thing, isn't this wonderful? Then trading off valuation for control doesn't seem sensible, because what you're really doing is buying insurance. You're like: hey, everything is wonderful, why would I buy insurance? I'm going to live forever, I'm never going to get sick. But I imagine that the founders who were looking at dual-class shareholder structures and so on in 2021 were repeat founders who got burnt previously. I think it's one of those things where, until you have a scrap with the board or with investors, it doesn't seem that worthwhile.

And once you do, you're like, Oh, okay. Now I understand why control is so important.

Chris: Yeah, you know, I've grown to be a real fan of Mark Zuckerberg. You know, it's not a popular thing to say. But...

Yaniv: It's a slow burn.

Chris: No, well, you know, early on I was actually quite antagonistic toward Facebook. I was part of the data portability movement, trying to get them to open up their data, and he and I actually exchanged words on the subject.

But over time I realized: you don't bet against Mark Zuckerberg. He is really, really fucking smart. And I think one of the most underrated, smartest things he's done is maintain control of that company all throughout the fundraising process. Much of what Facebook has achieved, and I think will continue to achieve, is rooted in the fact that Zuckerberg has final, ultimate control of that company.

That's what I mean when I talk about founders being in control and investors really taking a back seat. That's, I think, an example where he should be allowed to pivot and rotate that company all day long. It's his; he created it, and it's off the back of his ideas and his execution.

And, you know, if the VR thing turns out to be a fool's errand, so be it; that's his errand to take. That's been an important and underrated part of Facebook and Zuckerberg's genius.

Emil Michael: Yeah, and look, remember that Facebook and Google actually added more control for the founders after they were public. Google created a new class of shares after they had IPO'd; Mark Zuckerberg increased his ratio of votes afterwards, because they were so intent on making sure that nothing would happen to them.

And they've been right so far. Yes, he had that terrible quarter on VR, and some normal board might have gotten weird and fired him, you know.

Chris: Right.

Emil Michael: The same thing at Google: they missed the cloud, AWS kicked their butt, and some normal board might've said, we need new leadership.

So are there bad sides to that? Yes: FTX. Every founder will get that thrown in their face when they go in next year and say, Sam Altman, and they'll hear: what about Sam Bankman-Fried?

Chris: Yeah, but there's a difference between corporate governance and financial governance, right? And governance over fintech. You can have all the governance over the finances that you like, but in terms of decision-making, I think it's a different kettle of fish.

So just a quick late-breaking update here, guys, before maybe we share final thoughts. OpenAI is apparently in negotiations to reinstate Altman, and it's hit a snag over the board's role. Apparently, company leaders want the board removed, but the directors are resisting. Microsoft's Nadella is leading high-stakes talks about his return, and basically they're quibbling about his role on the board and the structure of the board, and they're at an impasse right now.

Emil Michael: By morning, you'll see some announcement on Monday US time.

Chris: So as this is going live, we may already have the answer to it. Forgive us for being...

Emil Michael: Well, what are the odds? What are your odds of Sam coming back?

Yaniv: You first, Emil. You first.

Emil Michael: 90%. Nine-zero, 90%.

Chris: I think with Microsoft and the Silicon Valley elite VCs at the table, who are biased towards operators and commercial success and deeply invested in the success of that commercial entity, and with Sam's general popularity outside and inside the company, I agree with you. I tend to think it's in the 90 percent zone.

I think that board had very little credibility before and has no credibility now. And if they do not take him back, he'll just go start another one. So I don't know what position they're negotiating from, because I think they're in an incredibly weak place.

Yaniv: Well, here's the thing. Okay, so I think the board is toast. The probability of the board going is well north of 90%. The probability of Sam returning is, I think, a little bit lower, in my view. And what it comes down to, and part of the uncertainty is simply my own ignorance here, is that I would really like to know what the OpenAI Foundation charter is.

And again, this comes down to governance; this really revolves around the non-profit charter. It may fundamentally be the case that, as incompetently as the board acted, they were actually acting in accordance with their duties. Again, this is a not-for-profit; this is not a fiduciary duty.

This is a duty to the mission and to the charter. If the charter fundamentally gets in the way of Sam being able to do what he wants, he won't come back. There needs to be a fundamental way of saying, Okay, when Sam comes back, it's not just that the board is gone, it's that the shackles are off.

And I don't know if that's possible in the current governance structure, so I'm gonna peg it at...


Chris: So this is really interesting, actually. I have two thoughts on that. The first is, I think one of the main reasons that he may not come back is on him. If he doesn't come back, I think there's a significant chance that it's his decision, that he wants to go, you know...

Yaniv: Yeah, that's... yeah.

Chris: And make his own calls.

Yaniv: That's my point.

Chris: Right. And I think the second thing that you're alluding to, which I want to unpack, is that there's an argument to be made, whether they executed it poorly or otherwise, that the board has done their job. If the charter is to guard against reckless actions on the path towards safe AGI, you could argue that Sam coming back and the board getting fired is a failure, right?

Because they will have been punished for doing the thing they were asked to do. They did it poorly, they executed it in a hasty way, but that is actually in and of itself a bit of a concern, because it means the safety checks on OpenAI, and on AI more broadly, are lifted, and any semblance of, hey, we're doing this in a measured way, is almost shattered, I think.

Other than the fact that Sam himself is thoughtful, the governance structure failed to rein in what they believed to be unsafe, reckless behavior. That is a very concerning eventuality.

Emil Michael: Well, let me be a devil's advocate and ask: is there a nuanced place where Sam says, no, I've been lobbying governments to put guardrails here, I've been saying we need alignment, and OpenAI itself can't abide by alignment rules that don't exist yet; that's why I went before Congress. So the mission of OpenAI is still the mission, but we have to be commercially ready to be the best there is while advocating for industry-wide rules.

Chris: I think that's a little bit disingenuous, because the argument from him has been: look, our company is set up in a very special way to protect against reckless behavior, and because we're such good guys with this great governance structure, we also want the government to enforce that almost universally, you know, within the country and within the world.

But by saying, well, actually, I take some of that back; we don't need a clever governance structure, and we don't need to protect the independence of this board who fired me, so let's get rid of that or reverse it, and we'll now rely only on the government to protect the world from us... that feels like a compromised situation.

Emil Michael: Then let me counter: what have they done that's unsafe so far?

Yaniv: We don't know.

Chris: Well, here's the thing: some people thought this was the result of them having discovered AGI and freaking out. Recently, I think in the last two weeks, there were people tweeting out of the company saying things like, three times I've been in the room where we have pushed the veil of ignorance back, and it just happened again.

And people are speculating: what the hell did he just see? What is going on back there? And how fast are they going to ship it? So we don't know. We don't know what they've cooked up back there.

Yaniv: Let me play conspiracy theorist... devil's advocate. This is not my position, but if I were an evil genius who wanted to accelerate an unsafe AGI, it might look exactly like Sam Altman's playbook. The question is, who is Sam Altman? He's this lovely guy.

Everyone trusts him. He's like, oh, I'm working with government. Which, by the way, is exactly what all these big companies have been doing with their tax dodging for years. They're like, you can't expect us to unilaterally disarm; we're working with government; we want to pay more tax; we just can't do it on our own. And it's like, well, yeah, you could. You could. Right? So you play that game. You seem like a really nice guy, you seem really cooperative, you know that's never going to go anywhere, and in the meantime you basically develop AGI. You become so lovable that you think the board can't fire you, you've got all the best science, you've got all the best people, you've raised huge amounts of capital. That is what I would do. I would be Sam Altman, if I had the talent to do it. Heh heh heh.

Emil Michael: I read the same quote you did, Chris, about the veil of ignorance being lifted for the fourth time in the last few weeks, and then there was this app store developer conference, which seemed to have been a catalyst for this thing too. Let's suppose the board wrote a decision saying: we will not deploy this; Sam, you will obey that we're not deploying this, or something.

They could have done that too, not to go back to the governance thing. But I think you were raising the point, Yaniv: when he comes back, does that mean the guardrails are off?

Because...

Yaniv: It would have to mean that. I don't think he would choose to come back otherwise.

Emil Michael: Yeah. Right.

Chris: At the very least, it means the guardrails proved ineffective this last time around, right? But I mean, again, you could argue that they made a hasty decision that was ill-advised, but they ostensibly acted as guardrails in their, you know... presumably. We're making a lot of assumptions here.

Emil Michael: Presumably, presumably, he was going to do something that they thought wasn't ready, that the world wasn't ready for.

Chris: Right. So in their best judgment, they made the call, if all the speculation is to be believed. And so that didn't work. At the very least, you could argue that the board failed to do their job in that one case, and ask how things will change to make it better next time.

You could argue that the idea that they should do that job at all has been undermined, and question whether any future board can continue to do that job after a restructure. So, we are speculating a hell of a lot, but I think one of the takeaways here is to pay close attention to what happens next and to ask: what happens to those guardrails now? Do the people who remain in place have any legitimacy, and can those guardrails continue to operate?

Yeah, it's going to be fascinating, even just over the next couple of hours and couple of days, to see how this plays out. And of course, over the coming years.

Emil Michael: Yeah. And back to startups, and to your advice: it is a moment to make sure you're saying the things that you want to do, and that you're actually doing those things, and it's not like a shell game. I don't know if you saw this letter that General Catalyst wrote, which was staggering, by Hemant Taneja, who is one of the most well-known venture capitalists around, and he had 35 VC firms basically sign sort of a decelerationist manifesto.

And I'm like, what does that even mean? They're going to only invest in companies that agree to... what? So there's a lot of fogginess in the air here that I think has got to get cleared out, and the Sam Altman thing will maybe be a catalyst for that.

Chris: You talk, Emil, about knowing what game you're playing, and this is almost a theme of the whole podcast. We talk a lot about people running small businesses who think they're running startups, and people who think they're running startups who actually don't understand what venture scale is, and who are making hedging decisions and compromising their own vision for what they think is a pragmatic or realistic outcome.

And I think this is true in AI, as we've been saying throughout the episode: if you're pretending that you're playing some kind of aligned, altruistic, non-profit game in a game that requires enormous amounts of capital and involves global players who are not interested in deceleration, you're fooling yourself. And as a society, we need to recognize that many of these conversations about guardrails and slowing down are actually just academic. We need to brace ourselves for an accelerated path towards alien intelligence, intelligence that is not our own.

And in many ways, that's Sam's entire point: we are releasing this early and often so that the world can get comfortable with it and we can see what happens when it makes contact with reality. And I, for one, think that's almost the only move at this stage. Because otherwise, they'll spring AGI on the world and God knows what happens.

Yaniv: But let me say this: I don't believe that decelerationism is the solution, but there is a real problem here, right? Tech has been important for a long time economically, from a productivity point of view, but the last couple of years are the first time tech has really become important civilizationally and geopolitically. I actually think the first instance of this was with TSMC over in Taiwan; it's like, oh, okay, chip manufacturing is suddenly a global geopolitical issue. And now AI is the second.

It is not inconceivable, it's not totally crazy, that we could create an unaligned intelligence that destroys human civilization. That is possible. We are dealing with nuclear fission here. Now, just turning your head away and saying, well, we're not really going to do it, is not a solution. It's a real problem.

And the only thing I'd say about it is that this shouldn't just be cheerleading towards the future; we still need to be thoughtful. I don't have the solution, but to pretend there is no problem, no risk, I think is foolish as well.

Chris: Yeah, one last quick point here. We've been talking about racing towards AGI and acceleration, and I want to be clear that there's a lot that can happen in very short order, long before AGI, right? I really think this can start to accelerate polarization and the breaking apart of democracies, the way social media has been contributing to that. Tristan Harris, of The Social Dilemma, has been advocating against social media, and now he's advocating against AI, or at least warning about its dangers. He talks about how social networks were humanity's first mainstream contact with AI, because the recommendation algorithms are a form of AI and they are at the heart of the social media problem.

And so, if you assign any blame to social networks for the polarization of society and the dumbing down of the electorate, I mean, this is only going to accelerate. You're talking about just using AI in political and military warfare, and this can change the status quo long, long, long before you have Terminator 2, as you touched on at the very beginning of the episode, Yaniv.

So these are very short-term concerns. Very short term. And when you consider the exponential curves at play as well, we're going to learn about this from personal experience over the next, let's call it, 6 or 12 months. And it'll be interesting to see how it plays out.

Emil Michael: Sadly, we are being governed by fools. So, I mean, I do like your confidence, and this is sort of the part of me that's scared. Like, okay, I understand logically, from a game theory standpoint, that you have to outrun the next guy, because there's no other choice. But then I hope there's a governance piece on top of this technology.

And I hope it's as broad as it can be, to capture people with malicious intent, but I can't see that happening. Do you see that capability anywhere in government?

Chris: Again, it's a multi-layer problem, right? There's corporate governance, country governance, and then global governance. And then there are the unaligned or non-state actors who are always going to slip through the net.

And maybe we could argue that it takes too much capital or too much intelligence for them to get it done, but they can also use some of the existing tools. Those non-state actors don't need state-of-the-art AI to cause havoc, right? They don't need to be at the bleeding edge.

And so really, it's almost an all-or-nothing game. You need a competent global governance model, and at the rate everything's moving, and is likely to move, at the governance level, I just don't see it. I hope I'm wrong. I hope we're wrong, and something miraculous happens here to guide it in the right direction.

Yaniv: Yeah, well, there's this old curse, right? May you live in interesting times. I think we are definitely living in interesting times.

Emil Michael: Okay. Well, thanks for having me on, and thanks for this talk on a Sunday. We'll see who's right tomorrow, or sometime soon, on the Sam Altman piece. And the rest of it was a good conversation about things to look out for. Thanks for the time, guys.

Yaniv: Thanks, Emil.

Chris: Yeah, thank you for joining us, Emil. And you know, we're loving this format of reacting to the news, so stay tuned to the podcast; you may just see more of Emil and more conversations like this. Very excited to see where this all goes.

And don't forget: if you've listened to even a few of our episodes and gotten a lot of value, you have implicitly signed up to The Startup Podcast Pact, which says: please rate us and review us in your favorite podcasting app, share us on your favorite social network, and tell your friends. It helps us grow the show and ultimately helps more founders, operators, and investors build better Silicon Valley companies.

Yaniv: Thanks a lot, Chris, and thanks for joining us, Emil.

Emil Michael: Good to talk to you. See y'all.