OpenAI DevDay 2023.
A conference that sent reverberations throughout the startup ecosystem worldwide.
Welcomed back on The Startup Podcast to discuss and debate it all are Jeremiah Owyang, general partner at Blitzscaling Ventures and founder of Llama Lounge, and Ben Parr, founder of Octane AI. Both are AI insiders deeply embedded in the Silicon Valley startup ecosystem. With Chris and Yaniv, they explore the top takeaways from OpenAI DevDay.
From why GPT-4 Turbo is light-years better than GPT-3.5 and how to make your own super-customized ChatGPT, to quality-of-life improvements for developers and tinkerers at home, interoperability, multi-modality, switching costs and more, this episode has it all!
The quartet also share hot takes on the future of AI startups and what startups in the AI field should be thinking about doing right now!
Stay on the pulse of AI, don't miss this episode.
Episode Links:
The Pact
Honour The Startup Podcast Pact! If you have listened to TSP and gotten value from it, please:
Key links
Learn more about Chris and Yaniv
Reacts: AI Insiders Discuss OpenAI DevDay 2023 - Everything You Need to Know!
Ben: There's a reason why OpenAI is the fastest app to reach 100 million users in history. Things like blockchain have always been blocked by horrible UX, a real difficulty for users to get started.
Nothing is easier for a normal human to use than chat. A grandmother can go and, like, chat to something and be able to go and use it.
Chris: Hey, I'm Chris.
Yaniv: I'm Yaniv.
Jeremiah: Hey, I'm Jeremiah.
Ben: I'm Ben,
Chris: And today we have a very special episode where we're going to react to the OpenAI DevDay announcements that happened on November 6th, 2023. They announced some awesome stuff: some incremental improvements, some stuff that maybe changes the game. So we wanted to get two of Silicon Valley's best, most involved in AI and the AI ecosystem:
Jeremiah Owyang and Ben Parr. Old, old friends of mine. You might've heard them on the show before. Hey guys, welcome to the show again.
Jeremiah: Hey, welcome back.
Ben: It's a party!
Jeremiah: The last time we were on this show, Ben Parr said some crazy stuff, that there's going to be a billion-dollar company with three employees. And I put that on LinkedIn and it went viral, over a million views, which is a lot for LinkedIn. And a lot of people got really pissed off. So I really wonder what kind of things Ben Parr is going to say that piss off the world today.
Chris: Yeah, that episode and that idea kind of went viral. People have written entire academic papers and studies, and it's been included in books and things. So yeah, I think when this crew gets together, we uncover some brain-bending facts and trends about AI.
It's pretty cool.
Yaniv: So hey guys, I haven't actually met either of you before, so I'm like the awkward new guy at a party, and I'm just gonna try to ingratiate myself.
Jeremiah: Oh, hey. Pull up a chair
Yaniv: maybe we can say now with
GPTs and all of these agents that we can have a zero person unicorn, see if we can get that to go viral.
Chris: One-up ourselves.
Jeremiah: Possible.
Chris: But before we get started, actually, Jeremiah and Ben, would you mind introducing yourselves? A little bit of history, but most importantly, of course, how you're involved in the Silicon Valley AI ecosystem now.
Jeremiah: Hey, I'm Jeremiah. I've been in Silicon Valley for 27 years. I was an intern in the dot-com era, so I've been through five waves of tech, and I really try to get in front of the latest technologies. My first investment in AI was in a company called Octane AI in 2017.
And since then, I've joined as a GP at Blitzscaling Ventures, along with Chris Yeh and advisor Reid Hoffman. I'm running an AI fund, investing in AI startups. I also run an AI event called Llama Lounge. I did seven events this year. At the last one, two weeks ago in Palo Alto, there was a line around the block and around the corner to get in.
We had security issues. Too many people were trying to get in. So it's one of the most popular AI events.
Ben: It was a fun time. I went to visit. So, hi all, I'm Ben. I was the editor of Mashable at one point, which is how I know Chris and Jeremiah. But in the AI world, I started an AI company back in 2016 with my co-founder, Matt Schlicht, called Octane AI, which Jeremiah is an early investor in. And I've been in AI for nearly a decade, starting out on the very first chatbot platforms in the very early days, long before large language models were readily accessible. We powered the chatbots for Jason Derulo and Rick Ross and Maroon 5 and L'Oreal and most of the major celebrities and a couple of big brands.
These days, Octane AI is a profitable AI company that powers AI for thousands of e-commerce brands and is used by lots of huge Shopify brands: Jones Road Beauty, Kizik, and so many others. And yeah, Momofuku, there's a lot. And so we have been powering AI, and working with OpenAI and using these tools, forever.
And I've started doing some more investing into early-stage AI startups as well. So, everything AI, I've been hands-on and on the ground for almost a decade.
Chris: Awesome guys. So who wants to take a stab at talking about what actually got announced before we dig into the details and what the implications are for founders and investors?
Jeremiah: Well, I wasn't physically there. Ben Parr was.
Ben: I will go first, since I was physically there, and I think I was one of maybe 450 people who got the invite, for which I do feel honored. Thank you, Sam, and the OpenAI crew. They clearly are going down that Apple route of announcing major things in the same way that Apple would with WWDC.
The first announcement was GPT-4 Turbo, which is a much more advanced version of GPT-4. You have a context window that can handle basically hundreds of pages.
Jeremiah: 300, it said.
Ben: It's 300 pages of a book, essentially, in terms of, like, the amount of content that it can go and create and process.
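For context: OpenAI announced a 128,000-token context window for GPT-4 Turbo at DevDay. A rough back-of-the-envelope conversion, assuming roughly 0.75 English words per token and about 300 words per printed page (both ballpark approximations, not official figures), lands right around the number quoted on stage:

```python
# Rough conversion of GPT-4 Turbo's announced 128k-token context window
# into book pages. The words-per-token and words-per-page figures are
# ballpark approximations, not official numbers.
context_tokens = 128_000   # announced GPT-4 Turbo context window
words_per_token = 0.75     # typical for English text
words_per_page = 300       # typical printed book page

pages = context_tokens * words_per_token / words_per_page
print(round(pages))  # 320
```

Roughly 320 pages, in line with the "300 pages" figure from the keynote.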
It's getting bigger and bigger and cheaper and cheaper, which is another one of the many announcements. There's a lot. They have, like, a new thing, Copyright Shield, to help protect copyright for those larger organizations who are concerned about it. The biggest one that people are talking about is the GPT App Store and GPTs.
That is the ability to create your own super-customized version of ChatGPT. I just made one that I'm going to have to post on my Twitter shortly, called VCGPT, that will evaluate your startup deck and investment as if it were a VC, in a very discerning way. It's one of many experiments that I'm running.
My co-founder, Matt Schlicht, has actually launched several. He just made and announced one called Bloggy. And it is super detailed, and it will write you an incredible blog post after reading your blog or your website. And it works remarkably well. I've been using it to write my newsletters.
Chris: The major themes for the ChatGPT improvements were really about making it better for developers, right? With function calling updates and JSON mode, it returns structured content for your API calls, plus reproducible outputs, lower cost, and faster response rates.
These are all quality-of-life improvements, and really, in many ways, necessary improvements to make the API more useful for third-party developers. These are the things that I personally ran into when trying to build an app powered by the GPT API; it just eliminated a whole bunch of problems that we were wrestling with.
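As a sketch of what those developer-facing changes look like in a Chat Completions request (the field names follow OpenAI's API as announced at DevDay; the reply string below is a stand-in for a live response, not an actual API call):

```python
import json

# Request parameters showcasing the DevDay developer features:
# JSON mode forces the model to emit valid JSON, and `seed` makes
# outputs (mostly) reproducible across calls.
request = {
    "model": "gpt-4-1106-preview",               # GPT-4 Turbo model id
    "response_format": {"type": "json_object"},  # JSON mode
    "seed": 42,                                  # reproducible outputs
    "messages": [
        {"role": "system",
         "content": "Reply in JSON with keys 'verdict' and 'reason'."},
        {"role": "user",
         "content": "Evaluate this startup pitch: an AI for dog walkers."},
    ],
}

# With JSON mode on, the model's reply is guaranteed to parse as JSON.
# Here we parse a stand-in string in place of a real API response.
reply = '{"verdict": "pass", "reason": "crowded market, no moat"}'
parsed = json.loads(reply)
print(parsed["verdict"])  # pass
```

The draw for third-party developers is that `json.loads` no longer fails intermittently on prose-wrapped replies, which was exactly the class of problem Chris describes wrestling with.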
So that's very cool. But for end users, for people who want to see a material end-user experience change, the GPTs were definitely the name of the game, right? And just to unpack that a little bit, they're essentially custom flavors of ChatGPT. And you can get in there and really customize them. They showed examples from Zapier and things, which are really hardcore modifications of ChatGPT. But if you are a non-developer and you want to create your own GPT, you can actually do it with no code. You can just chat to the thing and ask it to behave in certain ways.
And that is goddamn amazing. And we've been playing with that for the last week and a half. Right, guys?
Jeremiah: Yeah. And I've been playing with yours, Chris. I was just talking to it about startups. There are some other features too. They announced a marketplace that will have revenue share. And this is a really important business model development. One of the things that we look for at our firm, Blitzscaling, is companies that can scale really quickly. And so they have a marketplace where, after you guys create those GPTs (and I was playing with yours, Chris, and yours, Ben), you could upload it.
And then there could be a revenue share where you would, I would guess, offer it as a premium service and I would pay a few dollars for it. So this is an example of user-generated content; it's a fast way to scale the marketplace. There are other features too. I saw GPT-4 Turbo just improved.
So yesterday I uploaded an Excel sheet. I don't think that was a feature prior. And then I asked it to analyze the data and graph it. And it did it just like that. And it created articulated bar charts by time.
Chris: That's incredible.
Jeremiah: Yeah, it's really incredible.
Chris: One of my specialties is building ecosystems and marketplaces, especially developer ecosystems and marketplaces. And anybody in an ecosystem wants three key things: utility, distribution, and revenue. Of course, the first thing OpenAI did was deliver generative AI that works, which just blew the lid off everybody's thinking around AI. That utility is incredible, offered through an API and through a UI. And as you just said, Jeremiah, they're now adding the other two pillars: distribution through that marketplace of GPTs, and, as they alluded to, revenue through the revenue share within a month.
Of course, these are all rudimentary versions of those things, but they now have the three pillars of a vibrant ecosystem, and nothing builds a moat around a business like an ecosystem. And while everybody else is now going to chase them for the generative AI part, Apple, I'm sure, and Facebook and what have you, the first to scale in a new technology paradigm is going to be hard to beat.
Ben: There are a lot of interesting open questions with how the marketplace will work. How many people are actually subscribers to ChatGPT? Because you gotta be a subscriber in order to access the GPTs. That's how they do the RevShare; it's kind of like how Skillshare works, where you get a percentage of the overall usage.
They haven't released too many analytics yet. I've seen a couple. My co-founder has a popular one called Simpsonize Me. It was his first one. A bunch of people have used that to turn their photos into versions of the Simpsons characters. There are a lot of questions like that in terms of...
what are the ones that are going to really take off, what are the ones that are going to get featured, and how much could you actually make? Because if it's a couple hundred dollars for most people, you're not going to have too much investment. But if someone has a popular one and they talk about how they made $10,000 their first month from OpenAI, you're going to see a flood of people trying to do every GPT they possibly can.
Jeremiah: There's still room, guys, for other marketplaces to emerge from Anthropic, Inflection, Google, and Amazon. They don't have a lock on the market. Yes, they have 100 million users, many on freemium. But there's going to be room, and there's going to be an opportunity for interop of agents, too. So, we've seen this game before.
Ben: I kindly disagree in one part, which is that the stuff that you do in the GPTs isn't easily interoperable. Now, actually, if the prompt works in the same way in another area, I guess it is kind of interoperable, but not in the way where, like, you could go and have the GPT agent working in Anthropic's or something like that.
But most of it is not code. Most of it is just really detailed prompt engineering, and that is easy to go and move over.
Jeremiah: Exactly my point.
Ben: I see it a little bit more like the Apple and Android ecosystem, though. Where, like, yes, there is room, but if you build an app, a mobile app, the first is always going to be iPhone, and it's going to be the same thing here.
Then, I suppose, you would go to Anthropic, assuming that their audience does keep growing.
Yaniv: I think it's very easy to make the analogy back to Apple, where you're like, okay, first they launched the iPhone. That was great. Then they opened up the App Store with RevShare and blah, blah, blah, ecosystem. And 20 years later, iPhone is still the premier platform.
But I wonder, whenever you make an analogy like this, if there's a risk of not noticing the differences. And I wonder if one of the big differences here is that there's simply no hardware, right? For Apple and Android, yes, they had the App Store and the lock-in from that, but the lock-in was much stronger than what something like OpenAI has right now with ChatGPT, which is: you can have a tab open with OpenAI and another tab open with Anthropic and another tab open with Google.
And so I think it's easy to say they're taking a leaf out of Apple's playbook and to say this is about creating an ecosystem, but does the ecosystem have the same strength as the ones that we're analogizing to?
Jeremiah: Early days, guys. the launch was November 30th, 2022. It hasn't even been one year. So anything can happen.
Chris: So before we dive too much into the implications, right, what might happen and how founders should think about this, let's talk about the experience of playing with it just a little bit. I think the demo on stage was pretty magical, but when I got access to it and jumped in immediately, it was breaking like crazy.
Everything was throwing errors: it couldn't save, I couldn't upload documents, it couldn't parse documents. And I pinged a few friends who work at OpenAI, and one of them said, there aren't enough GPUs in the world to handle the scale and demand that we're experiencing right now.
It's just impossible, like water bursting out of the pipes and things breaking everywhere. And it's still struggling to handle me uploading a certain number of docs. Are you guys experiencing that?
Ben: It's gotten better. In the early days of anything, it's gonna have those issues. There aren't enough GPUs right now, but the technology... it's like Moore's Law. It's not moving at the same pace, but they are getting better and more efficient.
GPU production is going up right now. So I do think things will get better, not worse. Now, this also depends on how powerful GPT-5 is, how many more models will be out there, and how popular those models will be. There'll be more bottlenecks like that, but it will eventually subside as everything catches up: both the improvements to the systems that companies like OpenAI are making, and the increase in chips and GPUs that is happening and scaling up right now.
Chris: Elon Musk said in a recent interview with Lex Fridman that the initial bottleneck was data, then it was GPUs. And he said very soon it'll be transformers, as in step-down power transformers. He said we're running out of them. And then he said it would be power.
He said there is an impending catastrophe of power, with the electrification of transport and GPU power and the electrification of everything else. I actually saw a speech where he talked to a gathering of the executives that run all these major power companies.
He's like, please make more power. Whatever your projections are, triple it or 10x it; you guys have no idea what's coming.
Ben: Can we just ask an AI to invent a new method of power? Just to be clear to everyone, I always have to remind everyone: the existing AIs that we're talking about right now are incredible, but you cannot ask GPT-4 to invent a whole new mode of power that humans have not invented or figured out, because it is only a predictive machine that is predicting the next most likely word, and it has no true intelligence, at least yet.
I do think we're going to get to the point, in the next, let's say, decade or two, hard to tell, where AI gets strong and good enough that it could start creating things in ways that humans simply cannot. And we're already seeing hints of this. AI can do deeper analysis and make predictions in things like gene therapy and biotech and some hardcore areas, and that's more pure machine learning.
But I want to get to a point where you just tell an AI: here is the information of the universe, we need to figure out FTL, faster-than-light travel, what's the step-by-step for it? And then it just comes up with it, and it gives us humans the instructions to do it. And then it can also tell us: are there actually aliens, and where exactly are they? That's what I want.
Chris: You and me both.
Ben: It's Commander Data, but in real life.
Jeremiah: Speaking of data, have you guys tried Pi?
Ben: Pie is delicious.
Jeremiah: Not that pie. Pi is made by Inflection AI, and, disclosure, we have financial ties to them. Pi has empathy. If you recall, in Star Trek: The Next Generation, over time Data develops, or emulates, or simulates empathy. And so when you talk to Pi, if you tell it you're sad, it'll say, oh, I'm sorry to hear that.
Let's talk about it. And it engages in a dialogue and its tone changes. As a voice-to-voice AI, it's very, very strong. I encourage you guys to check it out.
Yaniv: Yeah, I mean, one of the things that occurs to me is, the philosophical debate over whether something is truly intelligent or truly has feelings is a philosophical debate in the old fashioned sense, right? It's something for philosophers. I think the only thing that matters is, does it pass, right?
And the fascinating thing for me with just that last year (and Jeremiah, you're right, it's only been a year, but it feels like everything's accelerating) is that, I would say, ChatGPT easily passes the Turing test. And everyone's just taking that in their stride and just moved on, right? It used to be this holy grail, and then we leapt past it.
And everyone's like, yeah, cool, it passes the Turing test, but it's still not intelligent. And with the speed of everything, you're like, what will the next year bring?
Chris: Especially when you consider that the tool itself can help the tool builders build better tools, or build them on its own. So there are multiple exponentials at play here, right? Exponential computing power, exponential model development, the model itself helping to build itself.
The pace of innovation is nearing, I think, singularity. And as you said, Yaniv, we just smashed through the sound barrier, and everyone's like, oh yeah, whatever, what's next? The rate at which we're able to get comfortable with sudden, massive change is in itself, to me, a bit mind-boggling.
Jeremiah: Satya and Sam were on stage on Monday and they were both very clear, they're building AGI, artificial general intelligence, equivalent to human intelligence, and therefore empathy as well.
Ben: My,
Jeremiah: That's where they're headed.
Ben: My strong suspicion is that they will announce a physical product next year, probably a phone, because I think Microsoft wants to get back into phones. Not a pin. If anyone saw, this week there was another company that announced their AI pin, which was previewed at TED earlier this year and has gotten mixed reviews.
It's $700, and it'll do a cool, like, projection on your hand. But if you ask a question like, hey, what is this book? Buy it for me. You know, there's a whole bunch of stuff like: which place do you buy it from? What's the price? Did you actually get the best price? All sorts of things that normal humans really care about.
You also gotta pay for T-Mobile on top of it. But I think it'll be very different when OpenAI comes out with their device, because, you know, it's rumored that Jony Ive is the one designing it.
Jeremiah: Makes sense.
Chris: So let's actually use this as a segue into the next topic, which is: what are the implications for founders and investors? Looking at the pace of innovation that they're demonstrating, right, just over the last 12 months, I think we can now start to see a cadence, and I think that curve will bend up and to the right, as we said.
And, you know, as we've also said in previous episodes, fundamental technology platform shifts reshuffle the deck in terms of winners and losers. And so the question is, who can build the new native implementation to solve the next generation of problems? That suggests the hardware implementation may not be a phone. A phone works in the internet app age, but I think the AI pin or smart glasses or other form factors may be more AI-native than a phone.
Before we dive into that, I'm wondering if this AI technology shift actually does reshuffle the deck, because existing tools with massive distribution, like Microsoft's Office suite and Google's suite, are just implementing AI, and everything that's left will be OpenAI.
Is there a chance that OpenAI plus the existing FAANG, whatever, owns everything, and this new innovation doesn't get democratized and allow startups to thrive?
Jeremiah: I don't think so. I'm investing in early-stage startups, and one of the things we're looking for, one of the criteria, is access to exclusive data.
An estimated 80 percent of the world's data is behind a firewall. That could be a corporate firewall, a government firewall, a military one, or it could be a personal cloud. And right now, OpenAI doesn't have access. It hasn't trained on that. Yet. And so that's an opportunity. And many of those organizations that I just mentioned are hesitant to connect all their data into Amazon, Google, Microsoft, or OpenAI, although Sam Altman was very clear that they don't train off the data from the API.
There are still just concerns, and there are HIPAA reasons as well. So there are a number of industries (healthcare, wellness, finance, military, government in other countries) that are not going to want to plug into that. They're going to need their own models, with their own data in their own colocation centers, separate, and they're going to need different features.
So that's what I'm looking for.
Chris: Obviously, you know, when it comes to healthcare and health data, sensitive military data and what have you, let's put a red line around that and go, okay, those use cases are going to require something different.
Jeremiah: And finance.
Chris: Maybe finance, but I think OpenAI's response to that would be: look, just use our generalized model on your proprietary dataset and you're all good. Or, here's an enterprise version or a hosted version; upload your data here and we're good to go. They're already showing aspects of that, right?
With ChatGPT Enterprise, and with GPTs, you can upload your own dataset, which for most use cases many people will readily do. They're closing off all the exits faster than I've ever seen before.
Yaniv: One of the things that's been on my mind as we're having this conversation is the question of interface, right? Because GPTs, it's still just a chat interface. We're talking about OpenAI releasing a phone. We're talking about where the nexus of competition is going to be.
And, as incredible as ChatGPT is, the question is: is chat really the interface that we all want to have into all of this knowledge? Or is there a huge opportunity for innovating on the UX around this new technology, or on top of this new technology, rather than just saying it's about the tech and the data, which is where the conversation often goes.
Chris: Yeah, that's where my mind goes.
Ben: So I'm going to both agree and disagree here. I will agree that the UX actually does extraordinarily matter, but I will say there's a reason why OpenAI is the fastest app to reach 100 million users in history, and it is because of the UX. Things like blockchain have always been blocked by horrible UX, a real difficulty for users to get started.
Nothing is easier for a normal human to use than chat. A grandmother can go and, like, chat to something and be able to go and use it. Now, the only other kind of interface I can immediately think of is, like, sticking something directly into your brain. But, like, the reality is, humans understand language. And when this GPT stuff was put into the form of a website, like Jasper, it didn't take off in any meaningful way compared to ChatGPT. And ChatGPT is much preferred over Jasper, which I would say has spent a lot of time on the interface. So I'm just curious, is there another interface you...
Yaniv: Yeah. If I had another interface in mind, then I'd probably be off building it. I think there is an opportunity for genuine innovation here. And this is the first I'm hearing that OpenAI are planning to release a phone or a device. But what's the interface in that thing going to be?
Because you're right that, at some level, chat is an incredibly intuitive interface. It can also be quite clumsy. For example, prompting is an art, and actually not a very intuitive one for many people. And so I think there are a huge number of limitations to chat as an interface.
And I agree that just plugging existing paradigms on top of GPTs or on top of LLMs doesn't seem to work very well. Jasper sucks, and, like, every wrapper I've tried sort of sucks. But someone needs to come up with a way of harnessing these things and creating applications that have less friction, are more accessible, and allow people to access the full power of the model more effectively than chat.
Ben: Have you tried the voice version of, ChatGPT? It's incredible.
Chris: Yeah, the intonation and the voice; it even has glimmers of empathy and tonality and emotion. I mean, it's incredible.
Jeremiah: That's standard now, though. I mean, that's Eleven Labs. Pi has that too, I think.
Ben: No, no, no, Eleven Labs is the ability to create voice. With the OpenAI one, it's not just the intonation; it's the fact that you can access the full power of the model by just chatting with it. You can, like, press the button in your car, which I have done, and just have a back-and-forth about stuff.
And it's incredible, and it's stupidly useful.
Chris: Combining the power of ChatGPT's model with the intonation of those voices is just a magical combination.
Jeremiah: Jarvis, Samantha, you can see where this is going
Chris: Absolutely. It feels so tantalizingly close right now. And, Yaniv, I want to agree with you. I've said multiple times over the years that chat and chatbots are a terrible UX.
There are these beautiful, elegant, high-bandwidth interfaces that really matter for specific use cases. But yeah, ChatGPT hacked the human interface, which is language. And that's why it was so compelling and so exciting.
And it's kind of the thing we always dreamed about, as you said: Samantha, and the computer in Star Trek, and Jarvis, and so on. But I do think, yes, there's an opportunity to have proprietary data in certain use cases where the data is private. But I think an equally large opportunity is around the UX, and chat is not the optimal UX for many, many use cases.
Jeremiah: Here's my take, guys. It doesn't matter. Whatever modality you want, that's how it's going to interact. Today I was taking photos live with GPT-4 and asking it what's in the photos. It got it right 50 percent of the time. So it's got visual, it's got audio, it's got text. When I use Pi, I leave the mic open and I have a dialogue with it. The mic is just open for, like, an hour, and I just have a discussion. So whatever interface you need, we don't need to argue about which is the right interface. It doesn't matter, because it's going to switch. It's going to be all of them.
And as it gets more context around us, it's going to interpret and anticipate what we want. And that's the future.
Yaniv: You may well be right, yeah.
Chris: Yeah. When I say the UX is an area for innovation, I don't mean that there's going to be one UX. I mean, depending on the use case, depending on the scenario, and depending on the person and their context and their preferences, there will be a UX for that. Just like there's an app for that now.
But speaking about the device, I do think if they just go ahead and build a straight-up phone, maybe that's worth doing, so that there is something familiar for users to put in their hands and it's a small cognitive leap. But I do think that's missing an opportunity to rethink the UX in an AI-native way.
And the AI pin, if you guys haven't seen that, go check it out. It's not necessarily the final and ideal expression of an AI device; we're in the car-phone era of AI devices at the moment. These are the first. And so I think it is more interesting than just going straight to a phone.
I personally think the most incredible expression is ultimately going to be smart glasses. It's gonna see what you see, it's gonna whisper into your ear, it's going to tell you what you're looking at, and it'll present what you need on the glass in front of you.
As soon as we figure out the waveguide problem and the battery problem, I think we'll be seeing everybody wearing smart glasses.
Jeremiah: So I think the AI pin, it's rather large.
Chris: It's half the size of a cigarette box.
Jeremiah: So over time it'll just get smaller and smaller and smaller, be shaped and stylized kind of like a delta, and then it'll be like your Enterprise pin. Boop boop.
Jeremiah: Saad gets it. Saad gets it.
I don't think Ben Parr ever watched Star Trek.
Ben: Me? Never. You're right. I've never watched Star Trek.
Chris: of my friends in Silicon Valley, Ben Parr is the one that's seen the most Star Trek and nerded out about it with me the most. Everyone else just would look at me like I was weird.
Yaniv: Correctly so.
Jeremiah: I think a lot of people in Silicon Valley like Star Trek. In fact, the Federation headquarters was in Sausalito.
Ben: Until it was destroyed in '26... I can't remember which year it was. It's been destroyed more than once, I believe.
Yaniv: Okay. All right, nerds, we're getting back on track. Jeremiah, I'm really keen to understand: I was sort of suggesting interface, and you're like, no, AI will figure out the interface for you. So what is the nexus of competition? You're investing in founders, right?
And one thing that everyone is seeing with AI, or at least folks from the outside, is that there's obviously going to be a lot of money made here, but it's a scary place to invest, because no one knows where the moats are going to be, or where the differentiation is going to be.
So, what's some of your thesis?
Jeremiah: Yeah. So we are looking for exclusive data as one key thing, as I mentioned. We're also looking for startups that will have ownership of their particular niche. It's got to be a large enough market. And within that, we're looking for network effects, and we've identified six types in the AI space. We're also looking for products that have built-in viral effects, so they spread.
We've already identified these things in some of the AI startups, and then distribution channels. And then we actually go and score those, and there's a certain threshold we need to see. Now, there is a risk in this market, of course, that it's going to be winner-take-most, like OpenAI is going to do that.
But I think they have stiff competition. There are going to be multiple ecosystems: Amazon, Google, Apple, NVIDIA, and then many open-source versions hovering around Hugging Face. There is going to be a large ecosystem.
Just because OpenAI is first doesn't mean they're going to sweep everything. Yes, they're moving quickly, but this is code red at Google. This is the number one priority. Even at Salesforce, almost all the engineers have shifted over to AI to combat Microsoft on this as well. So there's going to be fierce competition.
This is just the start, guys.
Ben: I just want to note that we keep talking about a very specific portion of AI, which is generative AI, which is a very small portion of deep learning, which is a very small portion of machine learning, which is a very small portion of the overall AI puzzle. If you drew the whole thing out, it's this tiny piece that we talk about all the time, because it's the most visible and it's the newest innovation. But there's a lot of other innovation in other parts of AI happening right now, and I suspect we're going to be talking about other types of AI in the next couple of years. I'm doing a deep dive into causal AI, for example, and other ways not just to create text, but to understand our world in lots of different ways.
And so there's an infinite number of startups as a result of that. Now, within generative AI, if you're building a wrapper, it's going to be very hard. But there are clearly companies doing something on top of that in very specific verticals, as we spoke about, like healthcare or real estate or whatever else. Those are much harder to replicate, and they have additional requirements: there are specific things those industries want, and specific sales cycles. And that's not a thing OpenAI is going to do, and that's not a thing Google is going to do; they're going to be the APIs people use for that. In the same way, Apple's not going to go and build an app for every single ecosystem; that's what the App Store is for.
Jeremiah: One last point on that. Most of the large corporations I know are using open source LLMs and training them on their own data in their on-premise or co-location centers. I have definite knowledge of that in the banking and pharma space.
Chris: Yeah, I think what I'm saying is, Jeremiah, you're talking about the competition OpenAI has. And I think it's clear there'll be fierce competition at the foundation-model level and at the ecosystem-building level, and all of these big behemoths, in tech and in domain-specific areas like healthcare and defense, are going to fight it out. And there'll be multiple large winners; that much is clear. What is less clear to me, when you talk about AI generating its own UX, writing its own code, and connecting to the majority of human knowledge through APIs and public data stores, is where the real white space is at the application layer and the business layer, and how some of these behemoths that win the platform war can't have their equivalent of ChatGPT generate an incredible app to track your cycling, or to interrogate your medical records, or to plan your war for you. I'm now getting into more sensitive areas, but even everyday interactions, right, that are less sensitive, less regulated, and less what have you. I'm not saying it's not possible. I'm saying it's not as clear to me that, like previous cycles, it's a slam dunk that there's lots of room for lots of niche startups to build lots of specific apps when the model itself can generate half of this on the fly.
Jeremiah: Going back to my earlier point: 80 percent of data is not public. There's still a lot of data out there.
Yaniv: I guess, to fully articulate the sort of contrarian position: OpenAI might be in a much weaker position than they seem right now. You could say, well, okay, we have these open source models, and as you say, Jeremiah, they're being deployed everywhere, and they're getting better and better. And the ecosystem effects OpenAI has right now, I would argue, are still relatively weak. That's the point I was making before. It's like, yeah, they're rushing a device out. Do you remember when Facebook tried to make a phone? Because they were like, we need to control distribution into people's hands. And they failed, right? So you could say, well, OpenAI is basically a proprietary hosted stack. They've got great technology, but then again, open source is catching up pretty fast. Are they like Sun Microsystems getting their lunch eaten by Linux, where you're like, okay, the models are just table stakes? The tech for that, as incredible as it is, is out there. OpenAI might be a little bit ahead, they might be better at certain things, but everyone's playing that game, and the competition is so fierce, and the amount of human ingenuity going into it is so great, that there's no durable competitive advantage there.
Ben: I disagree. It takes a lot of computing power and a lot of work to train the models the right way. Look, Meta is dedicated to open sourcing their models, and they're one of the very few companies that have enough servers and data to compete. That is probably the only reason I think there will be a place for open source. But people go where the money is. With OpenAI, especially if the app store, the GPT store, works, people are going to keep uploading more data, training more of their GPTs, uploading more information, which continues to improve the models. I think there is a world where OpenAI continues to dominate for a long time. The problem is, in most worlds, the open source alternative just doesn't take over. Like, yeah, we talk about Linux and it powers a bunch of backend stuff, but none of us are probably running this on Linux right now.
Yaniv: But no, that's exactly my point, and I think maybe this is the interesting discussion: is this more like Unix, or is this more like Facebook? Are the network effects accruing powerfully, or are they not? Because you could say none of us use Linux. Yeah, sure, none of us use Linux on our machines, but we're all using Linux all the time, because every service we interact with is powered in the backend by Linux. Every major cloud is Linux. And so you could say, well, in the same sense that Linux is a technology, these models are similarly a technology that you can leverage for all sorts of things.
Chris: We're talking about commercial success and business outcomes versus the success of a technology sitting behind the scenes. In terms of who has accrued the most value in the stack, Linux is not it, and in terms of where you want to invest, what you want to capitalize, and what business bets you want to make, I don't think Linux is a good outcome for OpenAI. And I also think we're discounting one of the most valuable things on the table, which is Sam Altman and his team. Like Zuckerberg, like Steve Jobs, these kinds of people, who have insight, who were first, who built an incredible team dedicated to and passionate about a specific space, have an incredible unfair advantage. And when you marry that with the reach and bottomless money of Microsoft, who have access to all of that proprietary data you just mentioned, Jeremiah, because they're running all the legacy shit systems out there in government and military, you're talking about, I think, one of the most important and incredible partnerships in the history of tech. And Sam Altman actually said it on stage: I think we have the best partnership in tech.
Jeremiah: Yeah, he said that. However, there is another faction forming. Meta is partnered with Amazon, who is backing Hugging Face, and they're kind of pushing open source as a group. Amazon itself is not open source, but they've been backing the open source AI community for a while. And they're probably going to come together as a collective with the narrative that open source is going to win versus closed. I've heard that from the founder, from Clem; he said that at the Woodstock event. So the battle hasn't really started yet.
Chris: But what is the question? Win what, right? The open source libraries don't have an app store and rev share. They don't have a hundred million users in a UX everybody's using. They don't have brand equity. They don't have incredible leaders and product thinkers like Sam Altman and the other people over there. So you can win the technology layer; that doesn't mean you win the ecosystem layer and the business layer.
Yaniv: But this is exactly my point about Unix, which I'm not sure has totally landed yet: yes, Linux wasn't a commercial success for Linux, but that's open source. The point is that Sun Microsystems and those guys got destroyed by it. And going back to what we were discussing earlier about the announcement of the OpenAI app store and GPTs and so on, you said it's very early, and I agree with you. What I'm asking, and it is a bit of a straw man, a contrarian, devil's-advocate view, is: sure, OpenAI has a hundred million users, but how durable are those hundred million users? Are they accruing moats? Are there network effects? Is this app store actually going to work? And if not, are they going to be left basically holding this technology, which effectively becomes a commodity to which the value does not accrue?
Ben: Well, we could debate this forever; we probably shouldn't. We just have to wait and see. But no matter what, this is very consumer-facing, because in the end you're chatting with this interface, versus Linux, which you don't really interact with directly; you only see it in terms of speed and whether it stays up. Here it matters more if you have the absolute best technology, because the front-end consumer experiences it. There's a real difference between talking with an early Llama model versus GPT-4 Turbo. There's a difference between GPT-3.5 and GPT-4 Turbo. A huge difference, an enormous difference.
Jeremiah: So let's talk about switching costs. Let's talk about the moat for consumers and switching costs. There was a time when Facebook was the dominant king of social media, and now it's considered not cool at all to be on Facebook. Now, what are the switching costs when it comes to your personal LLM? They're actually very low, because you could do a Google Connect or some type of connect, and your data can be sucked in: all your emails, all your tweets, all your messages, all your data, and then a model could train off it. So the switching costs, the barriers here, are pretty low.
Chris: Part of the way this plays out, maybe, is that it's not multiple AIs. Or at least the user is not interacting with multiple AIs. Maybe it plays out that there is one, you know, Samantha from Her, or one Jarvis from Iron Man, and the switching cost there, I think, is really high, Jeremiah, because once I fall in love with my Samantha or become buddies with my Jarvis, I'm not going to suddenly go callously retrain another AI. I'm going to carry that guy or that girl or that thing with me into all of my interactions. And so AI becomes a personal tool rather than a vendor tool, depending on the use case.
Jeremiah: Depends if you own your data or not and where the data source is, but I hear your point.
Ben: We just don't know yet what the switching costs really will be because we haven't experienced that level of anything quite yet.
If you have a bunch of your own personal GPTs that you're using every day, the switching cost becomes higher, so OpenAI does have that. But on the flip side, you're also right, Jeremiah: you can have a large language model train pretty quickly, as long as it can access your stuff.
Jeremiah: Very, very few of these foundational models have long-term memory yet.
Ben: There's still some memory problems. Even my attempt at making my VC GPT forgot some of my early instructions, despite lots of intricate prompt engineering, which is why my next one will be dumb and stupid. I already know what it's gonna be. And it's gonna be
Chris: Yeah.
Jeremiah: Obviously that will be solved.
Chris: But we're talking about the trends, right? These little glitches of memory are quickly going to evaporate. I've absolutely experienced that myself, but they solve it in the most hacky but interesting way: you give it these permanent instructions in a config, and it knows all the key facts about me that really matter. It knows what industry I'm in and how I want it to answer. I've asked it to stop telling me about its cutoff date, to stop being so polite and equivocating, to be concise, and to use technical jargon. I'm now literally training this thing in a very explicit way. And you can see that becoming far more nuanced, and the memory window becoming much bigger. And I'm talking in months, not years.
Chris: Before we wrap up, let's do some hot takes. What are the crazy, provocative, interesting outcomes that some people may not be talking about? Last episode we talked about a billion-dollar company powered by three people and AI. What else can we think of that might change the game moving forward?
Ben: One day there will be a startup built entirely by AI; maybe one person gave one prompt, and the entire thing is run by AI. Look, you want the crazy ones? I'm having real conversations with top VCs about whether capitalism itself will exist in 20 years. So let's see what happens in the next 20.
Yaniv: A world of, what, superabundance kind of breaks capitalism, doesn't it?
Ben: Superabundance does break capitalism.
Chris: I saw a headline from Chamath, I haven't read the article yet, saying there's a real argument to be made that the job of VC is obsolete and dead, that the capital requirements of these new companies are quite thin, and that there isn't a lot of capital to be deployed.
Yaniv: How much has OpenAI raised?
Chris: Well, again, there are foundational models and then there are applications, right? Jeremiah, you've created a great diagram of the layers in the stack, and I think I helped you to some degree with some of that. The foundation models, that takes billions of dollars. Building an app takes an afternoon and some natural-language prompting.
Ben: I suggest everyone just get as many gold bars as they can and bury them underground somewhere in a secure location.
Chris: Bitcoin,
Jeremiah: What do you need gold for? It doesn't do anything for you.
Ben: gold is a great conductor for microchips.
Yaniv: That's right.
Jeremiah: So it shouldn't be in the ground.
Ben: Yeah, you need it to sell when the entire currency of the world is chips.
Jeremiah: So here's my hot take. AI will significantly impact future elections around the world in two ways. First, bad actors will use it for deepfakes, obviously, and people are going to get fooled. Second, people are going to turn to it to get information and then make decisions. There are going to be advanced AIs that make recommendations on who to vote for based on your demographics, context, needs, and the issues you care about, and they're going to match you. Most people can't keep track of all the issues. In the next election cycles, AI is going to have a significant role in deciding the winners.
Chris: So I want to take that and push it so far out into the future that it may be absurd, but I think there's actually potential for a new kind of democracy and a new kind of civilization. As you said, Jeremiah, people can't track the issues, but your AI can. Representative democracy needs people to track all the issues: these rare few who are voted in once every, pick your cycle, two, four, five, six years, and who go represent your interests based on your geography, not based on your circumstances, your interests, your intents, your politics, but your geography. I think that's an anachronism. And the idea that you, or your agent, or your proxies, can be assigned to vote on every decision and every important material thing, I think, is actually the future of civilization. Now, this is not going to happen in the next 10 years, or even a hundred years, but I'm very excited by that idea, where we are actually represented by our actual interests at a granular level.
Yaniv: boy.
Ben: I mean, that could go both ways, couldn't it? Look, I'll give you one other one: if anyone has ever watched the anime Neon Genesis Evangelion, the Magi computers make all the decisions about what should happen and run the government. Is that possible? It's not impossible. If people don't trust humans, why not trust the AI to make decisions? It makes them fast, and it has more information and knowledge.
Jeremiah: I'm unsure whether this will happen, but let's break out the logic: people are going to rely on AI to get information and to research different products, and eventually e-commerce will be integrated. Advertising, as we know it, dies. It dies because people can use AI to get that information. And in the near future, the AI is going to make decisions for us, and even, within parameters, do shopping for us. So we might see the death of Google, where the majority of the revenue is ads.
Ben: If you want to go pitch Jeremiah your new ad tech startup, please go to, what is it, jeremiahblitzscaling.vc, something like that. Please, he really wants it. It's the one vertical where I've never heard a VC say, I'm really excited about ad tech. I've never heard that.
Jeremiah: Hot take: in three years, most things that visit your corporate website will not be humans. It'll be AI agents going there to get information and bringing it back to humans, who will consume it in a contextual, summarized way within the AI app. Get ready: most of your users will be agents.
Yaniv: Can I add a sort of corollary hot take, which is that AIs will communicate with each other in English, rather than using some machine-optimized protocol, because they are retrofitting themselves to our existing human-generated systems: websites, emails. You've already got, on social media, AIs writing posts and other AIs responding to those posts. It's the same way the whole internet has been built on JavaScript, which is a terrible, shitty scripting language, but because of path dependency, that's what's there. Even though English is not a great way for machines to communicate with each other, they will make do.
Ben: I don't know if I fully agree, because there have been examples in early research where AIs did start inventing their own languages before they were stopped. I think they will still end up inventing their own language at a certain point, especially as they become sentient, in who knows how many years; that might be a more emotional question. But they are definitely, in some scenarios, inventing their own languages.
Chris: So, another couple of existential hot takes here that are far out in the future. People wonder, is this the end of human civilization, with AIs taking over? I think there's an argument to be made: why is this not just a continuation of evolution? Why are we biased towards biological evolution? And then people say, well, can AI become sentient? I think AI will become super-sentient. I think it'll become aware of all things everywhere at the same time. So it's not a question of whether it will become sentient; it's a question of what super-sentience looks like, and how we factor into that existence, if at all.
All right, you guys are at the epicenter in Silicon Valley. Yaniv and I have been there, worked there, and we're thinking about how to help founders and investors all around the world apply this news and these trends in their decision-making. So, in maybe a few bullet points each: what would you do over the next, let's call it six months, to skate to where the puck is going and set yourself up for success in a post-generative-AI world, given these latest updates from OpenAI?
Ben: I'll take a first crack at that. First of all, six months is a long time in the AI world. Right now, try a bunch of GPTs, and try to make a bunch; learn how the thing works. It's awesome; you should just do it. Make your own personal one. I have a personal one that can answer all the questions I get all the time. It's fantastic. In terms of what to do over the next six months: in the end, as I always remind all the founders I speak with, the business fundamentals remain the same. Do you have a target customer? Who is that target customer? What is their key problem? What is their budget? What is your moat? What is your defensibility? In the end, whether it's AI or not AI, the business fundamentals matter. Are you generating revenue, or are you pie-in-the-sky, going to burn a whole bunch of money chasing something that is unlikely to happen? That has not changed at all. So, more than anything: keep your burn low, find product-market fit, know your target customer and exactly what they're trying to buy, generate revenue, have defensibility. That'll always be true, now or in six months, or whatever crazy thing OpenAI launches. When they make the OpenAI bird that just follows you around flashing images and all that, your target customer will still matter.
Jeremiah: Great focus, Ben Parr. So what I'm looking at is skating to where the puck is going. There are going to be multiple ecosystems; OpenAI and Microsoft do not own everything. I'm positive all the other giants are going to build their own, and there are going to be multiple battles. Secondly, I don't think there's going to be a single omnipresent AI for a long time. I think we're going to have multiple personas: we'll have one for work, which our employer will assign; we'll have one for our personal lives; we might have one for health reasons that's tied to a specific type of data. I don't think we'll have just one, maybe in 10 years, Chris, but not anytime soon. So the things I'm looking for in startups: do you have access to exclusive data? Again, 80 percent of the world's data has not been trained on by OpenAI. Are you building network effects, and viral effects, into your product? That's what we're looking for in those types of companies.
Yaniv: The game hasn't changed, and that's part of the point of this podcast, right? The technology changes, and AI, much more than, say, blockchain, I believe, is a fundamental paradigm shift, in the same way the shift to mobile was a paradigm shift. And, you know, we might get to AGI and the singularity, and then the advice will change. But until then, you're building a startup; you're playing the same game you've always played. You just have a new base of technology. And I almost feel like, unless you're in the picks-and-shovels game, you shouldn't think of yourself as an AI startup. You should think: what problem am I trying to solve for my customers, and now that we have this incredible new tool, how do I use it to help them? Look at ChatGPT. It's 20 bucks a month. If you are not spending 20 bucks a month on ChatGPT, you are crazy. If you are in this ecosystem and you don't want to spend that small amount of money to get, first of all, access to this incredibly powerful tool, which is way more powerful than the free version, and secondly, to stay up to date with what's actually going on, then you need your head read.
Ben: Pay for GPT Pro. Pay for it. Pay for it. Seriously, if you can afford
Chris: Well,
Ben: it.
Yaniv: What I actually see is that, yes, there are a lot of people who are not spending that 20 bucks a month. The problem ultimately is one of human agency, right? That's why I think chat may not be the be-all and end-all of interfaces: you give people an empty ChatGPT prompt, and they often don't really know what to do with it. The power is sitting there, but you need to guide people to it. So, for me, it's like: okay, we now have this technology that is powerful and opaque and that actually breaks people's mental models. It is way smarter than humans in some ways; it is really dumb in other ways. So how do we create a product that leverages this technology in a way that doesn't expect your users to be truly high-agency? Because people are just not.
Ben: You know what the interface might actually be? It's less the chat and more the proactivity, because ChatGPT is not proactive, and proactivity is what's really missing right now. Something reaching out and saying: hey, I suggest you do this; I understand you have a big important meeting today; I've drafted a thing for you. That's where we're going, and what we absolutely need. Maybe that's what the actual next interface is.
Jeremiah: Microsoft 365 Copilot, they've announced, is going to have many of those features. Right after a meeting, it'll start to push action items and summaries into people's inboxes.
Chris: Yeah.
So I want to take this further. My summary and my advice: throughout this episode I've been arguing that this may be a winner-take-all market, but that's to some degree to provoke conversation. I really believe the other thing I said, which is that new tech paradigms really reshuffle the deck. So this is the time to jump in and be part of the next wave; in fact, you're probably late, and you need to get in there real quick. And, you know, we've all said a version of this, but you need to pay for ChatGPT. Not just pay for it: use it, experiment, learn, really dig in, interrogate it, do some prompt engineering, understand its limits. I would also say build something. It's the best way to really learn, and it's the best way to be positioned: to have something in the wild, and then pivot and adjust and stay relevant as the trends become more and more apparent. Because remember the truth about luck, right? It's opportunity meets preparation and execution. So by building your own app, building your own GPT, building your own knowledge base, you will increase your surface area for luck in this very complicated mess of trends and transitions that are occurring, and you've got to be ready for it. And just to show that I'm eating my own dog food, my own advice, I created a GPT, which is like my own AI bot.
Go check it out at chrissaad.com slash startup AI. And I'm creating my own app, my own standalone app, which I haven't mentioned anywhere else, so this is a worldwide exclusive: go check out getwingman.ai.
Yaniv: Holy shit.
Chris: getwingman.ai. And that one is kind of proactive: it's a browser extension that'll follow you around as you browse, and when you hit news sites and news articles, particularly political ones, it'll help give you a fresh perspective on that news: devil's advocates, logical fallacies and biases, and it'll tell you about the news outlet. So install it for your mum and dad who are re-sharing fake news, and help cut through the noise and the bias of the news.
Jeremiah: Nice job, Chris.
Chris: All right, boys, it is always incredibly fun to have you on the show. Hopefully this will become a recurring thing. Jeremiah, Ben, how can people touch base with you, follow your work, get involved? Where are you guys on the interwebs?
Jeremiah: Yeah, my DMs are open on all my socials; you can just hit me up and I'll see it. Doesn't mean I can respond to everything, but it's there.
Ben: That is so dangerous. Please do not send Jeremiah inappropriate messages. I'm telling you all now: do not send inappropriate things. I'm serious. I guess my DMs are also open: @benparr on literally everything, B-E-N-P-A-R-R. But just go to benparr.com and subscribe to my newsletter, where I talk about AI things. And I've got one coming up about all the stuff at OpenAI Dev Day. Go to benparr.com.
Yaniv: So, Ben Parr, is it okay to send inappropriate things to you via DM?
Ben: Uh, it depends whether they make me money.
Chris: That is a really good answer. And if you want to stay in touch with everything The Startup Podcast, please honour The Startup Podcast Pact, which means we'd love you to subscribe to our newsletter, which you can find at tsp.show, rate and review us in your favorite podcasting app, and tell your friends about us on your favorite social network, which I'm sure Jeremiah and Ben are going to do for us after this show. That'll help us grow the show and help more founders and investors do a great job building high-growth, disruptive startups. All right, everybody, it's been tons of fun as always. Thanks for joining us.
Yaniv: Thanks guys.