Okay. Hi everyone, and welcome to the 2025 HBS Entrepreneurship Summit. We're super excited to have you here today, especially because this is our re-inaugural entrepreneurship summit. There hasn't been an event like this at HBS since pre-COVID, so we're really excited to relaunch it and have you all here. If you haven't taken a seat yet, take a seat and settle in. We're super excited for a full day of programming here with you today. A special shout-out to our
co-heads of Summit, Lindsay and Anita.
You'll meet them in a second. We would not be here without them today; I know it's been months of work, and their leadership and vision are why we're all here. So we're super excited to kick things off. The HBS Entrepreneurship Club is one of the largest student clubs here on campus, which is a great testament to the entrepreneurial spirit at HBS. I'm Jen, one of the presidents of E-Club, and I'm here with Hunter, my co-president. We were excited to kick things back off here and bring everyone together from HBS RCs, ECs, and alums, as well as community leaders. We have a lot of student representation from all across the Boston area, and we have people flying in from across the nation. So we're excited to have everyone here today. Hopefully you'll meet a new friend, a connection, maybe a potential entrepreneurial venture. And with that, I'm going to turn it to Hunter to kick things off for our first keynote.
Awesome. Thank you, Jen. Again, we're so grateful to Lindsay and Anita for all they've done. Planning a full-day conference with no roadmap is no easy feat, and they've done it with aplomb. We hope and expect this to be the foundation and the groundwork for a number of great summits and conferences in the years to come. To me falls the pleasure of introducing our first keynote. First, our moderator is Patrick Chung. He's a managing GP of Xfund, the VC fund that was founded in partnership with the Harvard engineering school. He was formerly a partner at NEA, and he's a rare triple Harvard alum, holding not only an MBA from HBS but also a bachelor's from the College and a JD from the Law School. And then, of course, the man of the hour is Aravind Srinivas, the CEO of Perplexity. Aravind was born in Chennai, India. He holds a bachelor's and master's in electrical engineering from IIT Madras, as well as a PhD in computer science from UC Berkeley. And Perplexity, in case you've been living under a rock for the last few years, is an AI-powered conversational search engine that's been taking the world by storm. Aravind co-founded Perplexity in 2022, and in the few short years since, Perplexity has achieved over 600 million monthly queries and a $9 billion valuation, with investments from the likes of Jeff Bezos and Nvidia. And yes, if you are wondering, I did use Perplexity to help write this intro. Aravind, who's going to be on stage in one minute: a heartfelt thank you from all of us for flying across the country to be with us. I know we're all really excited to hear the insights you have to share from the trenches of company building in the GenAI world. Please join me in welcoming Patrick and Aravind.

Aravind, thank you so much for
flying all the way across the country to
be with us. Thank you for having me
here. It's actually my first time at Harvard, so I'm very excited to be here. So, we are all intensely interested in Perplexity, in AI, in the future. But I wanted to back up a bit and just talk about your origin story, and then we'll talk about Perplexity, and then we'll talk about the future. So, unlike many founders, you are actually an academic. We sit here at Harvard, one of the great universities of the world, whose mission is to seek truth and create knowledge. Can you just trace your history from undergrad through academia? Yeah.
I mean, firstly, the cultural roots for me were always to seek knowledge even more than wealth. That's how we grew up back in Chennai, in India. And even now, I would say my parents are even more proud of my PhD than of Perplexity. They still use Google quite a lot. Oh, no. So that tells you what they truly care about. A lot of people think academia and business are two separate things, but there's a lot you can learn from academia and apply in business. At IIT I did my electrical engineering degree, which, you know, I feel good about now, but back then I always felt like, oh, I should be in computer science; all the cool kids are in computer science, so I need to also get into all these coding competitions. I tried my hand at competitive coding and wasn't good at it. Johnny, who's my co-founder, is also here. He was actually world number one at the IOI, and I think he also represented Harvard at the ICPC. So there were a lot of better people even back then. I never made it to the ICPC World Finals. Is that your biggest regret?
Well, not anymore, you know, but I definitely wish I had been good at it. I tried my hand at a machine learning contest with literally zero understanding of what machine learning even means, just trying different algorithms from the scikit-learn library, and I essentially brute-forced my way into winning the contest. But was that a college contest? Yeah, it was within the college, just open to all the students, and I think Yahoo probably sponsored the event. I got it, but the money wasn't the important part. It was mostly that it was fun to actually try something new where there were no guarantees of correctness. You just had to try a bunch
of things. And what was the contest? I mean, it has echoes of what you're going to be doing for us this afternoon, which is a pitch competition that you will judge. Yeah. Correct. The contest was: they had a bunch of photos and wanted to classify them, but they wouldn't actually give you the photos. They would give you a bunch of numbers they had featurized, and you had to predict on an unseen test set. No deep learning or neural nets back then, just simple algorithms. That's how I got started in ML, and I did an internship where they asked me to build a recommender system for a healthcare website. Did you win the contest? Yeah, I did win the contest.
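For readers who want a concrete picture of that kind of entry point, here is a minimal sketch of the brute-force approach he describes: loop over off-the-shelf scikit-learn classifiers on pre-featurized data and keep whichever cross-validates best. The synthetic dataset and the particular model shortlist are illustrative assumptions, not details from the talk.

```python
# Hypothetical sketch: brute-forcing scikit-learn classifiers on featurized data.
# The synthetic dataset and the model shortlist are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Stand-in for the contest's pre-featurized photos: numeric features only, no raw images.
X, y = make_classification(n_samples=2000, n_features=50, n_informative=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(n_estimators=200, random_state=0),
    "gbdt": GradientBoostingClassifier(random_state=0),
    "svc": SVC(),
    "knn": KNeighborsClassifier(),
}

# "Try a bunch of things": keep whichever model cross-validates best, then score on the held-out set.
best_name, best_model = max(
    candidates.items(),
    key=lambda kv: cross_val_score(kv[1], X_train, y_train, cv=5).mean(),
)
best_model.fit(X_train, y_train)
print(best_name, best_model.score(X_test, y_test))
```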
Okay. So, you were an electrical engineering undergrad. You entered a machine learning contest, you won it, and what did that say to you? That I majored in the wrong thing? Actually, electrical engineering is very easy to onboard into ML from, even easier than computer science, because I was already familiar with concepts like convolutions and signal processing, all these low-pass and high-pass filters. So the Python programming, numpy, all this stuff came pretty intuitively. Whereas my computer science friends were more focused on competitive coding; they would write for loops for every single thing, and that's not the best way to do ML. So I think that definitely helped me get started even sooner, and I don't regret the electrical engineering beginnings of it. And so I got my internship and met more people. Where
were you an intern? I was just doing an internship in India at a startup. I finished it pretty fast, building a recommender system. That's when someone told me, hey, look, all this linear algebra is cool, but you've got to go study neural nets. It doesn't work yet, but you still have to go study it. And so that's how I started watching all these online lectures: Andrew Ng's lectures on YouTube. And then someone told me, you know, Andrew Ng is cool, but there's a British guy who speaks really slowly, so watch it at 1.5x; his lectures are even better. That was Geoffrey Hinton's lectures. And so that's how I learned all my basics, and then I did my research and luckily got an internship with Yoshua Bengio, one of the Turing Award winners. He wrote me a letter, and that's how I got into grad school at Berkeley. So yeah, that was my start, from undergrad to coming to the US. Once I got here,
I was lucky to get internships at OpenAI and DeepMind. That completely humbled me, because I actually thought I was good, and then the people there were way more intense and hardworking, and way better at not just coming up with new ideas but implementing them and getting them working, writing all the code to scrape the datasets, everything end to end. So that taught me a lot and pushed me more towards entrepreneurship: it's not just about coming up with new ideas, it's very essential that you actually show them working in practice on a practical dataset. Was that the... what was the
kernel? Because you were on this academic track, you actually completed your PhD at Berkeley. Yeah. And so were the internships the thing that kind of dislodged you from that default path? Yeah. So, completion... Yeah, I know what you're trying to say: being a dropout is cooler than completing the PhD. No, I'm not saying that. Just for the record, I'm not saying that. But let me
see. By that time, my adviser was pretty open-minded. So I went and told him: look, I know I came here to do a PhD in reinforcement learning, but basically what happened was I got an internship at OpenAI, and I had some five or six very fancy reinforcement learning ideas, and Ilya Sutskever, the former chief scientist there, literally told me all the ideas suck. He told it to my face exactly like that. Obviously I was very upset, but wait, nobody usually tells it this way, so maybe he's actually right. And then he took me to a board and just drew two circles, one big circle and, inside it, a smaller circle. The big circle is generative unsupervised learning and the smaller circle is reinforcement learning. You don't need anything new; you just need to do these in sequence, throw a lot of compute at it, train on all of the internet, and then you'll build general intelligence. So don't work on all your ideas. And really, nobody had any clue at the time, but he just saw the future, and that ended up being right; that's essentially the recipe for ChatGPT. But I at least went back to my adviser and said, hey, there's this new thing called generative unsupervised learning, let's go and study it. And then we taught a class at Berkeley on that, I learned it myself, and I did more internships to write more papers on that topic, and I combined the two together, and that ended up being my thesis. So that was very essential. So when I got more into critical thinking, I started looking at startups that could potentially do something new in AI. Well, why, just
to pause there, why look at startups? Why wouldn't you have just stayed at a big tech company? You were at Google, you were at DeepMind, you were at OpenAI. What was the kernel of it that made you decide, I'm going to start my own thing instead of staying on with one of these big tech companies? So, I did my PhD at Berkeley, pretty close to Silicon Valley. And I'm definitely not making this up: I really thought that TV show Silicon Valley was pretty cool. Oh, no. Everybody around me was watching it, and I thought it was very funny, but then someone actually told me, hey, look, I know you're making jokes about the show all the time, but realize that some people might take it personally, because it's actually pretty true. It's not just a humorous show; a lot of people were very depressed watching it, because it reflects Silicon Valley in an almost brutal way. But I think that whole idea of starting something from scratch and getting it in the hands of users was very attractive. And the particular idea explored in that TV show was lossless compression, which is very directly tied to generative AI, because they also talk a lot about how you can use neural nets to do it. So that was actually the idea I first wanted to start a company on, lossless compression with generative models, except nobody wanted to do it with me.
Mhm. And then COVID happened. So I went to OpenAI and worked there, and it wasn't very clear that AI was heading to a product phase; it was still pretty research oriented. But then products like GitHub Copilot started spinning out from OpenAI and Microsoft, and it was the first real time I could use it myself. I knew AI was already being applied in industry, like Facebook's recommender systems, or Google search using all this link ranking; they were all using a lot of neural nets, but you couldn't viscerally feel it. I think you start feeling it more when you use it yourself, and GitHub Copilot was this AI that would just autocomplete your code when you're writing code right there in your editor. When you feel the AI (there's this whole phrase, feel the AGI, or feel the AI), I think you start to realize, okay, now is the moment to try to start a company. And you kind of have to be at the right place at the right time. We were very lucky to be there when AI was just beginning to work; if it had been fully working already, it would have been too late. And so if we assembled a really great group of people, gathered a little bit of funding, and got the right seed idea, we could potentially build a product and benefit from all the advances happening in AI and all the new investments that were going to come in. So that's kind of how Perplexity was started, in August of 2022. Wow. Cool. Just as a little aside,
Mike Judge, who is the producer of Silicon Valley, was looking for advisers, and I happened to be an adviser on season one of that show. So it's a little... Are you in the show? No. Well, no. But he just wanted Silicon Valley gossip in order to satirize it, and I was the apparently willing provider. But okay. So, you started this company. How did you find your co-founders? How did you find your first investors? And it would be interesting, as we transition from your academic background to the startup: what parts of your academic upbringing influenced the company that you built today?
So, first of all, Denis Yarats is a co-founder and CTO of Perplexity. I knew him for at least a year or more before we started Perplexity. From school, or... Yeah. He didn't go to Berkeley; he went to NYU, and he wrote the same paper as me, like one day apart. But who published first? Technically he did. Okay. Actually, he was very gracious about it, in the sense that he gave us a heads-up that he was publishing it, because he had heard we were working on the same thing. So we actually rushed, pulled an all-nighter, and got our paper out the next day. Okay. That's how we got acquainted, and then he became a visiting student in my lab, remotely because of COVID, and we used to discuss a lot of ideas back then. And he knew Johnny, who overlapped with him at Quora, back when
humans used to answer people's questions; now AIs do that. Quora was a platform where other people would come and answer your questions, and it assembled a great group of competitive programmers. They overlapped with each other there and worked on this thing called Quora Digest, which automatically picks the questions that are most relevant to you and sends them as an email, which I still get these days, and sometimes it's pretty useful even now. And so Denis was like, "Oh, Johnny's potentially leaving his job at Tower and looking for something to do in AI, so maybe we should..." Well, it's one thing to identify great people, but how did you wrest them from their fancy jobs? So, in this case, we lucked out. He was actually looking to leave. Okay. So it was more like convincing him that we would be the right next step. Actually, he took a pretty bold bet, because we had nothing, right? Is this Johnny or Denis, or both?
So, Denis decided to leave Meta. He felt like he had an urge to do a startup at some point in his life. And once Denis signed up, he said, "Okay, let's convince Johnny." Like a snowball. Yeah, exactly. For Johnny, I think he was deciding between us and some other startup, and we had nothing, so I think he just took a bold bet. We were initially working on using AIs to answer questions about datasets: if you had your employee salary database and you wanted to ask how many people earn more than a certain amount, all this text-to-SQL. That's basically what we were working on. We had no idea if it had any market impact or business model; in fact, I still don't think that particular idea does. But the most important realization is that in a startup, when you have an initial group of people, the number one thing to do is just iterate and do something.
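As a concrete illustration of that early text-to-SQL idea, here is a minimal sketch of asking a model to translate a natural-language question about an employee salary table into SQL and then executing it. The table schema, the sample data, and the ask_llm stub are illustrative assumptions, not what Perplexity actually built.

```python
# Hypothetical sketch of the text-to-SQL idea: turn a natural-language question
# about an employee salary table into SQL, then run it. `ask_llm` is a stand-in
# for a real model call; the schema and data are made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Ana", "Eng", 150000), ("Bo", "Sales", 90000), ("Cy", "Eng", 120000)],
)

SCHEMA = "employees(name TEXT, department TEXT, salary REAL)"

def ask_llm(prompt: str) -> str:
    # In practice this would send the schema and question to an LLM and return
    # the generated SQL. Hard-coded here so the sketch runs on its own.
    return "SELECT COUNT(*) FROM employees WHERE salary > 100000"

question = "How many people earn more than 100,000?"
prompt = f"Schema: {SCHEMA}\nQuestion: {question}\nReturn a single SQL query."
sql = ask_llm(prompt)
print(sql, "->", conn.execute(sql).fetchall())
```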
I've seen many founders spend at least six months to a year in the idea maze, going around and around, not getting anywhere, and not knowing what it takes to actually ship something, get it in the hands of people, see them use it, learn from that, and then go and update your hypothesis about the world. That's the right way to do it, at least in software. And so that really helped us. We got a lot of dopamine from watching others use it, and that was also how we raised funding from a good set of seed investors: we would just show the demo. People would ask for a deck, and I would just say, I don't know how to make decks. I still don't know how to make decks; I haven't made a deck in ages. Actually, the only deck I made was for the Series A funding, the NEA round, and I don't think the deck was great, but somehow what we had early on was convincing enough. After that, I never made a deck. It was all, let me just do a demo for you and tell you what we want to get to. And that worked really well for us.
For the seed round especially, what happened is we would scrape Twitter, back when it was available in the form of an API, in the pre-Elon-Musk-CEO days. We would put that into a bunch of tables and let people ask any questions about Twitter. The existing search on Twitter never really worked, and suddenly you got something that worked, so people really got the idea that, oh, large language models are really going to change search. It doesn't matter how exactly, but it's really going to happen. That's how we attracted great seed investors: Elad Gil, Nat Friedman, Andrej Karpathy, Jeff Dean also. A lot of people invested, and that helped
us go and recruit some really good founding engineers who, even if they didn't trust me as a founder or a CEO to figure things out, trusted the fact that, okay, a great group of people are putting money into this, and the co-founders are very technical and great engineers, so I probably won't get an opportunity to work on a team like this again. So they came in, and somehow we iterated and launched the core idea: that search itself, the fundamental software, the king of all software, the one that's used the most in the world surface-area- and usage-wise, can be changed. People actually wanted to evolve search from typing in keywords to typing in questions, or asking questions with your voice, where you just get an instant answer and you can trust whether the AI's answer is true or not through sources. Which, again, comes from academia, because the first thing you're taught when you write a paper is: don't just write whatever you want; make sure you have a peer-reviewed citation for it. So we just applied that concept in the context of an LLM and got it to work over a weekend, a hack weekend. Yeah. By the way, a lot of the infrastructure was already built before doing this in a weekend; it's not like we literally went from no code to launching it. But we could quickly iterate. We had a Discord server and Slack bots, tried it ourselves, saw our friends using it, and that ended up becoming Perplexity. So I've heard you say many
times that Perplexity is not competitive with Google, and I wonder if you could just help us understand that. You know, it feels like if I want the answer to a question, there are substitutions. So can you explain why you don't think you're competitive with Google? So, I think it's not competitive with Google in the sense that it's not going to take away the core Google search behavior. By the way, at least one to two billion searches a day are just one or two words right now. Yeah. People are just literally typing them in. Go to trends.google.com, scroll down, and instead of "rising" click "top": most of the time it's just weather, Reddit, Instagram, YouTube,
Twitter, you know, these one-word destinations. People just type that in their Chrome search bar, it goes to the first link, and then they go there. But that's pretty magical, isn't it? Because Google's actually reading your mind. You have a query. Yeah. You're typing one or two words, and Google's proposition to you is: we're going to be able to read your mind from those one or two words and give you exactly what you want on the first page. Yeah, that's the user intent: really recognizing, oh, the user just wants quick information. So we're not necessarily going to compete there. I mean, potentially, if we owned the front end, then all those searches would go to us, but there's nothing to improve there in terms of the experience, because Google's already doing a good job. But
what they're not good at is answering questions. Like, if you really wanted to know, okay, I'm actually going to Harvard today and it looks like it might rain, what is the best thing for me to wear, you're going to get a lot of links. Mhm. But you're not going to get a synthesis of the information present in each of them to give you a targeted answer to that question. Instead, what you would do before Perplexity was: you would type in the weather, then you would think about what to do, then you would type in something else, like what is the best thing to wear in rainy weather, you would read that, and you would be doing all this cognitive work. Now the AI is doing it for you, so it's much more intuitive. Are you actually comparing yourself to Google 1.0, if I could even call it that? Because today on Google search, of course, you can ask a natural language question. Yeah, I know what you're talking about: the AI Overviews, or the AI Mode. It doesn't always work, it sometimes hallucinates, like the famous cheese-sticking-to-the-pizza thing. They're definitely improving it, but the core problem there is they cannot make it work for every single question you ask, because they lose a lot of link clicks. Mhm. Including simple things like finding the sports score; they're actually putting Ticketmaster ads there right now, because they need to make money, right? And their global traffic on search is not actually increasing anymore, so the only way to juice more ad revenue out of search is to actually put more ads per query.
So direct answers are completely misaligned with showing more link ads. Yes. It's very hard to make both these products work together. You need a new user interface, a new experience, a new front end. And in the user's mind, they don't want this clutter. You get a knowledge panel, you get some ads, you get some links, you get some answer, you get some pictures. It's too much clutter on Google.
The thing that it does incentivize, however, is for the people who are putting the links up. They have more of an incentive, it's argued, with Google, to produce high-quality, trustworthy information. There are some who say, you know, Perplexity reduces those incentives, because when you reformat and summarize, you provide less incentive for those original content creators to actually put something verifiable out there, the things that you actually cite. And so how do you respond to that?
Yeah. So the number one thing is we ensure that the sources are right there at the top for any query related to news. We have a different UI that we call the trending UI, where the source cards are even more prominent: you can literally click on them, and they're very easy to click on. We also have a publisher program where we share revenue made on a query with the publishers. And by the way, this is something Google never did: they make a lot of ad revenue, they tell publishers, we're giving you traffic, but they don't share the ad revenue with the publishers. Instead, they make the publishers put in another ad protocol, AdSense, which puts ads all over the sides and in pop-ups and things like that, which is what really frustrates users about going to any site, because they don't want to see all this clutter; they just came to the landing page to read the news. If publishers could run their sites without putting ads on them, they would get even more traffic. But Google doesn't want that, right? So I think we are trying to change the paradigm of how the business model even works here, where if we do put an ad, like a suggested sponsored question, and make a dollar on that query, we're not going to take all of it; we're going to share some of it with the content providers whose sources were used for that question. And I think that's a much more scalable way to incentivize publishers to keep creating new content. Okay, cool. What do people most often get wrong about Perplexity?
Well, there's this whole thing of, you know, oh, this is a wrapper, it's so easy to build in a weekend. You can always build an 80/20 or 70/30 version of a Perplexity-like clone in a weekend. Mhm. That's due to two things. One, there are a lot of coding tools out there to write code faster. And two, if we couldn't have done it in a weekend, we would not even be here. So it makes sense. But the thing people don't realize is that after that first launch, we've done so much more work. We've worked on so many different verticals. We've built our own models; today, if every model provider stopped offering us models, we could still run the product with very little degradation in quality for most of the questions being answered today. We have been building a big index of our own, and we're investing in our own infrastructure to crawl the web. And then we also have all these modern research agents that don't just quickly pull the top links and summarize, but actually work sequentially like an agent: browsing the web, looking at links, figuring out the next plan, exposing the chain of thought. So there's a lot of work we've put into the product that makes it so complicated now, and the code base is so big, that if it is a wrapper, it's a really, really valuable wrapper. Yeah. Right.
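To make the contrast concrete, here is a minimal sketch of the sequential research-agent loop he describes, as opposed to a one-shot pull-and-summarize: search, read, decide whether more evidence is needed, and only then answer. The search and ask_llm stubs and the stopping rule are illustrative assumptions, not Perplexity's internals.

```python
# Hypothetical sketch of an iterative research-agent loop (plan -> search -> read -> re-plan).
# `search` and `ask_llm` are stand-in stubs; a real system would call a web index and an LLM.

def search(query: str) -> list[str]:
    return [f"snippet about '{query}' #{i}" for i in range(3)]  # stub search results

def ask_llm(prompt: str) -> str:
    # Stub: a real LLM would either propose the next query ("SEARCH: ...") or a final answer.
    return "ANSWER: synthesized answer based on the collected notes."

def research_agent(question: str, max_steps: int = 5) -> str:
    notes: list[str] = []
    query = question
    for _ in range(max_steps):
        notes.extend(search(query))  # browse and collect evidence
        prompt = (
            "Question: " + question + "\nNotes:\n" + "\n".join(notes) +
            "\nReply 'SEARCH: <next query>' if more evidence is needed, else 'ANSWER: <answer>'."
        )
        reply = ask_llm(prompt)  # the model decides the next plan step
        if reply.startswith("ANSWER:"):
            return reply.removeprefix("ANSWER:").strip()
        query = reply.removeprefix("SEARCH:").strip()
    return "No answer reached within the step budget."

print(research_agent("What changed in web search after LLMs?"))
```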
Why shouldn't Apple acquire you? Well, it's more like, do we want to sell to Apple? That's the question to ask. And if you were to put your mind into Tim Cook's mind, or Apple's mind, do you think this would be... I mean, if I were to tell something to Tim Cook, I would say: let Apple Intelligence call Perplexity. We have our app on the App Store. On the Android OS, we have an assistant that can call other apps, play songs and videos, set reminders, send emails, make phone calls, all of that stuff already working. So if Apple Intelligence opens up developer access to more apps and it can call Perplexity, we would love to work together and make Siri actually work. Okay, you heard it here first.
Okay. You are a bidder for TikTok. Can you explain that? What is the logic behind bidding for TikTok? Yeah. So, we wrote this up as our vision in a blog post on our website. The core idea is that people waste a lot of time on TikTok, and I think the feed can be more productive. That doesn't mean taking away all the fun, you know, turning a party into a library; it could still be a party, just with a healthier mix of useful content, and with misinformation handled with things like community notes. We have an Ask Perplexity bot on X right now which, despite severe rate limits, has grown a lot in usage, around 100 million impressions. And it's very clear that the social usage of AIs will increase, and AIs are even faster than humans at fact-checking things. Essentially, because of Perplexity, fact-checking can be made into a software service: you go to the web, you read all these things, you look at what the user is saying, or the video, whatever the person is saying, take the transcript out of it, and add all the sources as additional footnotes to give people more context. So you think it makes sense to buy TikTok because you can add footnotes, you can cite. Yeah, you can add a lot of context. Community notes can be more natively part of the platform, and the search box on TikTok is how a lot of the next generation is searching the web. They're not using the web the way we use it; especially when checking out restaurants, they don't actually go to Google Maps or Google, they just go to TikTok. So a lot of the next generation's searches are going to the TikTok search bar, and it makes sense for something like Perplexity to be natively part of it: rebuild the algorithm, make it more productive, make sure misinformation is handled with something like community notes, build a better search experience natively within TikTok, and give more competition to Google this way. Cool. What, I guess, what
beliefs did you have when you were an academic that you no longer have as a very prominent CEO? And then we'll discuss the future, and then we'll open up for audience questions. I mean, as an academic, I really thought it was important to spend a lot of time thinking about the idea, and now I believe more in: action produces information. But some things are still staying the same. Even in academia, most people don't have the discipline to try small-scale experiments. They just want to come up with this grand idea on a whiteboard, then run it, and it all works, and you write a paper, because that's how Hollywood makes movies about academics. You see the A Beautiful Mind movie... Is that your favorite movie about an academic? That's one of my favorite movies. There's also the movie on Stephen Hawking. A lot of movies have been made where you just knock on the professor's door once you have the idea, and then it's solved. Not Good Will Hunting? Good Will Hunting is also amazing. Okay. So the thing is, what actually works in practice is several small-scale experiments done iteratively. And I think that's the same even in the startup world. Even for Perplexity, we're working on so many different projects; nobody knows what's really going to pick up in usage, and even if something doesn't pick up immediately, it might not be a failure. So looking for the right signals, and actually having the critical thinking to create the hypothesis test, is something that is very academic in nature. So there is something deeply academic about doing a startup. Okay.
Just continuing with this movie theme: today in AI you have doomers, you have accelerationists, and techno-utopians. Where do you lie on this spectrum? Are we headed for Avengers: Age of Ultron, or the first, I guess, two-thirds of the movie Her? I mean, I'm definitely more pro-acceleration. I hope we end up with a utopian outcome; I don't know exactly what will happen. My sense is that if we end up with an outcome where AI feels like the iPhone, where, you know, the same phone is being used by the president or by someone here in this room, so the same kind of AI is available to everyone... Indeed, indeed. And hopefully secure enough. So if the best AI, or almost the best-in-class AI, is widely accessible, I think we will not end up with any crazy outcomes. But if there's an AI that needs an insane amount of compute to use and only a few people control it, that could lead to some terrible outcomes. Do you think, as the models commoditize, we're headed to a monopolistic vision like that, or a kind of oligopolistic vision where you have several players that somehow keep each other in check?
Open source is the only thing that can keep people in check here. And that's been the trend in the last year: anytime one or two of these closed labs have launched a great model, people freaked out about what's going to happen if these two labs essentially control the future of AI, and then someone in open source just drops a model that's as good, and just free. Anybody can take it, download it, host it themselves, fine-tune it, distill it, make it smaller. All these things are possible. And credit to DeepSeek: they did a tremendous job launching a model that's not just a traditional open-source model but an open-source reasoning model, and reasoning is considered the next frontier in AI. That's all being made available, and it's not just a model; they're writing technical papers detailing how they got there, so someone can actually replicate the science too. So I'm not too worried, as long as people keep publishing openly and open source is still a thing. I'm not too worried about ending up in these monopoly or oligopoly scenarios. Okay. And then just one more question before we get to the audience, which is, you know, I guess for a lay audience (so I'm going to ask you to restrain the inner PhD): after Transformers, what do you think, again for a lay audience, are the most promising frontiers where you would expect the next breakthroughs to come?
You mean architecturally? I don't know; there's this thing called state space models, and hybrids of convolution-type architectures and transformers that some people have tried. I would just assume that those things will lead to some gains, but the big labs will still copy them and change some of the existing architectures. I think the real breakthrough could potentially come from figuring out extremely long context. Currently all the AIs are doing something called retrieval-augmented generation, including Perplexity. It's called RAG: the model itself doesn't have the full context to answer your question, so it pulls the relevant context from some data store, be it the web index or some other data index, puts it into the prompt, and then answers your question.
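To make the RAG flow he describes concrete, here is a minimal sketch: retrieve a few relevant documents, number them as sources, pack them into the prompt, and ask the model to answer with citations. The tiny in-memory corpus, the word-overlap scoring, and the ask_llm stub are illustrative assumptions, not Perplexity's actual retrieval stack.

```python
# Minimal retrieval-augmented generation sketch: retrieve, cite, answer.
# The corpus, overlap-based scoring, and `ask_llm` stub are illustrative assumptions.

CORPUS = {
    "https://example.com/forecast": "Cambridge forecast: rain likely this afternoon, 55F.",
    "https://example.com/clothing": "For rainy weather, a waterproof jacket and boots work well.",
    "https://example.com/history": "Harvard was founded in 1636.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    # Toy relevance score: count shared lowercase words between the query and each document.
    q = set(query.lower().split())
    scored = sorted(CORPUS.items(), key=lambda kv: -len(q & set(kv[1].lower().split())))
    return scored[:k]

def ask_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; a production system would send `prompt` to a model.
    return "Wear a waterproof jacket and boots, since rain is likely [1][2]."

def answer(question: str) -> str:
    sources = retrieve(question)
    numbered = "\n".join(f"[{i + 1}] {url}: {text}" for i, (url, text) in enumerate(sources))
    prompt = (
        "Answer the question using only these sources and cite them as [n].\n"
        f"{numbered}\nQuestion: {question}"
    )
    return ask_llm(prompt)

print(answer("It might rain at Harvard today, what should I wear?"))
```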
But your life, say ten years of your life, all the context in it cannot be compressed that easily. What if you wanted to chat with an AI the same way you chat with a friend you've known for a decade, where you don't have to keep starting new chats to talk about different things? It's all one single stream of chat. I think that's very hard to do right now. So figuring out extremely long context, a million or ten million tokens or even infinite tokens, and what the right structure is to store all the memories, is still an open problem. It could still be a transformer-based solution, but no one's really figured it out. Great. Okay, with that, let's do some audience questions.
So we have people with mics right there and right there. So just raise your hand and... Okay, great. Hi there. Thanks so much for coming. My question is about compute, and thinking about the limitations of, you know, particularly Nvidia's compute and the GPU. I know that Perplexity, for instance with Sonar, has started using Cerebras. Can you speak to what you think we need in the compute ecosystem right now, in the chip ecosystem, for companies like Perplexity to scale?
Yeah.
I still think we are pretty compute-constrained as a field, based on what people like OpenAI and Grok and all these companies are saying, which is that they're running out of GPUs to serve; the requests are more than what they can serve. Even for Perplexity, when we brought DeepSeek onto our own servers, and things like Deep Research, we ran out of compute pretty quickly and had to work with other people to help us there. The cost per query is actually going up, so we need more compute; we'll figure out ways to reduce the cost temporarily, and that will lead to less compute needed to serve the same product, but then more people will be using it, or there will be more usage per user. So I still see increasing demand for compute in the next year or two for sure; that's how it seems to me. As for Nvidia versus Cerebras: I think Nvidia is more robust, resilient hardware that can serve any kind of model, be it a sparse mixture-of-experts model or a dense model like Llama. Cerebras is more suited to the dense models right now. They still have a lot of work to do, but it's definitely very interesting that at the inference level you can do a lot more and make your product go a lot faster if you have specialized chips for transformers.
Okay, do we have another question? Yes, how about... I can't really... Okay, is there someone over there? Oh, do you have one over there? Okay, great. Hi, so my question is about ethical AI. Recently Elon Musk launched his own AI system called Grok, and he says it is the most open and most blunt AI system. So my question is: major companies are all building AI, and how are we ensuring that these companies are not going to take over the narrative for the whole domain, the whole public? They have their own agendas; Elon Musk can manipulate the whole of Twitter, or something like that. So, about ethical AI: how do you ensure that no single party or single company controls the whole AI narrative?
Sorry, could you... You know, I have to admit that I did not fully grasp that. Do you want to restate your question, perhaps more concisely? Okay, so recently companies have been launching their own AI. Recently Elon Musk launched his own Grok AI, right, and it is said to be the most blunt, most honest AI ever. So how can we ensure that no single company takes over the whole AI narrative, controlling the opinion of the public?
So your question is: how can we avoid a monopoly where there is one AI everybody uses? I think open source is the best way to ensure that, because what open source essentially does is make it so nobody can charge an insane amount for any AI, as long as the open-source model is as good as or even better than the closed labs' AIs. As for having an honest AI, I think the right solution is what Perplexity did. In fact, I see that as one of the major contributions of Perplexity. Whether it's a hacky or a principled solution doesn't matter, but the best way for people to ensure they can still verify and trust what the AI says is to give sources and expose the chain of thought. We have both of these in our products. So as long as we push that sort of UI and UX for everybody, and if that's what people want, others also adopt the same thing. You can see other AI chatbots are also adopting sources and also exposing the chain of thought. That way, nobody can really monopolize this or run the UX in a way that's non-transparent. So you need products like ours, open-source models, and plenty of players, a lot of well-funded players in the ecosystem, such that it's very unlikely we end up in a scenario like that. Okay, maybe something from this side of the room. Who's got a question here? Yes, just...
Hi Arvind, thanks for the wonderful panel. My question is: SEO has been a multi-billion-dollar industry for the past two decades. How do you see it evolving with products like the ones you're building? Yeah. I think there will be less need to do SEO going forward, especially in products like Deep Research, where it's spending so much time thinking about stuff, and you can even literally have a user system prompt that says: ignore content that feels low quality and designed just to hack the search engine, and truly think about what I want. I think SEO will definitely not be able to fight such a system. I would even say SEO only existed so far because the amount of compute applied per search query is very little, right? You literally do the traditional ranking compute and then render the ranked, ordered list. But in AI, you can actually make the LLM think even longer, go through a whole chain of thought, do iterative search. SEO won't be able to fight that. Essentially, SEO needs a black box of, oh, this is potentially how the answer is going to be, and so I'm going to hack my content for that. But if the black-box compute is so expensive, that's very hard for you to do. I think we have a question down here in the front row.
Thanks. I just wanted to ask: to what extent are your plans shaped, or perhaps even constrained, by the cost of power, of electricity? I mean, we're already hearing about companies like Microsoft actually wanting their own nuclear power plants, and that's the level of energy it's apparently going to take to do a lot of this stuff. What is that going to do to shape the future of AI?
I think it's definitely cause for concern right now: can we just keep building more data centers? We are compute-constrained right now, and compute is essentially power-constrained. My hope is that two things happen. One is that the next generation of chips, the Blackwell chips, are going to be more compact, so you probably won't need as many server racks as right now, because each chip will have more compute packed into it. That will make it denser, and it will probably be more power efficient too. And the second thing is distillation. There's this concept in AI called distillation, where you can take a very expensive model and make it smaller by taking its intelligence and cloning it, teaching a smaller model to behave the same way as the larger model. That will reduce the compute cost. So together, more compact chips and smaller models, I think, can slowly bring down the power and compute requirements to serve the same product again.
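As a rough illustration of the distillation idea he mentions, here is a minimal sketch of training a small "student" network to match a larger "teacher" network's softened output distribution. The toy models, the temperature value, and the random data are illustrative assumptions; distilling a real large model works the same way in principle but at a vastly larger scale and on real training corpora.

```python
# Minimal knowledge-distillation sketch: a small student learns to mimic a big teacher's
# softened predictions. Models, temperature, and random data are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10)).eval()  # frozen "expensive" model
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))           # much smaller model
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature softens the teacher's distribution so the student sees more signal

for step in range(200):
    x = torch.randn(64, 32)  # stand-in for real training inputs
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / T, dim=-1)
    student_log_probs = F.log_softmax(student(x) / T, dim=-1)
    # KL divergence between teacher and student distributions, scaled by T^2 as is conventional.
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final distillation loss:", loss.item())
```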
Okay, let's see. Oh, how about you? And right there. Yes. Oh, do you want to get a mic? One's coming behind you. Hi, thank you so much for your talk, and I actually also enjoyed your conversation last summer with Lex Fridman. I just want to ask you a question. You talked in that conversation with Lex Fridman about focus, and I want to ask: what are the key strategic bets Perplexity is making in the next one to three years, and what are some areas you've consciously decided not to prioritize to maintain that focus? Okay. I think
the key thing is: we don't think three years ahead, mainly because AI is so fast moving and fast changing that it's kind of pointless to plan for three years from now. Just out of curiosity, what is the timescale you do think ahead on in AI? We still do quarterly planning. Quarterly. Yeah. Okay. Because you believe two quarters out you're not going to be able to predict the future well enough to run your business? Yeah. That's a big statement. I'm definitely not able to, and I don't think others are either. Because, okay, one thing I'll admit: this is the end of the quarter, and the first three months of this year have moved faster than the last two years for me. The level of change: nobody saw DeepSeek coming, nobody saw things like deep research happening this fast, nobody saw things like agents at least slowly beginning to work. And I think all of this is happening so fast that you've got to keep updating your mind on what the right things to do are. That said, we have a very clear plan on
expanding the utility of Perplexity to all the things that common people search for, which we call vertical structured answers: going beyond a wall of text. We don't want to just give you a textual response for queries related to weather or sports or finance or shopping or travel or local; this needs its own interactive, visual, structured experience, and it needs an answer that's very high quality and highly relevant. So we're working on all these challenges to make our product even better, and that will make the product useful not just for informational queries related to knowledge and research, but also for day-to-day life use cases, commercial use cases. I think that will set us up well for the next generation, where people are going to transact natively; they're going to just do their research on what products to buy or hotels to stay at, and once they get a very well-structured answer, they can just buy the stuff right there. So that's also going to change the future of business models on the internet. So we're working on that.
We're working on our browser, which is going to come out very soon, where there are going to be more agent-like searches, searches over your own personal data, not just the web: your calendar, your emails, your past browsing history, your workspace tools like Slack or Notion. All of that will be context for Perplexity to answer questions about you, learn more about you, and get more personalized to you. So we're working on all these pretty hard problems, and those were the calls we made on what to prioritize. As for de-prioritizing, you know, we've done many projects that sometimes didn't pan out the way we wanted, and we learned from them and iterated. For example, we had an experiment to grow our traffic through SEO, through this project called Perplexity Pages, but Google obviously down-ranked a lot of our content. It's not clear for what reasons, but you could guess why. But we learned from that and quickly adapted. So now Pages became part of another project we're doing called the Perplexity Discover feed, which is our version of giving you content even without you asking a question, because people are curious but they don't always know how to channel that by typing in a prompt; it's more work. So what if Perplexity does that work for you, where we go and generate 10 or 20 queries on your behalf, package it into really visually readable content, and give it to you right there? That uses all the stuff we built for Pages. So we tend to be pretty adaptive. Nothing is a failure at Perplexity; it's always a learning signal, and we take that and try to adapt the product more. I think we have room for one more question. I see right behind you there. All right. Hi.
Hello. So you've been talking a lot about compute resources, and obviously we keep seeing better and better GPUs, and companies are trying, as was mentioned, things like nuclear power. What do you think about exploring other technologies? I know that quantum computing, for example, is in very early stages. Do you think it could potentially be worth it, you know, to explore other technologies to train models, potentially, I don't know, quantum computing?
I think it's still very early. The breakthroughs that have happened, a lot of the noise that's been generated in quantum recently, have been more around very academic benchmarks: oh, can I get this type of computation to run on the quantum computer without any errors? Error tolerance was the recent breakthrough that Google announced, but that's still on a class of computations that are very small academic benchmarks, nothing of practical relevance yet. So for a product company like ours that's more focused on innovating on the core user experience, investing our energy and effort in something that's five, maybe ten years down the line is not the right decision right now. But I think it makes sense for the big tech companies to have teams working on this.
Okay. To close it out, are you good with a few rapid-fire questions? Sure. Okay. Do you watch the television show Succession? Yeah, I watched it. Which Roy sibling would you choose to run Perplexity? Probably Shiv. And why? Sorry, I said it was rapid fire, but quickly. I think she's the most composed of the... It's a low bar with those two others, but okay. Long or short Google? Just buy the S&P 500. That's my advice. Very diplomatic. Okay. Describe your management style in just one emoji. One emoji. I can tell already. It's not the exploding head, is it? No, no, no. Poker face.
Okay, good one. What's the most surprising app that we would find on your phone right now? Surprising app? Yeah, as in an app that not many people have? No, just whatever you think people would think: oh, Aravind has that on his phone. Can I check? Oh, you're actually going to do it. Okay. Can we get a close-up shot of... A surprising app? See, you're a very predictable person, I think. Well, I have a lot of unread messages, but that's not surprising, I guess. This is not rapid fire anymore. All right, I don't know. I think my phone is pretty... Maybe I can tell you an app I don't use that a lot of people do. I don't have Facebook. I don't have Messenger. Okay. I don't have a Facebook account, and I kind of recommend this. So short Meta, in other words. Okay. What motivational cliche do you secretly say to yourself in your head to keep you going?
I think there's this thing... you know, I know he's a controversial guy now, but I still think there's something very respect-worthy about what Elon Musk said years ago. Mhm. There's an interview, a 60 Minutes interview with him, and someone asked him, why didn't you give up, you know, when the third rocket failed? And he said, "I don't ever give up. I would have to be dead or incapacitated." And I think there's something very powerful there that all entrepreneurs should learn, which is: it's only over when you think it's over. Until then, you can always figure out a way, no matter how hard it feels at that moment. It's really only over when you give up. That's a good one. So that's what I personally use to motivate myself. That's a great one. And what's scarier, AI hallucination or VC questions? VC questions.
Okay. Well, before our program ends, I want to thank our two sponsors, JP Morgan and Gunderson Dettmer, the very best in the business. Jesse Bardau, Laura Stoflel, thank you so much for providing support for this event. And just to soft-close, Aravind, we have a bunch of very young people in the audience here who are just beginning their journeys. So just to close, can you tell us what has become clearer to you as you've aged? Not to say that you're aged, just that you've simply aged. What's become clear? I've aged a lot in the last two years. So, I think what's become clear is: keep your composure. Stress is going to be there if you're doing a company, but it's never as good as it seems in the moment, and it's never as bad as it seems in the moment. So deeply internalize this: when things are going really well, don't get too far ahead of yourself and just constantly, you know, attend parties; stay in the moment and all that. Be somewhat paranoid when things are going too well, and also don't be too stressed out and demotivated when things are not going well. I love that. Party on is your message. Very appropriate for Harvard Business School. Aravind, thank you so much for being with us. Thank you so much. You were great.
