Tesla’s FSD Just Became God-Level: Competitors Are Finished
By Brighter with Herbert
Summary
## Key takeaways
- **FSD V14 Solves Autonomy**: Most Tesla investors agree Tesla has solved autonomy with the FSD version 14 series and is just a few point releases away, with Elon saying version 14.3 lands the last big piece of the puzzle. [00:00], [02:07]
- **300x Faster Than a Human Blink**: Tesla's AI4 processes a million pixels of video in 1 millisecond, 300 times faster than a human blink, which takes 300 milliseconds, and faster even than pro gamers or F1 drivers. [00:44], [07:51]
- **Hardware-Software Co-Design Edge**: Tesla co-designs AI4 chips with its software teams for unmatched performance per watt and per dollar, processing 35 million pixels from 8 cameras into a 360° 3D environment in real time, something competitors buying off-the-shelf parts cannot match. [06:48], [12:20]
- **6.5B Miles ≈ 481,000 Human Years**: Tesla's FSD has driven 6.5 billion miles, equivalent to about 481,000 years of average human driving at 13,500 miles annually, making it the most experienced driver. [02:27], [02:32]
- **Lidar Adds Fatal Latency**: Waymo and Zoox accidents happen because fusing lidar data adds latency to split-second decisions when cars cut in, unlike Tesla's fast vision inference. [10:24], [11:01]
- **Cybercab at $20K, $1/Mile Wins**: The Cybercab costs half as much to build as an ordinary vehicle and a third as much as a lidar-equipped one; at $20,000 and $1 per mile it creates a new category that upends the ride-share business. [22:51], [23:27]
Topics Covered
- Tesla Processes Video 300x Faster Than Humans
- Context Compression: 1.5GB/s to 2KB/s Instantly
- Co-Designed Hardware-Software Crushes Competitors
- Lidar Adds Fatal Latency in Split Seconds
- Cybercab at $20k Creates Killer Category
Full Transcript
Most Tesla investors now agree that Tesla has solved autonomy with its latest Full Self-Driving version 14 series and is just a few point releases away. Elon said last week that version 14.3 is where the last big piece of the puzzle finally lands. But there's a very big advantage, very likely the biggest actually, that few understand, one that truly separates Tesla's vision neural network approach by miles from competitive solutions. We
will talk about that today. What I'm
referring to is the ridiculous speed it takes for the car cameras to interpret the world and then take action. Tesla
vice president of AI Ashok Elluswamy said Tesla's AI4 inference can process and understand a million pixels of streaming video in just one millisecond. Think
about this. It takes a person 300 milliseconds to blink once. A pro gamer or F1 driver can maybe do it in half that time, but Tesla's cars can
understand its 360 degree situation 300 times faster than humans. And here's the thing, no other company can match this.
Not for many years anyways. Why? Because
it's not just having the most powerful, most efficient inference chip. It's
about being able to integrate software, hardware, and data. No one else is doing all of these things together but Tesla.
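The speed comparison the video makes can be sanity-checked with simple arithmetic. A minimal sketch in Python, using only the figures as quoted in the video (none independently verified):

```python
# Figures as quoted in the video.
claimed_inference_ms = 1     # AI4: 1 million pixels understood in 1 ms
human_blink_ms = 300         # a typical human blink
pro_reaction_ms = 150        # "half that time" for pro gamers / F1 drivers

print(f"vs. human blink: {human_blink_ms / claimed_inference_ms:.0f}x faster")
print(f"vs. pro gamer / F1 driver: {pro_reaction_ms / claimed_inference_ms:.0f}x faster")
```

This is where the "300 times faster than humans" line in the conversation comes from.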
To help us figure this all out is Larry Goldberg. He's a serial entrepreneur and has been an active venture capital investor for the last decade. Check out
his website at lumicenti.com.
Thanks Larry.
>> Great to be here, Herbert. Nice to be with you.
>> Yeah, so I just wanted to bring this up because I think few people have spent enough time talking about it and I think it is probably the biggest competitive
advantage that Tesla has when it comes to autonomy or full self-driving. And
you can hear it directly from Elon and Ashok and all of his team members in their own words. They tried to explain it, but few of us have actually said what it actually means. So let's start off with this, where you've got Elon talking about version 14.3 being where the last big piece of the puzzle finally lands. Most of us, I believe, already realize that Tesla has solved autonomy and we're just a few point releases away, or maybe just another version away. Certainly, within
a year, we're there. And Tesla said, with 6.5 billion miles driven, FSD is basically the oldest and most experienced driver in the world. And Yun-Ta Tsai, one of the engineers at Tesla AI, said average humans drive 13,500 miles annually. This translates to 481,000 years of driving. The post shows "481 million," but I think that's supposed to be a comma, since 6.5 billion divided by 13,500 is about 481,000.
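The conversion behind that figure is straightforward to check; a quick sketch in Python using the numbers quoted in the video:

```python
# 6.5 billion FSD fleet miles vs. the quoted 13,500 miles per human per year.
fleet_miles = 6.5e9
miles_per_human_year = 13_500

equivalent_years = fleet_miles / miles_per_human_year
print(f"{equivalent_years:,.0f} equivalent human-years of driving")
# about 481,481 years, i.e. roughly 481,000 (not 481 million)
```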
Um, so you've got the data, you've got the, you know, the version, but I wanted to share this, which is, um, ignore this part here. This is where Elon replied to Google showing that they're able to do this fast context compression. And what Elon said was this: not sure if it's super new, as we've been doing something along these lines at Tesla for a while, but it does make sense. He's saying the Google thing is not super new because Tesla has been doing it.
The single biggest technical challenge of Tesla self-driving AI is context compression of 1.5 GB per second of video to about 2 kilobytes per second of control outputs using a puny inference computer without making any mistakes. Super hard to avoid overly lossy compression at any given step. We'll dive deeper into that, but what's your comment about what he's saying here?
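For scale, the two rates Elon quotes imply an enormous compression ratio. A quick back-of-the-envelope check in Python, using the figures exactly as quoted:

```python
# Elon's quoted figures: ~1.5 GB/s of raw video in, ~2 KB/s of control out.
video_in_bytes_per_s = 1.5e9    # 1.5 GB per second of camera video
control_out_bytes_per_s = 2e3   # ~2 KB per second of control outputs

ratio = video_in_bytes_per_s / control_out_bytes_per_s
print(f"compression ratio: {ratio:,.0f} to 1")  # 750,000 to 1
```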
>> Well, I mean, just think about it. He's
taking a 360 degree view of what's going on around the car.
And from that view, he has to say, "Go forward, stop, or turn this amount." One
of those three things.
and perhaps add to it the amount of speed or acceleration.
But he's taking these gigantic sets of inputs and translating them into a simple command.
Now this is kind of reminiscent of how transformers work. They take a huge amount of data,
>> Mhm.
>> and they respond, and they compress it to, you know, a next step.
And that next step is typically, you know, one or two or three or even hundreds of sentences, but typically it's dramatically less than what it takes in.
So it has to be done very quickly. There's very little time between the taking in of the scene and the giving out of instructions; it has to happen in very, very short order. So you think about compressing all that down and translating it into the next action.
It's an incredible process that has to take place in a very short period of time. If you've ever studied the work of a transformer, you know that it's doing many, many, many steps.
>> It takes this huge amount of input and translates it into that step, and all you've got in the car is a very small amount of energy available to be able to do that.
So it's, you know, an incredibly challenging task. And you think about what the brain does,
>> where the brain takes in not as much information but is able to translate it in very short order.
>> Very good example. Yeah. I mean, it's very similar.
>> A good way to explain it. Yeah: how you can just instantly walk into a room, understand the situation, by intuition already know something quickly, and you didn't even process it necessarily.
>> You're already reacting to what you've seen before you fully understand, you know, in your mind, what you're seeing.
>> Yeah, it's a very good metaphor. So, you know, this is the competitive advantage that Tesla has that no other car company, autonomous car company, can match. And here's why.
Ashok Elluswamy explained it. He said, um, not to mention these chips are heavily co-designed with Tesla AI software teams. So he's talking about the AI4 chips. Elon was saying, "Hey, we're making these AI4 chips. I'm going to create the greatest AI5 chip." But he's saying, "We are co-designing these chips with the software team to achieve incredible performance while also winning by a large margin on performance per watt and performance per dollar." So,
for instance, he said AI4, and that's the chip that's already in our cars now, can process and understand a million pixels of streaming video within one millisecond. This is only achievable because the software and the hardware are designed together to hit this performance point. So it's like, yes, other car companies can go buy Nvidia chips, you can go buy, you know, Waymo-style lidars, you can buy somebody else's neural nets or their version of self-driving, but if they're not designed together, you won't be able to accomplish this performance per watt and performance per dollar. And then this guy explained
it really well. He said, okay, Ashok said it understands 1 million pixels in 1 millisecond; what is the meaning of this figure? It takes about 200 to 300 milliseconds for a person to blink. It also takes about 200 to 250 milliseconds for a person to see something with their eyes and recognize it. Okay? And this is done in one millisecond. He said there's the pro gamer or the F1 driver, right? The best of humans out there, they can do it in half the time of a regular person. That's why they're so much better than most of us. But Tesla cars can understand their 360° situation 100 to 300 times faster than humans, in 1 millisecond, which is 1/1,000th of a second. Um, Larry, is it possible? I mean, I guess it's always possible, but, you know, that is the standard that Tesla's going after, and it's unlikely that other car companies can match it.
>> Well, I wouldn't be that certain that other car companies can't match that. But I mean, when you think about how long it's taken Tesla to
get there, and other car companies haven't started down the path yet, it's very hard to see how they'll catch up. Now, a lot of them, some of them, a handful maybe, are using lidar and other very accurate mapping devices, radar, and they're building to those. And, you know, Google has been doing it for years. I mean, Google started this project, I think, six years before Tesla began their project with a third party. I mean, where Waymo has got to, I think, is, you know, a dead end, because I think that the business model they can spawn from their technology just is not going to hack it, simply because they need so much care and feeding of their system to keep it up. So I think it's going to be very tough. Now, a lot of the Chinese firms are beginning to copy Tesla, and I think they will move at warp speed.
>> Sure. You need to copy Tesla, which means you need to have your own chip, your own supercomputer, your own vision cameras and so forth, and then you can optimize.
>> Supercomputers may become necessary, I think, to scale to the levels Tesla is talking about, but I think at the outset you can start where cost is not an issue, since simply getting the operating system going is what's necessary, I think.
>> But for superhuman safety, of course, you can reach, you know, standards that are
better than humans, but, you know, to be the best, um, like I keep repeating this, but people don't understand this: Zoox, who uses lidar, and Waymo with lidar have had accidents, and people say, well, lidar prevents accidents. No, no, no, it's the speed of decision-making that matters. And so some of the accidents that have happened with Waymo and Zoox were: the car is driving, you're the Waymo, and then another car decides to cut in front of you. It's not that lidar didn't see it. It's that the car didn't decide, oh, it's coming, I've got to move away. That split second is the difference here. Um, and it's not that the brain isn't fast enough; it's the inference. And that's where Elon was saying, well, you have to decide now: I've got lidar data, I've got vision data, and that adds latency to how long it takes for you to decide what to do. They sent software updates to fix it. But those are examples of how quick, you know, split-second decisions about what to do really matter.
>> You're probably correct, and I don't want to parse that out, because, you know, there are a lot of factors in that, a lot of factors. But I will say this:
>> It's one thing to get there; it's another thing to get there at a level of expense that just makes the whole game untenable.
>> Right.
>> Now, you know, China can manufacture some of these things at very low cost, and they can turn them out; they can even manufacture them below cost, below selling price, I should say, because of their crazy economics there. But it's going to be awfully hard to compete. In fact, I just don't see it happening. The system that Tesla has is so elegant, and people are saying, "Yeah, other people are doing it now. They've got to where Tesla is, or ways ahead of Tesla." Yeah, but at what cost? And just to finish off the explanation, it reads: so Tesla cars can read the 360° situation from eight cameras. It grasps the meaning of each and every pixel and reconstructs it into its own 360° 3D environment. For reference, about 35 million pixels come in at once from the eight cameras. You can think of it as almost understanding the situation around the vehicle in real time. That's AI4. Don't wait till you've got AI5 or AI6.
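As a rough sketch of what those camera numbers imply (the per-snapshot timing at the end is my own back-of-the-envelope combination of the two quoted figures, not a claim from the video):

```python
# Quoted: ~35 million pixels arrive at once from 8 cameras.
total_pixels = 35e6
num_cameras = 8
print(f"{total_pixels / num_cameras / 1e6:.2f} MP per camera")  # ~4.38 MP

# Quoted separately: 1 million pixels understood per millisecond.
# Naively combining the two figures gives an order-of-magnitude bound:
pixels_per_ms = 1e6
print(f"~{total_pixels / pixels_per_ms:.0f} ms to chew through one full snapshot")
```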
And it actually understands it, right, Larry? Like, it knows that that is a fire hydrant. It knows that, you know, there's a sign there that says something it can read. Um, and Elon's talking about a new chip every year. He wants to collapse the cycle by four times, because right now it's a new chip every four years. Now, even if he collapses it by two times, it's a killer. It's an absolute killer.
>> Okay. Well, I do think this is one of the key things. I'm going to show some videos, um, you know, FSD videos. We've seen a few of these, but it's just more, you know, walking around. Did you see that person walking around with a baby, of all things?
>> Okay, you look at it. So, like, okay. Uh, there, I saw it there.
What do you do? It's just incredible. She's not even walking on the crosswalk.
>> Yeah. So that's a baby there, and then, um, you know, detouring around construction zones. So you have to make a decision, right? You see an event: do I go around it? How fast is that car coming?
>> It's amazing.
>> Uh what do I do? What am I allowed to do?
>> And it needs to make that decision. And
then, of course, we've seen a few of these, which is, um, you know, the car being driven around. This is Vancouver, Canada.
>> Yeah, that could be my car: pulls out of my garage, backs up, finds its way out. It's amazing. I mean, the first time it did that was, I guess, 13.6, but now it does it with such authority and knows exactly what to do. It waits for the garage door to open. You know, it's got MyQ; it waits for MyQ to open the door, and it slides itself into exactly the spot that I, you know, want it to be at. It doesn't open the garage door itself; I have to open the garage door, but it then drives itself out.
I guess that's my point, too. So one thing we've talked about here is understanding the world in milliseconds, and you can only do that if you optimize the infrastructure, the software, the hardware, all together. Yeah.
>> Like at the same time, not separate companies.
>> But it's also the brains and understanding what's going on. Like, would you rather have a third lidar eye, or would you rather have a bigger brain? I go with the bigger brain, to understand the world and the context of what's happening. Um, that's what
>> I wouldn't mind an eye in the back of my head, but really,
>> Well, if you have a bigger brain, then, you know. Anyways, um, yeah. So, what do you think, Larry? I
mean, I know that you focus a lot on the business model. That, at the end of the day, is what matters,
>> because once you hit a certain level... let's say somebody is five times safer driving than a human, but we are 20 times safer. Maybe they're still equal in the eyes of ordinary people. What people care about is things like this, right? Does it understand that that is a toll booth, and I'm supposed to stop there and only leave when it's done? Those kinds of intelligence, right?
>> Yeah. The first time it did that to me, I thought it was a mistake.
>> I thought, "Oh, no. It couldn't have done that. It can't do that." And then when we got to the next one, it did the same thing. I thought, "My god, when did this happen?" You know, it's amazing. It's just amazing.
>> Yeah. You know, the level of intelligence, um, is very exciting, more so because it points to what can be done with, you know, Optimus. I mean, what's coming with Optimus is even beyond this level.
>> You know, the folks at ARK did a piece where they said Optimus is going to be a thousand times more difficult than driving. I actually disagreed with them and came up with a number dramatically less than that, for various reasons. But this is just training, and when I say training, I mean it in the broader sense. This is just Tesla feeling its way towards what it's going to have to do with Optimus, you know, because the variation is going to be many, many times this. But it's fantastic to be driving this now.
>> Wow. Yeah, that's a good reminder. I think with Optimus, compared to a car, you have a little bit more input, just a bit more, right? You have audio, you have touch sensors. But the output,
>> a thousand times more degrees of freedom,
>> which is the output, yeah. In a driving car it's steering and direction and speed and brakes, which is just a handful of things, versus Optimus, where it's the position of each finger joint and the movement of the wrist. It's like a thousand things happening.
>> You know, it's the ankles, the knees, the head, the arms, the fingers,
>> the body, you know. The degrees of freedom are a really huge explosion, and that's going to be complicated.
So, Larry, I was talking to somebody about this recently, and they agreed that version 14 is already as safe as a human, in fact safer than humans, but said that the average person will only adopt FSD when it can do things like, you know, park in a handicap spot perfectly straight, handle angled parking right, those kinds of things. And I don't agree with that. I think the one thing people will want... first of all, safety is so important, but it's actually not a decision-making factor for most people. It's texting and driving. I do agree with Elon when he said that it's texting: if my wife hears me, my daughter hears me, my friend hears me say, "Oh, I texted and drove," or "I was on a Zoom call while we were driving," what? That's when they might adopt it. But do you think that, no, it needs to do all these edge cases first before the average person will go, "Okay, you've solved FSD, solved autonomy"?
>> Well, I will tell you that everybody I see driving next to me when I'm on FSD is texting while they're driving. So,
>> yeah, 30% of all accidents happen because somebody was texting. Yeah.
>> It is incredible. But I, you know, I look around all the time, because I'm able to, and I see everybody is either on the phone, or they're looking at something on their phone, or they're texting. It's like 70%, 80%; just look around you. So there's no way that it's not going to be safer with the current level of FSD without a driver paying attention.
So, you know, the driver paying attention now is more a matter of regulation than it is, you know, a necessity. So I think it all turns on regulation. I think we're there. You know, I think this version that we have right now, 14.2, is good. It's not as good as it's going to get; it's going to get a lot better. But it's good enough, um, good enough for, you know, 80% of drives, some drives, you know.
>> But when will the vast majority of people, not just me and you, the early adopters, when will they go, okay, it's solved and ready, I'll be using it now? You know, when we were discussing this at a brainstorm,
>> I said it's going to be, you know, first quarter next year, in the March time frame, that I think we'll get to FSD unsupervised. I think when we do get to unsupervised, the rate of adoption is going to skyrocket. That's my view.
>> Can you be at unsupervised but not solve the edge cases, like not being able to park properly?
>> No, you have to do all that properly.
>> Okay, that's what I'm asking: whether they have to solve those little things at the end, versus just the fact that today it drives.
>> Those little things at the end are trivial. I may be wrong, but I think they're trivial. There will be cases that have to be addressed, but I think they're trivial.
>> Yeah, agreed.
Okay. Interesting. So, business model: what's your list of the things that differentiate Tesla from other competitors in terms of solving autonomy? Business model is your first one, right?
>> Well, solved autonomy... I think autonomy is going to be solved progressively over time. The issue is not autonomy; the issue is economics. Can you solve autonomy at a cost, and with the support level that is necessary, to turn it into a business model? Now, autonomy is not going to sell a huge number of cars, to be honest, at the outset. I think the point about autonomy is robotaxi, and ultimately it's really about the Cybercab. If you can make a Cybercab at half the cost of manufacturing an ordinary vehicle, and a third of the cost of manufacturing an ordinary vehicle with all the lidar and gizmos and gadgets, then it's game over.
And if you can do that with the level of infrastructure that Tesla already has, then that's like the icing on the cake.
So whether you'll sell a lot more cars, and whether people will actually buy it,
>> um, I think that's an issue at the margins that's not going to affect the big picture. It could certainly add three, four, five points to Tesla's margin, and so it could be great to have, but it's the icing on the cake. The big issue is the Cybercab, ultimately the Cybercab, the Cybercab, and the Cybercab. I mean, that car at $20,000, at a dollar a mile, it's a killer. It's not a category killer. It's a category creator.
>> Yeah.
>> It creates a category that we don't have yet. Yes, it'll kill everybody in the, you know, in the ride-share business, but it's going to create a whole new level of ride share.
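Larry's dollar-a-mile claim can be framed as a simple cost model. In the sketch below, only the $20,000 price comes from the conversation; the lifetime-miles, energy, and maintenance numbers are my own illustrative assumptions, so treat this as the shape of the argument rather than real economics:

```python
# Hypothetical per-mile cost for a $20,000 robotaxi.
vehicle_cost = 20_000          # quoted Cybercab target price
lifetime_miles = 300_000       # ASSUMED vehicle lifetime
energy_per_mile = 0.04         # ASSUMED $/mile for electricity
maintenance_per_mile = 0.03    # ASSUMED $/mile for upkeep

depreciation_per_mile = vehicle_cost / lifetime_miles
cost_per_mile = depreciation_per_mile + energy_per_mile + maintenance_per_mile
print(f"~${cost_per_mile:.2f} per mile before overhead and margin")
```

Even under these rough assumptions, the hardware-driven cost floor lands well under the $1-per-mile price point the video cites, which is the point Larry is making.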
>> Oh, inspiring. Yeah, and we're just getting started. We're here, and think about where we're going to be a year from now; we're going to have AI5 maybe a year or two from now. AI5, and then you've got something so much safer than the human, even better than this 1 million pixels per millisecond. It's going to have all the edge cases probably solved by then, and it's going to be just flawless. Unbelievable. We're there. And then the business model just keeps getting better and better and better. And so, yeah, you think a dollar, right? Anything under a dollar is a category creator. And, uh,
category creator. And uh >> Oh, yeah. I mean, even at $2 because where am I right now? Oh, like $5. They
say they're at >> Yeah. cost them too much. They're trying
>> Yeah. cost them too much. They're trying
to make their money back. Wonderful.
Thank you. Love that we had this conversation, Larry. Appreciate you very
conversation, Larry. Appreciate you very much. Check out his new website,
much. Check out his new website, lumisenti.com. Follow him on X at Tesla
lumisenti.com. Follow him on X at Tesla Larry. Thanks. Happy Thanksgiving. I've
Larry. Thanks. Happy Thanksgiving. I've
created a website that is the most comprehensive resource for the Tesla investor. Please check it out. Simply go
investor. Please check it out. Simply go
to my website at herdomm.com.