
TanStack DB + Electric SQL is the stack I’m most excited about right now

By Kevin Wade

Summary

Key Takeaways

  • **Electric SQL uses Postgres logical replication**: Electric SQL puts a service in front of Postgres, using logical replication to give you a reliable way to access a live slice of data. It avoids the insane complexity of building a custom sync engine like Linear did. [01:08], [01:41]
  • **HTTP long polling enables caching**: Electric SQL exposes an HTTP endpoint using long polling instead of WebSockets, getting all the benefits of HTTP such as caching, ETags, and 304 Not Modified responses for very fast client reloads. [02:42], [03:06]
  • **Transaction IDs resolve optimistic mutations**: Every Postgres transaction has a transaction ID; return it from your custom API and TanStack DB uses it to know exactly when your local optimistic mutation has synced back through the full stack, with no blinking or heuristics. [09:19], [10:08]
  • **A custom proxy handles auth and filters**: Build your own proxy to Electric SQL's HTTP endpoint to add authentication, authorization, and static where clauses limiting data visibility per user, without any prescribed auth system. [03:38], [25:13]
  • **useLiveQuery re-runs on data changes**: TanStack DB's useLiveQuery hook lets you query local data like a database, with joins and filters; it automatically re-executes and recalculates whenever underlying data changes anywhere. [32:46], [33:21]
  • **Real-time collaboration in the debt calculator**: In the debt payoff calculator demo, changes like updating balances or payments instantly sync across multiple clients via Electric SQL, reordering rows and recalculating timelines in near real time. [14:39], [15:37]

Topics Covered

  • HTTP Trumps WebSockets for Live Data
  • Own Auth via Custom Proxies
  • Transaction IDs Solve Optimistic Sync
  • Live Queries Enable Client Joins
  • Incremental Adoption Fits Existing Stacks

Full Transcript

I have been wanting to make this video for a while. I am super excited about TanStack DB and Electric SQL. If you haven't heard of those, or you have and you don't know what they are, that's okay. That's what I'm going to go over in this video: both what it is conceptually, and then a walkthrough of an example app that I have built, so we can see how all of this works together.

Let's go over what all these different pieces are. TanStack DB is a client-first store. It helps you manage data locally, load it up from some remote source, and also do live queries on it locally and optimistic mutations whenever you're updating data. It does all of this in a really cool way; I'll go over that in just a bit on the diagram I put together.

Next up is Electric SQL. Electric SQL is a way to put a service in front of Postgres, still in your private infrastructure, although I believe they have a cloud offering as well. It uses Postgres's logical replication, and it essentially gives you a very reliable, robust way to look at a live slice of data in Postgres. Most of us haven't tried to build a sync engine, because it's super difficult, but if you've ever tried to mimic one, synchronizing remote state locally, you know it's an incredibly hard thing to do. Linear actually gave a talk on the custom sync engine they built in-house from the ground up. I watched it; it's a few years old at this point. It's absolutely insane. No one should try this. It's crazy what they had to do, and you can achieve the same thing with these tools.

So let me go through this diagram, and then let me show you some example code of how this will work. Everything in blue on this diagram is your reads; everything in red is your writes. We'll start with Postgres and go up. So we've got Postgres. It's a database. Sitting on top of that, or in a different service layer still within your private infrastructure, is Electric SQL. What Electric SQL does is let you subscribe to slices of your data; they call these shapes. And it exposes an HTTP endpoint.

Now, HTTP is a really simple, quote-unquote, mechanism, and Electric essentially uses long polling to keep the data updated. You might think, why wouldn't you use WebSockets or something like that? One of the really great things about HTTP is that you get all of the benefits that come with it, for instance caching and cache layers. So when the front end is pulling data down, if that data hasn't changed since it was last queried from the server, and the person refreshes the page and does a reload, it can use all the HTTP caching. Even if it's not public Cloudflare-style caching, even just ETags and 304 Not Modified responses make that super fast at the client layer.

So Electric exposes an HTTP endpoint. What you can then do is make your own proxy to this endpoint within your custom backend, where you can do auth checks and data filters. If a user wants to subscribe to a set of data, inside of that proxy you can make sure they're authenticated, but also make sure they're authorized to access that data. You can apply any where filters you might need in order to limit the visibility of their data.

One of the things that I love about this approach, and I especially love it on the write side, is that this pairing of TanStack and Electric SQL does not at all prescribe any sort of way to do authentication or authorization. Some of the tools out there that offer sync engines and local-first data that synchronizes with the server not only kind of own your backend and your backend data, but often come with a proprietary authentication and authorization system, which is super annoying. Whether it's row-level security or field-level security, it may not quite fit your use case, especially once you get to an app of substantial size. In this case, you can use whatever authorization checks you need and apply any sort of data filters by simply composing a where clause for a database query.

All right. So that is the backend. Again, we're reading data so far. Once we get up to the front end, TanStack DB is kind of your local data store. The real-time reads and real-time data updates can be powered by Electric SQL, by other providers that TanStack DB supports, or by custom ones that you may want to write. At that point, your UI is able to use live queries to subscribe to data as it changes, and it can operate on that data as if it were local. You can update that data locally, you can read that data locally, and it becomes super fast.

So that is the read side of the house. Let's talk about writes.

This is often, again, where if you're using some other system that's trying to provide local-first data or a sync engine, it has a very prescribed way to do writes, and you end up figuring out how to do database-type transactions on the front end, which is very strange. This stack takes a bit of a different approach, although some elements are similar.

When doing writes, you can issue inserts, updates, and deletes locally with TanStack DB. So if you're doing very simple operations, a simple insert, update, or delete of a record, you can do that locally with TanStack DB. It has callbacks that then handle calling some remote endpoint to persist those changes. Again, this is completely non-prescriptive. You can call your own custom API for whatever insert, update, or delete endpoints you want to hit. These could be HTTP endpoints, they could be REST, they could be server functions, which would be a great use case if you're on TanStack Start, which my example here in a minute will be. You could use GraphQL, you could use RPC, you could use anything that you want. And again, with it being your own custom API, you are handling your own authentication and authorization.

So, let's start with the simple operations. When operating on data locally, first, because you're writing to local data, your UI becomes super responsive and you only have one code path to handle. If you're writing to local data and your interface is rendering local data, you don't have to worry about that in-between loading state, because it frankly doesn't exist. Once local data is mutated, TanStack DB calls the onInsert, onUpdate, and onDelete operations to your custom API. They so cleverly implement a way to handle optimistic mutations. You need to hang on to that temporary state of the data until the server fully syncs, right? But there could be a lot of different operations happening. You could be doing all sorts of different things, and other users could be doing different things.

And if you have a highly collaborative environment, how do you know when that one specific operation of yours came back? It may not be in the very next tick of data, the very next update payload; maybe someone else's operation happens first, and then yours disappears briefly and then comes back. That's a very complex thing to design and handle in most cases. There's an incredibly simple and effective approach that this stack takes here. Every single Postgres transaction has what's called a transaction ID. If you just slightly alter your custom API to return that transaction ID as part of its payload, TanStack DB will then use it as the identifier for when the mutation has been synced and incorporated back into the remote data. So you return the transaction ID, and the optimistic state stays in place. The write makes it through Postgres, it replicates to Electric SQL, and it makes it through your proxy, which is being polled by TanStack DB and includes that transaction ID. So TanStack DB knows, at a database transaction level, when that data is reincorporated. There's no blinking, there's no guessing, there are no heuristics. You get a solid, reliable identifier of when that data is updated.

So what's nice about this is you can see we're developing one read path for our data, and mostly one write path, although I have an alternative for you as well. Let's say you have some advanced or custom operations: something that needs to operate on a lot of different data at once, where it's not a matter of a single insert, update, or delete. Your UI can still call whatever custom API you want, and at that point, because the source of truth is Postgres, it all comes back to this: you just have to update Postgres. That's it. You don't have to do all of the writes or updates through TanStack DB. You can do offshoots through API endpoints, and as long as they're writing to Postgres, which everything should be, that will sync back up through your read layer. So this offers you incredible flexibility: your default option is doing the simple CRUD-like operations in TanStack DB, and for the more advanced things you go the custom route, with everything still syncing back through TanStack DB.

So that is really, really cool. It means that TanStack DB is not requiring you to follow any sort of paradigm. You could adopt it incrementally within your app; it doesn't have to take over everything. I mean, that's what's really cool about this. In our app Baton that we're building, we're using this setup for our AI interactions. When you're chatting with Baton, things could happen while an AI response is coming back in. There could be network disconnections. You could refresh the page. Even under great conditions, you want to see the data stream in. We're doing all of this with Electric SQL and TanStack, and it's working really, really well.

So that is the conceptual overview. Let me dive into an example app that I've created. This will be open source; I'll put it up on GitHub and link to it in the description.

So the app here is a debt payoff calculator. It's a way to put in any sort of debts you may have, credit cards, school loans, car loans, etc., and then set a monthly budget for what you'd like to contribute to paying them off. It will then give you a payoff schedule: how long it will take, and an approach to take. There are two different approaches someone might take with this. Avalanche style is where you pay off the highest interest first; that's the most efficient, mathematical way to do it. Then there's snowball style, which says you pay off the lowest balance first, to get that psychological juice going of, hey, I'm crossing debts off the list.

So this is a really great use case for this sort of example, because I have two different data models in the system. One is a workbook, which is kind of everything you see here; think of it almost like a spreadsheet. The other data model is your debts. Those are linked to a workbook, and you have all of these different rates and balances on them. The settings for how you want to do this are saved on the workbook itself: your monthly budget, your strategy, etc. As you make changes here, all of this data changes and the table recalculates. You can say, okay, I don't have this personal loan anymore, so I'm going to delete that, and everything's going to come back in here. We've paid off a few thousand on the car loan, so I'm going to update that, and that now recalculates all the data.

And if you were collaborating with someone on this, y'all could pull up the same screen and see the data side by side. In fact, let me take a look at that. If we're looking at this demo data and I increase our payment, I can see that our timeline is shortening and our total interest is going down. I'm making this change locally in my browser. It's syncing back up to the server and going back through Electric SQL. All the clients who are querying this data live are receiving updates from Electric SQL, and their TanStack DB is updating the local state within their browser. So with this setup you get real-time collaboration and sync almost for free. You have a few trade-offs, but almost for free. Again, let me change these balances. I'm going to change this balance, and these rows will reorder. You can see that happen on both sides, and you can see how incredibly fast it is. As soon as it saves on the right, it happens nearly instantly on the left. Those animations are almost in sync. That's how fast it is, even going through that whole stack.

So, this is the example app. Look, I can rename this; I can say these are our joint finances, and that renames it over here. I can go back and see my workbooks here. If you have more than one workbook, maybe you're helping a friend with these sorts of things, then you can have multiple workbooks here. But yeah, that's kind of the example app. Let me walk you through some of this code and how it works.

The first thing you do in TanStack DB, well, after you set up your infrastructure, is get Electric SQL running, and it runs easily in Docker Compose. In fact, I'll just show that to you here. I have a pretty standard Postgres setup, although we are configuring it for logical replication. Then we have our Electric SQL instance, connected to the Postgres instance. It exposes a port that, again, would not be publicly accessible to the internet in production. This would be exposed to your local private network within your infrastructure, and then your backend exposes it through a proxy.
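A setup like the one described might look roughly as follows. This is a sketch, not the repo's actual file: image tags, env var names, and the port are my best guess from Electric's docs and should be verified against them.

```yaml
# Sketch: Postgres with logical replication enabled, plus Electric wired to it.
services:
  postgres:
    image: postgres:17
    command: postgres -c wal_level=logical   # required for logical replication
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_DB: app
    ports:
      - "5432:5432"

  electric:
    image: electricsql/electric:latest
    environment:
      DATABASE_URL: postgresql://postgres:password@postgres:5432/app?sslmode=disable
      ELECTRIC_INSECURE: "true"   # dev only; in prod, your own proxy fronts Electric
    ports:
      # In production this port stays on the private network; the app's
      # proxy is the only public entry point.
      - "3000:3000"
    depends_on:
      - postgres
```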

That's the Electric SQL side. Let me flip concepts for a bit: let me go back to the top of the stack, and we'll look at the UI side of this first, then maybe work our way down from there. The first thing you do in TanStack DB is define what are called collections. A collection you can really think of as a table. We have two collections in this app: one is workbooks and one is debts. So we have our workbooks collection here, and you map it to a Zod schema. If I take a look at this, I've got our workbook schema here: ID, name, monthly payment, strategy, created at, updated at. That helps TanStack DB locally validate your data, know the expected shapes, and provide all the nice typings that you might expect in TypeScript.
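A collection definition along the lines described might look like this. It's a sketch based on my reading of the `@tanstack/react-db` and `@tanstack/electric-db-collection` APIs (names may differ by version), and `createWorkbook`/`updateWorkbook`/`deleteWorkbook` stand in for the app's server functions:

```typescript
import { z } from "zod";
import { createCollection } from "@tanstack/react-db";
import { electricCollectionOptions } from "@tanstack/electric-db-collection";
// Hypothetical module holding the app's TanStack Start server functions.
import { createWorkbook, updateWorkbook, deleteWorkbook } from "./workbook-server-fns";

// Zod schema shared by front end and backend.
const workbookSchema = z.object({
  id: z.string(),
  name: z.string(),
  monthlyPayment: z.number(),
  strategy: z.enum(["avalanche", "snowball"]),
  createdAt: z.string(),
  updatedAt: z.string(),
});

export const workbooksCollection = createCollection(
  electricCollectionOptions({
    id: "workbooks",
    schema: workbookSchema,
    getKey: (row) => row.id,
    shapeOptions: {
      // Points at your own proxy, not Electric directly: the proxy is
      // where auth checks and server-side where filters live.
      url: "/api/shapes/workbooks",
      params: { table: "workbooks" },
    },
    // Callbacks bridge local mutations to the custom API; each returns the
    // Postgres txid so TanStack DB knows when the write has synced back.
    onInsert: async ({ transaction }) => {
      const { txid } = await createWorkbook(transaction.mutations[0].modified);
      return { txid };
    },
    onUpdate: async ({ transaction }) => {
      const { txid } = await updateWorkbook(transaction.mutations[0].modified);
      return { txid };
    },
    onDelete: async ({ transaction }) => {
      const { txid } = await deleteWorkbook(transaction.mutations[0].key);
      return { txid };
    },
  })
);
```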

The shape options are kind of Electric SQL speak, and they give you a way to filter your data. In this case, we're just saying that we want the table workbooks. And on the server side, in our proxy, is where we further limit the data to only what this user has access to; I'll get to that in a second. Then you define callbacks for onInsert, onUpdate, and onDelete. This is where you connect up what happens when something changes in our local data on the front end within TanStack DB: how do we write that to the server? In this case, all I'm doing is calling three different server functions that I have: one called create workbook, one called update workbook, and one called delete workbook. Those are server functions that authorize the user through our Better Auth setup, write to Postgres, and return the transaction ID. In fact, let's dive into that a bit and take a look at those server functions.

What would be a good one here? Let's do update. We have an update workbook server function. We're creating a TanStack Start server function and validating the input: we expect a workbook, pick off only the keys that we want, and also say that the user can pass in partial data. So if you're just updating the name of the workbook, only provide the name. We don't need you to provide all the values; in fact, that's worse, because then you may be overwriting someone else. We're using the same workbook schema here, so we're sharing this Zod schema with both the front end and the backend, which is really cool. Then down in the handler, we have an authorization check to authorize this user for the workbook. Again, this is our own handwritten, home-baked authorization check. Then we're doing a Prisma transaction to both update the workbook and get the transaction ID, and as part of the return, we are returning that transaction ID. We actually don't need to return anything else; I'm just doing that here. The local client has already updated locally, and this transaction ID marks the end point of the optimistic state, when TanStack DB can rely on the live data once this transaction flows through the whole system.

If you're familiar with Prisma, you may be like, "Oh my gosh, how do you get that transaction ID?" Well, it's not baked in. You have to do this a little bit manually, but as you can see here, I just have a nice little utility function that I've hidden it within, and then you can use that throughout. Let me show that to you here as well. Again, we're doing this inside of one Prisma transaction. We are using the built-in Postgres function for getting the transaction ID and simply returning that back. And that's it.

That's what it takes to perform the writes to our data. Yeah, it's pretty easy like that. I did skip over what the actual write looks like in code, so let me show you what this new workbook button looks like. Again, we defined a workbooks collection here, which is a local collection describing where to connect to Electric SQL and what to do in the callbacks that save data to the server. What does it look like to actually work on local data? This new workbook button takes that workbooks collection and runs an insert on it. We're inserting with just a UUID that we are generating here, utilizing the new Postgres UUIDv7s in the latest Postgres version, 18, which is cool, and setting some default values. This is getting validated against our Zod schema, so it is assured to match the schema that we want. And then we go ahead and navigate to that workbook.

So this workbook will exist instantly in our local collection, and meanwhile it will be saving to the server as well.
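The button handler might look roughly like this. It's a sketch with assumed names (`workbooksCollection`, the route path, and the `navigate` helper are placeholders for the app's own code); the `insert` call is TanStack DB's collection API as I understand it:

```typescript
// Insert locally for an instant UI; the collection's onInsert callback
// persists it to the server in the background.
function handleNewWorkbook() {
  const id = crypto.randomUUID(); // the demo generates Postgres 18 UUIDv7s instead

  workbooksCollection.insert({
    id,
    name: "Untitled workbook",
    monthlyPayment: 0,
    strategy: "avalanche",
    createdAt: new Date().toISOString(),
    updatedAt: new Date().toISOString(),
    // Note: no ownerId here. Ownership is assigned server-side from the
    // authenticated session, never trusted from the client.
  });

  // The row already exists locally, so this navigation renders instantly.
  navigate({ to: "/workbooks/$workbookId", params: { workbookId: id } });
}
```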

All right, so that's what an insert looks like locally. TanStack DB then takes that, calls this onInsert callback, and that lands at our server function, which actually does the inserting of data into our database, again with auth checks, again setting all the proper information. There's a good example here: we're not setting an owner ID for this workbook on the client, because that's not something you want to rely on the client to do. Instead, we're setting who owns this workbook on the server side, based on who made the call. So that's a really great example of how you can handle the things that can safely be done on the client, but keep the security mechanisms on the server.

Okay, those are the writes. I'm kind of surprised I started there, but let's go back through the read layer now. I think you probably know how Postgres works if you're watching this video. Again, Electric SQL is a layer on top of Postgres. Then we're running a proxy here, which eventually makes it up to that TanStack DB collection that I just showed off. So let's look at the proxy.

The proxy here is simply an HTTP endpoint that the client is able to call; TanStack DB handles calling it for you. We are authenticating the user through Better Auth, throwing errors if they're not authenticated. We are then taking the search parameters and applying some checks. Here on our custom proxy, we are defining the allowed tables that the client is able to listen to. Again, we're trusting nothing from the client. We are doing server-side checks on everything to make sure they can't just query the database for anything they want to see. So the client can only access workbooks and debts.

And then, even within that, let's go to workbooks; that's simpler. For workbooks, we are applying a where condition that gets passed to Electric SQL, saying only return workbooks where the owner ID is the current user ID. The client doesn't set this. This is set by the server, and server only, so it ensures that this client is only getting workbooks that they have access to. Similarly, for debts, we only want to return debts that are part of one of that user's workbooks.

I'm honestly not 100% sure if this is the best approach; I would be interested, if any TanStack DB or Electric people are watching this video, what the recommended approach would be. I'm still exploring this a bit, but the way I'm doing it here is I'm fetching all the workbooks for this user and then applying a where clause that says the workbook ID is within this user's workbook IDs. I think this works fine for this app, because you're not often adding and removing workbooks; this is something that changes infrequently.

One of the requirements of Electric SQL is that the where clause has to be static; it cannot depend on dynamic data. For instance, if you want to limit data with respect to time, you couldn't say "just show me the last 30 days," because the last 30 days is always changing. You could say "return data from 2025 or newer," because that is a where clause that doesn't change; it's always going to be 2025 or newer. These ID-based clauses are also considered static where clauses (there may be a better term for it), because even though your list of IDs may change, the rendered where clause doesn't change. And if your set of IDs does change, it's simply considered a new shape within Electric SQL, and that creates a new subscription on that side.

So this is an okay approach, but it has one or two issues you have to deal with, because when it's polling, the current poll may not include a brand-new workbook; that brand-new workbook would only appear on the next HTTP call. For instance, you may have noticed I'm doing one little thing here on navigate: I'm actually reloading the document when navigating to a new workbook, so that when it subscribes to the debts, the new workbook ID is part of that collection. I'm sure there is a better way around this. Again, it's fairly new to me, and fairly new technology, so I'm awaiting best practices on how to do this.

But that is really the proxy. It takes this information, passes it on to Electric SQL, and returns Electric SQL's response. To show you what these calls look like, I have refreshed the page and opened the network inspector. This is calling our proxied Electric SQL endpoint and providing some conditions: our condition of "give me the table workbooks," but also an offset for the data. An offset of negative one says "give me everything from the beginning." You can see various calls come in as this polling happens. This polling, as you could see when I had the screens side by side, is very efficient and very quick. These are all 200s, but sometimes these will come across as cached, and the server doesn't actually have to return the data; it can just return that Not Modified status code.

If we take a look at the payloads, you can see here are the workbooks, and these are only my three workbooks; you can't see anyone else's, and they come in like so. So if I come over here, rename this to "electric demo," and hit enter, one of the things that just came across is the update from the workbooks table: the value came across, it was this ID, "electric demo" was the new name, and the updated-at timestamp changed. So it's only returning the partial record updates and merging those in locally. Again, if I had a network disconnection, if I went online, offline, etc., it's able to reliably merge that data based on transaction IDs and, I believe, replication-log timestamp offsets, not just updated-at timestamps that could be manipulated.

be manipulated. Now one of the critiques of when this first came out is that what essentially happens on page load is that

it loads all of the data all of the data you have access to. So, all of my workbooks, all of my debts, no matter if I'm looking at a particular workbook or

not, depends on how you have your collection set up, but for the most part, it's loading all of the data up front. Now, for this type of app, that's

front. Now, for this type of app, that's totally fine. This is a very small

totally fine. This is a very small amount of data. And actually for most apps, you can go further than you think you can as far as what you can load up

front, especially if you're doing so in an efficient manner and especially if you're doing so with the caching set to

where it's attempting to upload all the data, but really all the data doesn't have to be transmitted over the wire if it's even just cached locally in the browser by the built-in HP cache. ing

like we're not talking build your own cash here that becomes incredibly efficient to do beyond just that first load. Now there's this new version of
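The partial-record merge described above can be sketched in plain TypeScript. This is an illustrative sketch, not the actual TanStack DB / Electric SQL API — the `Workbook` shape and `applyPartialUpdate` helper are hypothetical names standing in for the real sync machinery, which only ships the primary key plus the changed columns:

```typescript
// Hypothetical sketch: the sync stream delivers a partial record update
// (the key plus only the changed fields), and the client folds it into
// its local copy of the collection.

interface Workbook {
  id: string;
  name: string;
  updatedAt: string; // ISO timestamp
}

// A partial update carries the primary key plus only the changed fields.
type PartialUpdate = { id: string } & Partial<Omit<Workbook, "id">>;

function applyPartialUpdate(local: Workbook[], update: PartialUpdate): Workbook[] {
  // Spread the changed fields over the matching row; leave other rows alone.
  return local.map((row) =>
    row.id === update.id ? { ...row, ...update } : row
  );
}

const workbooks: Workbook[] = [
  { id: "wb1", name: "Demo 1", updatedAt: "2024-01-01T00:00:00Z" },
  { id: "wb2", name: "Demo 2", updatedAt: "2024-01-02T00:00:00Z" },
];

// The rename in the video: only name and updatedAt come over the wire.
const merged = applyPartialUpdate(workbooks, {
  id: "wb1",
  name: "electric demo",
  updatedAt: "2024-01-03T00:00:00Z",
});
```

The real implementation additionally orders these merges by transaction ID / replication-log offset so that reconnects after going offline replay reliably.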

Now, there's a new version of TanStack DB that was just announced and released — TanStack DB 0.5. I believe it has the ability to pass parameters from your queries on through to the read query. I'm not sure if Electric SQL has quite gotten up to speed on that — I don't know as of the recording of this video — but they are making it so you do not have to load all of the data up front. So if you have any hesitations like that, don't worry; it's actively being worked on.

Now, I'm realizing as I'm telling you this that I actually haven't shown you how you read the data from the components. So let me go back to the dashboard here. That's one of the coolest things — I almost missed it.

Okay, so let's say we want to get the data for these workbooks — and I'll flip over to debts as well. TanStack DB provides a hook called useLiveQuery, and it looks roughly like a database query: you say you want to query from the workbooks collection, and you want to order by — in this case I'm ordering by updatedAt descending. So "electric demo" was the one I just updated in the other window, by the way. Demo 2 — let's say I make that Demo 3. I'm going to go back here, and now it's the first one, because it's the most recently updated.
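What that live query computes can be sketched with a plain array standing in for the synced collection. The real thing is TanStack DB's useLiveQuery hook; this sketch only illustrates the "order by updatedAt descending" semantics, and the `Workbook` shape is assumed from the demo app:

```typescript
// Conceptual equivalent of the live query: most recently updated first.

interface Workbook {
  id: string;
  name: string;
  updatedAt: string; // ISO timestamp
}

function orderByUpdatedAtDesc(workbooks: Workbook[]): Workbook[] {
  // ISO 8601 timestamps sort correctly as strings, so a string compare works.
  return [...workbooks].sort((a, b) => b.updatedAt.localeCompare(a.updatedAt));
}

const workbooks: Workbook[] = [
  { id: "wb1", name: "Demo 1", updatedAt: "2024-01-01T00:00:00Z" },
  { id: "wb2", name: "Demo 3", updatedAt: "2024-01-03T00:00:00Z" },
  { id: "wb3", name: "electric demo", updatedAt: "2024-01-02T00:00:00Z" },
];

// After the rename in the other window syncs in, the query re-runs and the
// most recently updated workbook comes out first.
const ordered = orderByUpdatedAtDesc(workbooks);
```

The difference with the real hook is that you never call this yourself: the collection updates arrive over sync and the query result recomputes on its own.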

This query gets re-executed and recalculated anytime the underlying data changes, which is super cool. And it does it in, again, an incredibly efficient manner. They've made a lot of claims about the efficiency of this and that it's not a naive implementation, so even with a large amount of data, this is supposed to be very, very efficient. So that's what a simplistic query looks like. Let me go into, let's say, the workbook view itself.

And similarly, I'm getting the workbook here — you can even do a find-one; in this case I only want one workbook. And now I want the debts for this workbook, so I do a live query from the debts collection where the debt's workbook ID is equal to the ID of the workbook we're looking at.

And then, right now, I'm ordering the debts by name. Later in this component I'm processing these locally to do some ordering by balance and things like that. So yeah, you can still process the data locally once you get it from useLiveQuery as well. Now, one thing I don't have an example of in this app is one of the more powerful mechanisms of useLiveQuery in particular: joins.

This is a very common thing that you might want or need to do on data. For instance, let's say you've got user posts: you've got a collection of users and a collection of posts. You can then say, I want to query from users, join it on posts where the user ID equals the post's user ID, and you get back a result set of rows with those joined together. So you don't have to build out endpoints for users, endpoints for posts, endpoints for user posts, and do all that joining yourself. This is something we would often do in GraphQL — make these fields within fields and do those joins there. In this case, you could simply expose the raw data underneath the hood — again, authorized and limited per user — for users and posts, and then join that on the client. And again, it's supposed to be very, very efficient, and you can do all sorts of things.
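The client-side join described above can be sketched in plain TypeScript. TanStack DB's live queries do this incrementally under the hood; this sketch just shows the shape of the result, and the `userId` foreign key is an assumption:

```typescript
// Plain-TypeScript sketch of the users/posts join done on the client.

interface User {
  id: string;
  name: string;
}

interface Post {
  id: string;
  userId: string;
  title: string;
}

// Inner join: one row per (user, post) pair where post.userId matches.
function joinUserPosts(users: User[], posts: Post[]) {
  return users.flatMap((user) =>
    posts
      .filter((post) => post.userId === user.id)
      .map((post) => ({ user, post }))
  );
}

const users: User[] = [
  { id: "u1", name: "Ada" },
  { id: "u2", name: "Grace" },
];

const posts: Post[] = [
  { id: "p1", userId: "u1", title: "Hello" },
  { id: "p2", userId: "u1", title: "World" },
];

// Two rows, both for Ada; Grace has no posts and produces no rows here.
// A left join would keep her with a null post instead.
const rows = joinUserPosts(users, posts);
```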

There's — I didn't even know there are left joins. There are left joins on here. Oh gosh, wow, they have a lot of different types of joins. Subqueries — that's interesting. A lot of different things. It's not trying to replicate SQL, but it is trying to provide you the ergonomics you might be used to and the things you want to see, because you have data that can be changing in multiple places. Let's go to the simple example again here.

We've got user posts. Well, in this case, a user's name could change, their profile picture could change, a post could change — its title, its body, etc. Anytime any of that data changes in either of those connected relationships, you're going to get back updated data from your live query, which is really, really cool. Again, if you're implementing something like this via GraphQL subscriptions, most of the time when some related model changes, you're not going to get a subscription update — unless you really go to the ends of the earth to figure out all the different related things to trigger a subscription update for at the application layer. It's just not often going to work like that. So being able to handle that via TanStack DB — handling those disparate updates coming in across a variety of models — is really, really cool.
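The point about related-model changes can be made concrete: rerunning the same join after a change to the *other* side of the relationship yields updated rows, with no per-model subscription plumbing. TanStack DB performs this recomputation for you (incrementally); this sketch just reruns a plain join by hand:

```typescript
// Why live queries help: a change on a related model (the user's name)
// shows up in the joined rows simply because the query recomputes.

interface User {
  id: string;
  name: string;
}

interface Post {
  id: string;
  userId: string;
  title: string;
}

function joinUserPosts(users: User[], posts: Post[]) {
  return users.flatMap((user) =>
    posts
      .filter((post) => post.userId === user.id)
      .map((post) => ({ userName: user.name, title: post.title }))
  );
}

let users: User[] = [{ id: "u1", name: "Ada" }];
const posts: Post[] = [{ id: "p1", userId: "u1", title: "Hello" }];

const before = joinUserPosts(users, posts);

// A change on the *related* model arrives via sync...
users = users.map((u) => (u.id === "u1" ? { ...u, name: "Ada L." } : u));

// ...and rerunning the query reflects it in the joined rows, even though
// no post changed.
const after = joinUserPosts(users, posts);
```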

So that is a whirlwind tour of TanStack DB and Electric SQL. Again, as far as web development goes — or even beyond web development; this doesn't have to be web, it could be React Native as well — this is one of the things I'm most excited about right now. Live collaborative data is something we're all kind of working towards. Apps are expected to be collaborative, they're expected to be reliable, and they're expected to be fast, and it's hard to achieve all of those things at once with the traditional tools we've had. This provides a really nice, incrementally adoptable solution that builds off of tools you're most likely already using. A lot of you are probably using Postgres; a lot of you are probably using React. So there aren't a lot of requirements to get started. It doesn't have to take over everything — you can bring it in piece by piece — and, again, it doesn't require you to follow any specific authorization or authentication mechanism. You can build that in for the needs of your app.

So give this a try. Check it out. I'm going to post the repo for this example app on GitHub and link to it in the description below. I am also still new to this, so I'm looking forward to learning more and more about it. It's being rapidly developed by some incredibly smart people, and I'm excited to see what comes next.

Friendly reminder: if you made it this far in the video, thank you for watching. I am hiring TypeScript backend and front-end developers, so check the description for those job posts if you are interested. We are hiring remote-first positions within the US, so if that may be you, I'd love for you to submit an application. All right, have fun exploring. I'll see you in the next video.
