Google Cloud Summit Paris - Cloud Spanner 101: Google's mission-critical relational database
You might remember me from two of the keynote demos I did this morning. My job is Cloud Developer Advocate; what we understand under that title is basically either "software developer with a license to speak" or, how I like to introduce myself, the bi-directional channel between you, the developers who build on our platform, and the developers inside Google who build the platform. So please feel free to reach out to us, to me personally you can reach out on Twitter, and we'd like to hear your stories: how you use Google Cloud Platform, where you think it's complicated, and where we could do better. Only with your feedback can we actually improve our products.

Today I'm going to talk a bit about Cloud Spanner: on one hand about the history of Cloud Spanner and how we arrived at it inside Google; I'll provide some explanation of what Cloud Spanner is; I'll also go down to the architecture level and talk about the technologies and the hardware we use to actually make possible what we can achieve with Cloud Spanner; and at the end I'm going to do some coding and show you a demo.

Now, when I talk about Cloud Spanner and look at its history, we have to go back around 12 years, to 2005. Google at that point had been around for a couple of years and our business was growing massively. We had our AdWords system running on the back of a sharded MySQL system. This MySQL system was sharded on the customer ID, and we had to reshard the system several times. The last time we resharded the system, it actually took several years, and that was not a solution we could go forward with. With the way we were growing, this was not sustainable. So we needed something that could scale to the point we needed: something that scales horizontally, because we had a lot of customers to onboard quickly. We needed a system that is still strongly consistent and has the relational database attributes, like ACID transactions. We were dealing with money, we were dealing with budgets, we had customers, we had partners, we had agencies, so eventual consistency was not an option for us. Also, every minute the system was down, we, and also our customers, were losing money. So we couldn't accept any downtime: we needed a system where we can make changes, make updates, and apply security patches without incurring any downtime. Prior to Spanner there was no system that could provide all these qualities, and that's the reason we built it ourselves.

Now, if we compare Cloud Spanner to what we have internally: you get the same system, the same performance, and the same consistency as the internal Spanner that we are using. It's the same site reliability engineers watching our systems that are also watching your systems. As mentioned already, we needed ACID transactions and we needed global consistency.
We actually have the highest standard of consistency: our consistency model is external consistency. One important part was also that we wanted to tap into the knowledge and the experience of our developers. We didn't want to put a system in their hands which is completely different from everything they have been dealing with all their developer life. So for us it was really important that certain semantics are the same as you would think of them in a traditional relational database system. On top of this we added things like automatic and synchronous replication, so Spanner can span multiple regions, and we can actually span globally. For a regional instance, that means we provide an SLA of four nines of availability, which is about 4.38 minutes of allowed downtime per month. For our multi-regional setups, where you span your database over multiple regions, and we'll talk a little bit about what that means, we actually provide an SLA with five nines of availability, which boils down to about 26 seconds of allowed downtime per month.

Another really important aspect for us when we were building Cloud Spanner was to make it open, to enable customers and developers to adopt it easily. When we started, and when we were working with our early-access customers, we were focused on making it as easy as possible to adopt. Some of these things are: we really use ANSI 2011 standard SQL, which we actually use across most of our database products within Google; we added enterprise features like encryption, audit logging, and Identity and Access Management; and we spent quite some time and invested resources in building our client libraries in the major languages, and there are more and more of these client libraries coming up that connect automatically to Cloud Spanner.
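Those availability numbers are just arithmetic on the seconds in a month. A quick sketch of my own (not from the talk), assuming a 30-day month; published SLA figures may round slightly differently:

```java
// Allowed downtime per month for a given availability SLA.
// Assumes a 30-day month (2,592,000 seconds).
public class SlaMath {
    static double allowedDowntimeSeconds(double availability) {
        double secondsPerMonth = 30 * 24 * 3600;
        return secondsPerMonth * (1.0 - availability);
    }

    public static void main(String[] args) {
        // Four nines (regional) and five nines (multi-regional):
        System.out.printf("99.99%%  -> %.1f s/month%n", allowedDowntimeSeconds(0.9999));  // ~259 s, a bit over 4 min
        System.out.printf("99.999%% -> %.1f s/month%n", allowedDowntimeSeconds(0.99999)); // ~26 s
    }
}
```

So each extra nine cuts the allowed downtime by a factor of ten.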
Currently, in a read-only version, we also have a JDBC driver for Cloud Spanner, so you can connect your favorite BI tools directly to Spanner and use them.

Now, how does it compare? One of the things we always get asked about, or compared to, is on one hand the traditional relational database systems, and on the other hand the NoSQL, horizontally scalable database systems. As you see here, we put up a small matrix of features to compare. If you look at a traditional RDBMS and you outgrow that system, usually what you do is build a master-replica setup: you have your master, you have a failover instance, and you might have some read replicas to free your master of read capacity so you have more write capacity on the master. If you then want to do any kind of maintenance, usually you have to fail over to the failover instance, for example for security patches or a version upgrade on the master. That incurs downtime, loses you money, and is quite some operational risk.

On the other end, if you look at the NoSQL systems: if you basically outgrew a sharded traditional RDBMS, many went to NoSQL and looked at things like Cassandra or the other database systems out there in the NoSQL space. But what that meant is: yes, you get horizontal scalability, but usually only eventual consistency, and most of the time, to get the consistency that is still required in many businesses, you end up building the consistency and transaction logic in your application layer. You pull that logic up into your application, and that makes it quite complex, difficult, and risky. With Cloud Spanner you can basically move this logic back into the database.
You still have the qualities of a traditional database system, ACID transactions, with the scalability of a NoSQL system.

The typical workloads that we're seeing, especially with our early-access program members, can be put into four categories. The first one is transactional systems: think of inventory management or financial transactions, where you basically outgrow the size that your traditional database system can handle. That is where Spanner can come in: you still have the need for transactions, but your traditional system can't handle the size of the database, and with horizontal scalability we can also scale with the data size. Now, one of the things we get asked often is: is Cloud Spanner only for big databases? And I have to say no. Even if you have a small database that is only a couple of gigabytes, but you have a lot of traffic on it and you outgrow your solution in terms of the read and write transactions you can run, that is another use case for Cloud Spanner: scaling read and write transactions. The third is mission-critical applications: you really need a database system that is available at four nines or five nines and you just can't make any sacrifices there. Again, that is where Cloud Spanner comes into play: the way we architected it enables high availability and disaster recovery, and it enables you to build a global data plane. If you think, for instance, of a supply chain management system, you really want to know which items are still in storage and which items you have sold, and you want a view of your data that is as accurate and as recent as possible. And the fourth is systems where you traditionally ETL data around.
For instance, you have a user base in a MySQL or Postgres database, you have some log data or audit data in a Cassandra cluster, and at the end of the day you ETL all of that into another system and run analytics on it. This is again a use case for Cloud Spanner: since we can scale on reads and writes as well as on storage, you can combine these data sets, have them co-located in one database, and do all the analytics there.

To talk a bit more about the architecture of Cloud Spanner, I first want to synchronize on the terminology, so to say. Everything in Cloud Spanner starts with an instance. The instance is where you define how many nodes you want, so basically the horizontal scale of your instance and its compute and storage capacity. In this instance you can create a number of databases; currently the limit is set to 100 databases per instance. In these databases you can create tables, currently about 2,048 tables per database. That is what you see as a developer and as a user. To enable us to distribute your data and to distribute and scale reads and writes, we have another concept underneath, an internal concept we call a split. What is a split? If you look at a table and its primary key, we divide the table lexicographically on the primary key, you could also say in alphanumeric or alphabetical ordering: we use ranges of the key and assign these splits to worker nodes.

How does that look? If you look at an instance view of a Spanner instance, a regional instance is distributed over three zones, with the compute on top, as you can see. One Spanner node means you get one compute node in each zone, so for a regional instance one node means three compute nodes across three zones, and then you have storage in all of those zones that can be used by these nodes. Now, to distribute the reads and writes, you take the splits and assign them to a set of nodes across these three zones. But I mentioned earlier that we are globally consistent, and one criterion that is really, really important is that there is never more than one group of nodes responsible for a split, and we need exactly one member of this group to be the leader. What we use to do this is, on one hand, Paxos: the Paxos algorithm basically allows us to elect a leader by majority vote.
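To make the split idea concrete, here is a tiny illustrative sketch of my own (not Spanner's actual implementation): primary keys are partitioned into lexicographic ranges, and each range is owned by exactly one group. The range boundaries below are invented.

```java
import java.util.TreeMap;

// Toy model of split routing: a table is divided into lexicographic
// ranges of the primary key, and each range maps to one split, which
// in turn is owned by one Paxos group.
public class SplitRouter {
    // Maps the start key of each range to its split id.
    private final TreeMap<String, Integer> rangeStarts = new TreeMap<>();

    public SplitRouter() {
        rangeStarts.put("", 0);   // split 0 owns ["", "h")
        rangeStarts.put("h", 1);  // split 1 owns ["h", "q")
        rangeStarts.put("q", 2);  // split 2 owns ["q", ...)
    }

    // floorEntry finds the range whose start key is <= the row key,
    // so every key maps to exactly one split.
    public int splitFor(String primaryKey) {
        return rangeStarts.floorEntry(primaryKey).getValue();
    }
}
```

Every row key lands in exactly one range, so exactly one group is ever responsible for it; that invariant is what the leader election has to preserve.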
But we need another thing to make sure that we have exactly one Paxos group, and exactly one leader, for any split, and what we use there is something we invented called TrueTime. What is TrueTime? If you synchronize your watch with your neighbor's, then go away, come back together in a week, and compare your watches, it is highly likely they have drifted. They drift because of cosmic rays, they drift because of different temperatures you have been in; there are many factors that make watches drift. To ensure we have exactly one leader, we basically need something that is close to a global wall-clock time. And as I mentioned, Spanner is distributed across the globe, so we need something that lets us synchronize time across the globe. As I just said, it's not possible to synchronize a watch in one place and distribute such watches to all our data centers: they would drift apart and no longer be synchronized, and then what could happen is that you have two leaders for the same split, which means your data can become inconsistent. So we needed something to quantify the drift between these data centers, and we needed a technology to synchronize all these clocks in our data centers.

What TrueTime does is this: it doesn't give you a specific time, TrueTime gives you an interval, a lower bound and an upper bound. The lower bound of your TrueTime timestamp is a timestamp that you know has passed everywhere in the world; you know that for sure. The upper bound is a timestamp that has passed nowhere in the world, and you can be sure that is the case. We use this to elect the leaders and make sure that we have exactly one leader and one Paxos group per split, and by that we can ensure global consistency.

How does it look in hardware? In all our data centers we have atomic clocks and GPS receivers, time masters so to say, and we use GPS to synchronize the clocks around the globe. Then we have the atomic clocks, which we synchronize with these GPS time masters. Every time we ask for a TrueTime timestamp, or interval, we consult a subset of the time masters available in the data center, and if there is an outlier we can say: this atomic clock is bad, we don't care about it anymore, and we use the ones that look more reasonable. We synchronize these clocks, as described in our paper, roughly every 30 seconds, and the assumed clock drift is about 200 microseconds per second. So the interval you get can come down to very low single-digit milliseconds. That basically means that, all over the world, we can get an interval where we know the lower timestamp and the upper timestamp are not further apart than maybe one or two milliseconds, and we know that the lower bound has passed everywhere and the upper bound has passed nowhere; this enables this tight consistency.

Some of you might ask: what if all my atomic clocks fail in the data center, or all my time masters fail in the data center? We can still get these TrueTime intervals from a different data center, and we account for the transmission time and so on, which basically means the interval becomes bigger, maybe ten milliseconds, maybe a hundred milliseconds. That means, yes, our database system slows down, but it does not stop: we can still operate, we just slow down a bit, and that, I think, is pretty amazing.

Now let's have a look at a more practical example, and I like to explain this based on the life of a query. In this case I am selecting all events where the name is "Cloud Summit Paris", so think of the agenda we would get back for this. The client sends the request.
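A conceptual model of a TrueTime interval, as my own sketch (real TrueTime is a data-center service, not a library, and the epsilon numbers here are invented):

```java
// TT.now() returns an interval [earliest, latest] rather than an instant:
// 'earliest' has definitely passed everywhere in the world, while 'latest'
// has definitely passed nowhere yet. The uncertainty epsilon grows with the
// time since the last clock synchronization.
public class TrueTimeModel {
    public static final class Interval {
        public final long earliestMicros;
        public final long latestMicros;
        Interval(long earliest, long latest) {
            this.earliestMicros = earliest;
            this.latestMicros = latest;
        }
    }

    // Widen the local wall-clock reading by the current uncertainty.
    public static Interval now(long wallClockMicros, long epsilonMicros) {
        return new Interval(wallClockMicros - epsilonMicros, wallClockMicros + epsilonMicros);
    }

    // A timestamp is guaranteed to be in the past everywhere only once it
    // falls below the earliest bound; waiting for that condition is the
    // basis of how Spanner orders commits globally.
    public static boolean definitelyPassedEverywhere(Interval now, long tsMicros) {
        return tsMicros < now.earliestMicros;
    }
}
```

The smaller epsilon is, the shorter that wait, which is why the clock hardware matters so much.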
In our case it comes in at one of the followers of the Paxos group: we have a leader in zone 2 and two followers, in zone 3 and zone 1. The request comes in through the follower in zone 1, and that one will ask the leader: "Do I have the most recent data? The timestamp of the data being requested is this; is that the latest timestamp you have?" If the leader says "yes, you're current", we can serve that data straight from the follower, and we're done. In the case that the request comes in and my follower, my replica, doesn't have the data yet, the leader will respond: "No, you don't have the most current data, but the data is already in flight, so just wait until you see that timestamp for your data." Once the follower sees that timestamp it can serve the data, because then it knows for sure it has the most current data; so it simply waits and then sends the response to the client.

Another really powerful feature in Spanner is that you can do consistent stale reads. What does that mean? A stale read basically means that you provide a staleness bound, or a specific timestamp within the last seven days, where you say: "I want to see the state of the database at that specific timestamp," or: "I want to see the state of the database, and I don't care if it's up to, let's say, fifteen seconds old." So in this case I'm sending a request from my client, it comes in again through one of the replicas, and I say it's fine if this data is 15 seconds old. In the vast majority of cases this data will already be at the replica, and the replica will have a timestamp where it knows it can serve this data within the 15-second staleness bound, so it can respond to the client directly; we don't have to ask the leader at all. This is especially powerful if you have a Cloud Spanner instance that is globally distributed: say you have your read-write quorum in the US and read replicas in Europe and in Asia. If you have Asian clients coming in, and they don't need the most current data but they do want a consistent state of the data, they can be served immediately from the read replicas in Asia without consulting the read-write quorum in the US, and that speeds up your queries quite a bit.
Now, if we look at a read-write transaction: you can imagine that in a read-write transaction I need the leader involved, since I'm changing state and I'm globally consistent. So in this case we route the request directly to the leaders of the Paxos groups that are involved. If multiple Paxos groups are involved, we use two-phase commit to coordinate and synchronize between them; if a single Paxos group is involved, we can just do the transaction within that group. In my example here I'm reading because I want to update my talk, as you can see. The first thing I do is a read: I get my query result and I'm already acquiring some locks. Then I make some changes, writing the mutation to a buffer, and then I commit the transaction. The leader at that time sends out the write requests for this new data to the followers, the replicas, and as soon as a quorum, the majority of that Paxos group, responds with "yes, I have written that data", we can respond to the client and say "your transaction has been successful", and we release the locks.

As for the data format, or the layout: what we have in Spanner is pretty much what you're used to from a traditional database system. We have tables with columns that are strongly typed. As an example here, I have a Singers table and an Albums table, and I want to join them, for instance by the primary key; in this case I'm using the SingerId. In traditional database systems you have something like a foreign key constraint, where you can say: if I delete a singer, I want all their albums deleted as well. In a distributed system this becomes really complicated and very harmful, so we don't support exactly those semantics, but we do support something a little like foreign-key-constraint semantics. In case you have tables that you join a lot, and where you want to be sure the data in these tables is co-located, we have a construct called interleaved tables. With interleaved tables, you interleave your child table into a parent table based on the primary key: the child table's primary key has to be prefixed with the primary key of the parent table, and then we can store this data right where the parent rows are. They end up in the same split, and as you know, a split is managed by one compute node, one Paxos group, so we don't have to go to different compute nodes and gather the data together every time we do this join. You have to be a bit careful, though: don't overuse interleaving, because, as I just mentioned, it all ends up in one split, so you can run into things like hotspotting, or you can actually exhaust the storage limits of a split.

What does it look like in SQL? In the DDL statement you basically just add the "interleave in Singers" clause to interleave into the parent table, and you can actually have multiple levels of this interleaving if you need to. Another powerful feature that I mentioned earlier is online schema updates. If you want to evolve your application and add columns or things like that, you can do that while all your load is going on, and it is done in a transaction, so schema updates are transactionally consistent. For instance, if I'm changing my Singers table and adding a column Age, I can do that inside a transaction.
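As a concrete illustration, the interleaving DDL looks roughly like this (the table and column names are my own example, modeled on the Singers/Albums schema from the talk):

```sql
CREATE TABLE Singers (
  SingerId   INT64 NOT NULL,
  SingerName STRING(MAX),
  Age        INT64
) PRIMARY KEY (SingerId);

-- The child table's primary key is prefixed with the parent's key,
-- so each album row is stored in the same split as its singer row.
CREATE TABLE Albums (
  SingerId   INT64 NOT NULL,
  AlbumId    INT64 NOT NULL,
  AlbumTitle STRING(MAX)
) PRIMARY KEY (SingerId, AlbumId),
  INTERLEAVE IN PARENT Singers ON DELETE CASCADE;
```

The ON DELETE CASCADE clause is what gives you the foreign-key-like delete behavior mentioned above.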
There are a couple of schema design don'ts, which come from the nature of this being a distributed, horizontally scalable database. If you come from MySQL or PostgreSQL, this is not a lift-and-shift: you can't take the application you built on top of MySQL or Postgres and just move it over; in most cases you have to do some redesigning and make some changes to your schema. One of the things that are really harmful in Spanner is something like an auto-increment integer: think of the auto-increment feature in MySQL, where every time I insert a row it automatically gets assigned a new ID. With that, you have monotonically increasing numbers, and as I mentioned earlier, we split by lexicographically sorted ranges of the key. You can imagine: if you have a monotonically increasing ID, or timestamps, every time you insert data it gets inserted into your last split. You are hotspotting on your last split, even though you might have a thousand splits for your table; you always insert the row with the most current timestamp into the last split, and by that you restrict your scalability and hotspot that split. The same problem appears with interleaving: if you interleave a lot of data into one parent row, you interleave all this data into one split, and again you can cause hotspotting, so be careful with how deep you go and how much data you interleave into a parent table. To give a slightly more graphical example: if you have a timestamp as the primary key, a last-access timestamp for instance, and then a user ID and some other data in each of your rows, then, since it's lexicographically ordered, all inserts go into one split. So what can you do to avoid this hotspotting? One of the things you can do is use UUIDs.
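A toy sketch of my own (not from the talk) that shows the difference: a key that starts with a monotonically increasing timestamp always sorts to the end of the keyspace, while a shard prefix derived from a hash spreads inserts across ranges. The key format and shard count are invented for illustration.

```java
// Toy comparison of key designs for a distributed, range-partitioned store.
public class KeyDesign {
    static final int NUM_SHARDS = 8;

    // Hotspot-prone: "<timestampMillis>#<userId>" is monotonically increasing,
    // so every new row sorts after all existing rows (the last split).
    public static String timestampKey(long timestampMillis, String userId) {
        return String.format("%020d#%s", timestampMillis, userId);
    }

    // Spread: "<shard>#<timestampMillis>#<userId>", where the shard prefix
    // is derived from a stable hash of the user id, distributing inserts
    // across NUM_SHARDS key ranges.
    public static String shardedKey(long timestampMillis, String userId) {
        int shard = Math.floorMod(userId.hashCode(), NUM_SHARDS);
        return shard + "#" + timestampKey(timestampMillis, userId);
    }
}
```

The trade-off of the sharding pattern is that range scans by time then have to fan out over all NUM_SHARDS prefixes.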
Preferably use version 4 UUIDs as primary keys. You can also add a sharding counter, or shard ID, in front of your data, like adding a shard ID in front of your timestamp. As an example, here is the same table as you saw before: we have our last-access timestamp and our user ID, and we just add a shard ID in front. By adding the shard ID we can now distribute the load of inserts or updates on these rows across multiple splits, and we are no longer hotspotting on one split.

With this, I hope you got some insight into how Cloud Spanner works under the hood and what to pay attention to, and now I want to show a little demo of how you can get started with Cloud Spanner. Please switch to the demo laptop. OK, perfect. Your first stop is cloud.google.com/spanner, which is basically your intro page into Cloud Spanner and has a multitude of information linked from it. I want to point out one thing down here: there have been a couple of case studies with our early-access members for Cloud Spanner, and one of them is Quizlet, who moved from a sharded MySQL system to Cloud Spanner. They talk very technically and in great detail about their transition from a sharded MySQL system to Cloud Spanner, so if you are in the situation that you have a sharded system and you want to explore Cloud Spanner, this is a good place to start looking at what to pay attention to.

Another thing we sometimes get asked is: does Cloud Spanner defeat the CAP theorem? If you look at distributed systems, you know Eric Brewer, who came up with the CAP theorem, where you basically have to choose two out of three of consistency, availability, and partition tolerance.
So you can't have a consistent, available system that is also partition-tolerant. Now we get asked: does Spanner actually defeat the CAP theorem? And the answer is no. If Cloud Spanner becomes partitioned to the point that we no longer have a majority for our Paxos groups, we default to being a consistent system and become unavailable. But the likelihood of that is really, really low, otherwise we wouldn't give out the SLA of four nines for regional or five nines for multi-regional. Because of the system and the hardware we have underneath, and the way Spanner was architected, the likelihood of a "split brain", as it's sometimes called, is very, very low.

Another thing I want to point out: if you are looking into building a new application, or migrating an application, please read our white papers, which are linked from the documentation. They talk a lot about dos and don'ts in terms of schema design and query design. There are some specifics to Cloud Spanner, and we want to enable you to get the most out of it.
For those who saw the keynote demo this morning, you already know how to create a Spanner instance; I just want to show that really quickly again. This is basically the home screen once you've created a project in the Cloud Console. If you want to go to Spanner you have multiple options: you can use the hamburger menu here on the left and look for Cloud Spanner; you can type into the search field up here; or, if you're a Vim user, you know how to search in Vim: you press slash, it automatically focuses the search box, you type "spanner", click on it, and you're at Spanner. To create an instance, I click "Create an instance", name it, select a configuration, let's pick Europe, say five nodes, and create. And, as opposed to the architecture overview I've shown, no, I have insufficient quota. Cool, let's do one node, and fingers crossed that works.

As you've seen in the architecture diagram, we are actually distributing and separating compute and storage to enable the horizontal scalability. To create a database we do the same thing: I click create, say I want a "demo" database, and I can use the dialogs to create a schema. For instance, if I want a Singers table, I create a Singers table, add a column, for instance Age of type INT64, select the primary key, and click create; and again, within a couple of seconds, you have your database. Just to show you: I can edit the schema again, either with DDL or with our dialog, and add, for instance, a City column of type STRING, and save. As you will see in a second, a box comes up showing that this schema update is basically running in a transaction; as soon as that transaction has finished, the new schema will be available.
Now I want to switch to a somewhat more complex schema, and again you might remember this one from the keynote demo. I basically have the scenario of a ticket broker: you have event organizers who want to sell tickets, you have customers, or fans, who want to buy tickets, and I came up with a sample application to represent such a system. What you have in this is, on one hand, tours, the Springsteen tour, the Sting tour, or whatever they're called: a multi-event, which includes multiple events in different cities. Then you have the events; for each event we need a venue, so you see venues up there as well; and each of these venues has different seating categories and configurations. You can, for instance, have general admission, premium seating, and VIP, and all of these have different prices assigned, so if you think about it, it gets complex quite quickly. I did one thing here, as you can see: I interleaved the seating category into the venue, because I'm doing that join very often, so I wanted the seating categories co-located with my venues; that's where I used interleaving in this case. All the other things are basically standalone tables, and I can join them by IDs as I wish.

The first thing that I want to do, and in my case here I'm using Java, is to show you how to get started with Java and connecting to Cloud Spanner. Just to check: how many Java developers do we have here in the room? OK. How many Go developers? Ah, I see, the cool kids on the block. Python? Oh wow, I should have done a Python demo. All right, next time. So in this case, I hope the Python and Go folks bear with me and can hopefully read my Java code. The first thing we want to do is authenticate.
Context, and then, I create, a database. Client so as you can see here I'm creating my client. Next. Thing is what I want to do is in this case it's a read-only transaction.
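As a rough sketch, that setup, plus the query that comes next, might look like the following with the google-cloud-spanner Java client. The instance, database, table, and column names here are placeholders I made up, not the demo's actual ones, and you need a real GCP project and the client library on the classpath for this to run.

```java
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.Timestamp;
import com.google.cloud.spanner.DatabaseClient;
import com.google.cloud.spanner.DatabaseId;
import com.google.cloud.spanner.ResultSet;
import com.google.cloud.spanner.Spanner;
import com.google.cloud.spanner.SpannerOptions;
import com.google.cloud.spanner.Statement;

public class SpannerQuickstart {
  public static void main(String[] args) throws Exception {
    // Application Default Credentials: picked up automatically on GCE/GKE,
    // or locally once the gcloud SDK is set up.
    GoogleCredentials credentials = GoogleCredentials.getApplicationDefault();

    SpannerOptions options =
        SpannerOptions.newBuilder().setCredentials(credentials).build();
    Spanner spanner = options.getService();

    // Which instance and database to talk to; the project ID comes
    // from the authentication context.
    DatabaseClient db = spanner.getDatabaseClient(
        DatabaseId.of(options.getProjectId(), "demo-instance", "ticket-broker"));

    // Query window: from now until 24 hours from now.
    Timestamp now = Timestamp.now();
    Timestamp tomorrow =
        Timestamp.ofTimeSecondsAndNanos(now.getSeconds() + 24 * 60 * 60, 0);

    // Parameter placeholders in the SQL, bound below.
    Statement query = Statement.newBuilder(
            "SELECT MultiEventId, Name FROM MultiEvents "
          + "WHERE StartDate >= @start AND StartDate < @end LIMIT 10")
        .bind("start").to(now)
        .bind("end").to(tomorrow)
        .build();

    // singleUse(): a single-use read-only transaction, so no locks are taken.
    try (ResultSet rs = db.singleUse().executeQuery(query)) {
      while (rs.next()) {
        // Columns can be read by position or by name.
        System.out.println(rs.getString(0) + ": " + rs.getString("Name"));
      }
    }
    spanner.close();
  }
}
```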
So: a read-only transaction. The advantage of a read-only transaction is that I don't need to acquire any locks, so I can just do this. I want a read-only context, and what you basically do is call singleUse() on the database client for your read-only transaction. Then, of course, I want to run a query. In this case I'm looking for multi-events happening from right now until tomorrow, that is, the next 24 hours. As you can see, I'm using a statement builder: I can put parameter placeholders into the SQL and then bind those parameters down here. For the start I bind the current timestamp, and for the end I create a new timestamp to which I add 24 hours. Last but not least, I of course want to show the results and iterate through my result set. Just as you know it from an Iterable, I iterate through the rows until none are left, and I can read the values of each row either by position, so rs.getString(0), which means I want the first column of my query, or by column name, so I could for instance use "MultiEventId" to get that column's value for the row. Then I run all of this and, fingers crossed, it hopefully works. I also limited it to ten results, so that's what we see at once. Now, this took a little while, and that's because creating the connection to Cloud Spanner takes a bit of time; if I ran it multiple times over a standing connection it would get much, much faster, down to single-digit-millisecond query response times.

Now, I mentioned that we can do stale consistent reads. What I do is simply modify my single-use read-only transaction and add a time bound: as you can see up there, I'm adding a timestamp bound with a maximum staleness of 15 seconds. That way, if my read lands on a follower, a replica, the replica can respond with its own data right away, so we can just run this again.
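To picture the bounded-staleness rule just described: a replica may answer the read from its own data whenever that data is no older than the allowed window, otherwise it has to catch up first. The actual API call is db.singleUse(TimestampBound.ofMaxStaleness(15, TimeUnit.SECONDS)); the sketch below only models the decision arithmetic in plain Java, not the client internals.

```java
import java.time.Duration;
import java.time.Instant;

public class StalenessBound {
    // A replica can serve a bounded-staleness read locally iff the data
    // it has applied is at most `maxStaleness` old.
    static boolean replicaCanServe(Instant now, Instant replicaAppliedAt,
                                   Duration maxStaleness) {
        return !replicaAppliedAt.isBefore(now.minus(maxStaleness));
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2017-10-01T12:00:00Z");
        // Replica is 10 seconds behind: within the 15-second bound, serve locally.
        System.out.println(replicaCanServe(
                now, now.minusSeconds(10), Duration.ofSeconds(15)));
        // Replica is 30 seconds behind: outside the bound, it must catch up.
        System.out.println(replicaCanServe(
                now, now.minusSeconds(30), Duration.ofSeconds(15)));
    }
}
```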
We get some data back here as well. Now, the next thing I want to show is a write transaction. I have all this set up here already, and the setup is just a repetition, so I can jump right away to the insert transaction. What I do here is a write-only transaction, and the first thing I have to create is a mutation. We don't currently have DML, so you can't do INSERT INTO and things like that, but the client libraries give you an idiomatic way of inserting data. What we use is called mutations, and I want to elaborate on this a little. In Spanner, a mutation is a change to a single cell, a modification of one cell. That means if you have a table with five columns and you insert a row, that's five mutations. Why do I mention this? Because transactions have a limit of 20,000 mutations per transaction, and you have to be aware of it when you add indexes to your tables. Take our five-column table and add an index covering three of the five columns: each row we insert then costs eight mutations, five for the table and three for the index. It's really important to remember, if you ever see an error like "mutation limit exceeded", that a mutation is not one row; one row costs one mutation per column written, plus the index entries, and so on. In this case I'm creating an account, adding my name, email address, and so forth, and then I run it. There's one thing I want to mention: here you see I'm doing dbClient.write() and providing these mutations, just writing them. This returns the commit timestamp at which the data was written.
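That mutation arithmetic is worth pinning down. A tiny sketch follows; the helper names are mine, while the 20,000-per-transaction limit and the columns-plus-indexed-columns rule are as described in the talk.

```java
public class MutationBudget {
    static final int MUTATION_LIMIT_PER_TXN = 20_000;

    // One mutation per table column written, plus one per indexed column
    // touched by the row (the counting rule described in the talk).
    static int mutationsPerRow(int tableColumns, int indexedColumns) {
        return tableColumns + indexedColumns;
    }

    // How many such rows fit into one transaction before hitting the limit.
    static int maxRowsPerTransaction(int mutationsPerRow) {
        return MUTATION_LIMIT_PER_TXN / mutationsPerRow;
    }

    public static void main(String[] args) {
        // Five-column table plus a three-column index: 5 + 3 = 8 mutations per row.
        int perRow = mutationsPerRow(5, 3);
        System.out.println(perRow);                        // 8
        System.out.println(maxRowsPerTransaction(perRow)); // 2500
    }
}
```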
We now have this row in the database, and I can show that really quickly. Here is my demo instance; if I go to the accounts table and then to the data, we see the row that just got inserted. Now, if I want to do a read-write transaction, I have to create a transaction context on my client and then do the read, either through the read API or with SQL. In this case I'm doing the read here, and then my mutation. But there's something special about this construct: as you can see, we call readWriteTransaction() and provide a TransactionCallable to the TransactionRunner. Why are we doing this? We hand the client library the complete function for this transaction, which enables the client side to rerun your transaction in case it fails. You basically give us everything that has to be done in the transaction, and we can rerun it on the client side should there be any problem. So again, if we run this, I'm just selecting my account and then changing my name from Robert to "Spanner Guru". If we go back to the account table and to my data, we now see that the name has changed. I could also keep doing this programmatically, but we just recently added a feature to the Cloud Console that lets you edit data right there, or delete it; in my case, I want to clean up after myself.
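Why hand the client a callable? Because only then can it rerun the whole transaction body from scratch. Here is a minimal, library-free sketch of that pattern; the real TransactionRunner additionally distinguishes retryable aborts from genuine errors and applies backoff, so this only shows the shape of the idea.

```java
import java.util.concurrent.Callable;

public class RerunDemo {
    // The caller packages the entire transaction body as a Callable, so the
    // runner is free to execute it again after an aborted attempt.
    static <T> T runWithRetries(Callable<T> body, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return body.call();
            } catch (Exception aborted) {
                last = aborted; // a real runner retries only retryable aborts
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] attempts = {0};
        // The body "aborts" twice, then commits on the third run.
        String result = runWithRetries(() -> {
            attempts[0]++;
            if (attempts[0] < 3) throw new IllegalStateException("aborted");
            return "committed";
        }, 5);
        System.out.println(result + " after " + attempts[0] + " attempts");
    }
}
```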
So I delete this data, and with that, back to the slides.

With one minute left, I just want to briefly mention the other database products in our cloud platform portfolio. They range from in-memory databases all the way to data-warehouse databases. If you build on App Engine, for instance, you can use App Engine memcache. If you're looking at relational, we have a managed database service, Cloud SQL, where you can run MySQL or Postgres as a managed solution, which can be set up with failover and read replicas; but that is a vertically scalable solution. If you need a horizontally scalable solution, for the criteria I mentioned throughout this talk, you're looking at Spanner. If you're looking at NoSQL databases, we have two offerings: on one hand Cloud Datastore, which is more of a document store, and on the other Cloud Bigtable, a key-value store that gives you immense throughput and scale, both on the write side and on the query side, with single-digit-millisecond query response times even with terabytes of data in your database. Cloud Datastore supports you from small applications up to huge ones; just think of Snapchat, which built its entire application on Datastore and App Engine. If you're looking more at object storage, look into Cloud Storage, where you can store blobs: video files, anything. The nice thing about Cloud Storage is that it's backed by Spanner as well, which means that when you store a file and then look, from anywhere in the world, at the state of the files in your buckets, you see a consistent state across the globe. If your data is there, you see it from everywhere, and that's enabled by Cloud Spanner. And if you want to do analytics, we have a cloud-native, fully managed solution called BigQuery. BigQuery is amazing for crunching through terabytes, even petabytes, of data in seconds or very few minutes. So if you're looking to lower your total cost of ownership for analytics, BigQuery is one of the things you really should look into.

With this, thank you very much. I hope you enjoyed this talk, and enjoy the rest of the day.