Building distributed applications with Orleans
Hey, come check out our next show on Orleans with Sergey — it's on distributed computing. Take a look.

Welcome to another episode of the On .NET show. Today we're going to be talking with Sergey about Orleans. I'm Rich. How about you quickly introduce yourself, and then we'll get into the topic?

I'm Sergey Bykov. I've been working on Orleans for many years now, through the whole story of getting into open source and moving forward, and I'm always happy to talk about it.

We actually worked on the open sourcing a little bit together. We learned a lot from you guys — we went a little bit before you. That was fun.

It was more than fun, because when we started talking I was shocked. All this energy — people were glowing about being open source. And you don't just sync code out from an internal back end; you develop it in the open. That's what we learned from you. We're still doing it, and it's been super successful.

On that note, with our project, .NET Core, I'm amazed at the number of community developers working on the project. Have you had any success attracting folks like that?

Yes, we have a great community, with a number of contributors both on the issues side and on pull requests. I keep mentioning this one guy, Dmitri — we've merged over a hundred of his pull requests. He focuses on performance; we counted that through his pull requests alone we doubled our throughput.

Performance is super important. We have somewhat similar things on our project — Ben's performance improvements probably helped you guys in some way. So how about we talk about what Orleans is and what it's for? Also, you know, I just recently talked to Aaron about Akka.NET, so it would be good at some point to describe any similarities or differences.

OK. So how do we explain Orleans in a couple of minutes? A couple of minutes is challenging — let's try.

It all starts in the good old days of client-server. The server held a bunch of objects with state, everything was consistent, and it was easy to manage with locks. That whole model fell apart maybe twenty years ago, when the web started, because the moment you had a second server you had two copies of the world with nothing in common and no coordination. So the world switched to stateless: you have a front end and stateless business logic, and they all talk to storage. Storage is where everything really happens, and it's inefficient because you need to read data for every request. You get conflicts — requests come to different servers — so at the end of the day you overload your storage. That's bad. And of course we threw in a cache — hey, we'll read from the cache directly — but then you inherit the problem of cache invalidation and everything that comes with it, so now you're dealing with two storages.

The Orleans model is based on the idea of putting in a stateful middle tier: a cluster of servers with, essentially, a virtual memory space where these objects live as if they were on a single server. So the old limitations of a single machine, where memory doesn't scale, go away if we move to the cluster model. And if something fails, that's handled automatically.

Now, this has to be more expensive. The nice thing about that first model, the stateful client-server model, is that although it has a bunch of drawbacks, the expense of the system relates almost exactly to the storage and throughput you need. Is that correct?

Yes, that's right.
So with this system, is it like, "well, I need 3x the cost to maintain the system" — or is that not fair?

No, that's not fair. It is more expensive, because you need communication between nodes — serialization and the overhead of messaging — but you don't have to multiply the cost. You can still get, say, ten times the throughput if you have ten servers, or double with double the servers.

I see, that's the idea.

So the picture we arrive at is like this: you have a cluster with many, many independent objects — we call them grains — and these grains live isolated. They still talk to each other, but because they're isolated, we don't care whether they live on the same server or on different servers. From the developer's perspective it's the same. You have this illusion of a single machine, but you program against the cluster.
OK, makes sense.

All right — let's quickly show code. A grain starts with a grain interface, which declares what methods you expose. They all have to be async, because of a single requirement: all calls to grains — these objects — are asynchronous, and the TPL benefits from all of that.

So is there some kind of factory method somewhere that's like, "get me a grain of this type"?

Yes, that's exactly the next thing. How do you call a grain? You say: give me a grain that implements this interface, with this identity. The identity is meant to be application-specific and stable — for a user it can be an email, a social security number, or a phone number. We say "give me a reference for this grain," which in fact is just a proxy object, and you can immediately make a call to that object.

I see — so when you get a new grain, that's a synchronous call, so grains are always immediately available?

Yes, even for new grains.

And then the operations are the things that are async. Are there cases where you can let a grain rest and then get it back, and then that would be an async call, if it's currently residing on one of those other servers — or is that the wrong way to think about it?

I think it will become clear. The implementation is simple: just a class that implements the interface, everything asynchronous. The key thing here is that it's single-threaded, so you don't have to protect this counter with a lock — you can just increment it, guaranteed no conflicts.

Isn't that more just the way async works?

No, it's a guarantee: SayHello will never be invoked on two threads in parallel for the same object.
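A minimal sketch of what that grain code might look like (the interface name, identity, and counter here are illustrative, not taken from the episode; `IGrainWithStringKey`, `Grain`, and `GetGrain` are the standard Orleans types and calls):

```csharp
using System.Threading.Tasks;
using Orleans;

// The grain interface: every method must be asynchronous (Task / Task<T>).
public interface IUserGrain : IGrainWithStringKey
{
    Task<string> SayHello(string greeting);
}

// The implementation: a plain class. The runtime guarantees that calls to
// one grain instance never execute on two threads in parallel, so this
// counter needs no lock or Interlocked.
public class UserGrain : Grain, IUserGrain
{
    private int _calls;

    public Task<string> SayHello(string greeting)
    {
        _calls++; // safe: turns are serialized per grain
        return Task.FromResult($"({_calls}) Hello, {greeting}!");
    }
}

// Caller side: GetGrain is synchronous -- it only constructs a proxy for
// the given identity (here, an email). Only the method call itself awaits.
var user = client.GetGrain<IUserGrain>("alice@example.com");
var reply = await user.SayHello("good morning");
```

The reference always resolves immediately because the grain exists logically whether or not an activation is in memory; the runtime activates it, if needed, on the first call.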
I see. So even if you spawn tasks that run inside, will they not run on another thread — or are you not allowed to do something like Task.Run?

You can run different tasks — Task.Run is an exception; it doesn't respect the scheduler. But there is a way to run multiple tasks. The key is that they will never run on parallel threads within the context of a single object, a single grain. They will always be serialized — one at a time.

OK. But the key here is that this grain is a logical construct. It has this identity — a user's email or phone number — and logically it lives forever. You can always call it at any point; you don't need to create it, you don't need to delete it. You just say "give me a reference" and make a call. The grain may not be anywhere in memory — it may just have persistent state in storage — but the runtime will create an instance when needed, execute the calls, wait for it to be unused for a set period of time, and then remove it from memory because it's idle.

I see. So what I understand from that is: the grain doesn't really persist over time, in the sense that after it's unused, the fact that you might get the same one again is not super meaningful. It's the data that it represents that is the more meaningful part. Is that fair?

Yes and no. The state is the most important part, but the runtime also creates this entity — we call it an activation — an instance in memory, somewhere in the cluster, and it routes all requests for a single user grain to the same physical object. So you have both persistent state and in-memory state, which you can keep there for a long time, and that contributes to low latency and high throughput, because you don't have to go to storage for every request anymore. And you get more than that, because this grain can talk to another grain which is in memory — again without going to storage. It's very, very efficient.

OK, makes sense.

So we end up with this cluster that handles a sliding window of grains, which gives you free resource management. You only use memory for the grains that are in use right now, or were used recently and you want to keep warm in memory. Other than that, you don't have to hold resources — you don't have to keep the state for all your users in memory.

I see. So in the storage world we talk about sparse allocation of drives. It kind of feels like this is sparse allocation, in the sense that the entire universe you might be trying to represent might have a million grains, or actors, in it, but only 10,000 are currently active. But you still represent the fact that the other 990,000 exist — they haven't been deleted in any sense.

Yes, they all exist the whole time. That's why we call them virtual actors — like virtual memory, where state sits in the page file, kind of cold, but you can always address that memory.

That's a good analogy.

And this is actually the difference from Akka.NET, which you mentioned. Akka, or Erlang, follows the traditional actor model: there you explicitly create every actor. You say "create user actor X," and then you usually need to pick a server where it needs to live. That actor may create another actor, and then you need to remove them from memory. So this whole resource management is on you.

I see — it's more explicit.

It's more explicit, it's more work, it's more room to make mistakes, because if something fails you have to recover. Here, in this model, if a server dies — right in this picture — that's just a normal event, because the runtime knows: oh, we lost the actors that were there; we can immediately recreate them on another server.

Right — as long as their state was persisted to a database of some kind, then you're good to go.

Yes. The key is that you don't need to write any application code to react to a server-failure event. You just write code for this class that gets activated, can persist its state, and can do some actions — and that's it.
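A sketch of such a grain class using Orleans' declarative persistence (the state shape and the provider name "Store" are assumptions; `Grain<TState>`, `WriteStateAsync`, and the `StorageProvider` attribute are the standard API):

```csharp
using System.Threading.Tasks;
using Orleans;
using Orleans.Providers;

// A property-bag state class, persisted by whatever provider is configured.
public class UserState
{
    public string Email { get; set; }
    public int LoginCount { get; set; }
}

public interface IUserProfileGrain : IGrainWithStringKey
{
    Task RecordLogin();
}

// "Store" must be configured elsewhere to point at a concrete backend
// (Azure Table, Cosmos DB, MongoDB, ...) -- the name is illustrative.
[StorageProvider(ProviderName = "Store")]
public class UserProfileGrain : Grain<UserState>, IUserProfileGrain
{
    public async Task RecordLogin()
    {
        State.LoginCount++;      // update the in-memory property bag
        await WriteStateAsync(); // one call persists it through the provider
    }
}
```

If this declarative path is too constrained, a grain can equally well talk to any database client directly inside its methods.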
So we saw what that code kind of looked like for a grain — I think it was just doing Console.WriteLine there — but presumably there are reads and writes to storage. Do those go through some kind of layer between the grain and the database, or do I, as the grain developer, completely own writing to the database and the consistency model that I need?

Both choices are available. You can go through the somewhat simplistic declarative persistence, where you say: here is a property-bag class, and this is the state of my grain. Then we have a notion of providers — persistence providers. You declare that your user grain class is associated with, say, Cosmos DB as a provider, or Azure Table, or whatever — MongoDB. Then you just say "write state": you update the properties in the property bag, call a single method, and it persists. But if that's not enough for you — if you want to be more advanced, load partial state, and handle it yourself — you can just write any code and talk to storage yourself.

OK, so a lot of flexibility.

So how about we talk a little bit about where Orleans is currently used today. You touched on this a little earlier: in computing today the main cases are desktop, mobile, server, cloud, and IoT. In which of those do you see Orleans — where is its sweet spot?

It's definitely the backend, the server side. Cloud is the natural place, but anywhere on the backend, really.

I mean, from the picture you showed, it's clear that elasticity of resources is a key point, both from the standpoint of scaling out but also from the standpoint of failure and recovery — and pretty much all the other computing mediums I mentioned don't have those characteristics.

Right.
And from the beginning, when we started Orleans back in research, the goal was the cloud, because it was coming. That's where you have failures at a scale you can't guard against, and the elasticity.

Makes sense. So I know from having talked to you before that one of the big high-profile users of Orleans is the Halo service. I think it's been using it for, what, five years or something like that — or maybe even longer?

Six.

OK. And has that been a success — are they still happy with it?

Absolutely happy. And they sort of infected other people, because people say: oh, if it works for Halo, which is a bigger scale than mine, then I should be fine with it. So it was a huge proof point.

Right. So — I shouldn't put words in your mouth, but in a short number of words, how would you describe why the folks at Halo are happy with it? What's the specific problem it solves for them?

There are two points. We touched on scalability, and that was one of the two main goals, but the other main goal was developer productivity — because you don't have to write code for a lot of these things that handle failures. For example, we propagate exceptions through the distributed call chain, so you can write try/catch as if things happened locally. As a result, you don't need to write code to deal with failures, make mistakes there, and then debug it. So developer productivity is much higher. Some people say they write three times less code; other people claim ten times less. I don't know what the true metric is — somewhere between three and ten.

That's my feeling too. So that contributes to the efficiency of developers' work — but also, the less code you write, the fewer bugs you introduce.

Absolutely.

So it's used in some big services at Microsoft. Has it also been adopted in the outside world?

Yes, of course. We had a public preview in 2014, I believe, and then we open sourced in January 2015 — and it's been used for a very wide range of applications, some things we wouldn't even think about. One of my favorite projects is an energy management system in Hawaii. They have a lot of green power during the day, and they distribute it to these ceramic water heaters, turning them on and off on very short notice using a system written in Orleans, to consume and divert excess energy and store it in that form. Everyone thinks "we need a battery" — well, you have a heat battery. Then you heat water for free.

That's a fascinating project.

Another one was a couple of million mousetraps that send a signal when there's a mouse in the mousetrap.

No, literally?

Literally, mousetraps.

I can kind of grasp the mousetrap one, but what is it about Orleans that aligns with this energy-transfer story?

The way you can explain it is: Orleans shines when you have lots of contexts, and a context may be a user, a session, a device. So you have these water heaters that are spread around.
Around they have their own properties, like I say, and you send requests, to that specific, context and say turn on turn off give, me your state and, because. It's in memory you have got for NIT they get they call a ghost. Device. Without. What determines sort. Of fraction of a device in the cloud and memory but has a copy of the state, it. Can be very like low latency. They're, low latest operations, with that yeah, I don't, think anyone actually does this but with those kind of systems I always think of it's. Like. You. Know by power. During. The night, and. Then pump water behind, the dam. With. That cheap power and then and then. You. Know run the turbines during the day and sell at. British power kind of the same yeah it's easily do that yeah. For. Sure. So. If. Folks were interested in, using. Orleans maybe maybe, just to try and experiment with it to see what it's like or actually adopting, it where, would they go to try. That out. So, does it on github is that yeah so it just go to github and search for Orleans. You'll. Find it there we, also have the get a chat on blinkers, there that's, where a community hands, out and, that. Sort of gives a lot of energy to my team when they go and talk with those people in, real-time and answer their questions, and hear their ideas and feedback so. Kind of feedback comes in form. Of issues and combinations, like Andy github but the same time there is real-time. Near, real-time chat, where people just go. Oftentimes they build comment say oh thank you for you doing and what can be better, yes. It's, it's, very nice but. Instant, gratification to, get that so, I. Guess, maybe, two last questions. One. Is. What. Are your aspirations for. What you want to do with Orleans next and then. The second, one is just any closing, comments. So, we just had a big, first, really, major release. And in March early sto. The. Goal there was that. Net, corn. And that standard compatibility. So now people can run literally. 
Anywhere they want backwards. Linux, docker, containers, that, was a big deal for us on the way in restructure, it and we mature the codebase much more much more. Dependency. Injection friendly, and to. Be much more serious about backward, compatibility, now and as. Part of that we. Introduced, data of, transactions. Which. Keeps. Blowing people's mind when say distribute transactions that they're. Completely impossible they, slow, and we're trying to prove that no that can actually scale, and be efficient. So that, that's the next big. Thing we're finalizing we have beta that's, working, but we finish, it with, internal. Team that, does, virtual. Commerce for gaming, to. Productize, it right. Make, sure you take, the right amount of money in one account and put it in another kind of thing yeah they have funny. Sites like exchanging, gold, coins for cannonballs and you don't want to spend gold, coins that you don't have right so the content rules apply there as well yeah well I'd be happy to spend gold coins I don't have to, get your cannonballs, no problem there.
In closing, I would invite everybody to GitHub, to the open source — it's a great place, a great community. It has some people that sometimes don't even use Orleans in their day job; they just like the crowd, because this kind of project attracts people that are experienced and passionate. You don't get people there doing a lazy job — they would not go there. So you have a self-selection of people that really know what they're doing, are really passionate, and are very good to each other, helping and exchanging ideas. So I would advise everybody to go there, check out Orleans, and ask questions — it's a very friendly crowd. We're there; my team is there.

That sounds like a wonderful community.

Yep. Since we open sourced, I've had no problem hiring people and explaining what we do.

Yes, for sure. That's what this whole open-source thing is, right? A hiring mechanism.

No, no — I mean, it sets the bar. Your code is out there and everyone can see it, so you cannot do a sloppy job; you try to do the best you can. And you learn to accept feedback — it's not like the first instinct, where all your neurons start being defensive.

Yeah, I'm sure you guys have learned that.

Oh yeah — we're not doing that. Not being defensive.

OK, awesome. Well, thanks for coming by. It's nice to see where Orleans has ended up, having worked with you on this maybe two years ago. So take a look on GitHub — I think it's github.com/dotnet/orleans, right?

That's right — we're in the same GitHub org.

So take a look — it's basically one of .NET Core's sister projects. Give it a try, and thanks for watching another episode of the On .NET show.