Ingesting Data into Neo4j for Master Data Management
Pat Patterson


Okay, good morning, and welcome. Thank you for joining me for this first breakout session of GraphConnect 2018. I'm going to be talking about ingesting data into Neo4j, and I'm going to focus on the use case of master data management, but really it's a general-purpose problem. I'm Pat Patterson. I'm a technical director at StreamSets, where I help companies to unlock the value of big data, specifically by figuring out how they're going to move their data around. You can email me at pat@streamsets.com, or you can follow me on Twitter at @metadaddy.

I'm going to start by talking for a couple of minutes about what master data management actually is, just to set a little bit of context, and I'm going to focus on a specific use case in product support: we want to analyze customer support tickets and join data from different sources. I'm going to look at how we can use an open source tool, StreamSets Data Collector, to move data into Neo4j for this purpose. What I'm actually going to focus on are the techniques we can use, so the Cypher statements we can use to move data and to build relationships as we unify data from different sources. And I'm going to show you how we can also use Cypher to start joining that data once it's in Neo4j and deriving insights that are impossible when it's all spread out. I'll mostly be showing you this live; I'll try to get through the slides quickly so we can get on to the actual practical example.

Just a minute or two on StreamSets, the company. We were founded just over four years ago, and our founders had worked at Informatica and Cloudera, so they had long experience in data integration and big data. What they had realized is that the then-current tools for data integration, the ETL (extract, transform, load) tools, did not address the world of continuous streaming, big data, and NoSQL; they were very oriented towards a static-schema, relational-database world. Fast forward to today: we have dozens of customers, about half from the Global 8000 and half smaller companies. Our open source product, StreamSets Data Collector, has been downloaded by, and this slide is actually a little bit old, probably over 2,000 different companies, and we've seen over a million downloads over the years from across the world. One thing that sets us apart is our broad connectivity: we can connect to over 50 different data stores, streaming systems, and so on. And the innovation in our engineering team goes right back to essentially the dawn of Hadoop; we have committers to Hadoop core and related projects, so Spark, Sqoop, and Flume, on staff.

So, master data management. The core problem is that in any enterprise, data is spread out around a number of systems, and this is necessarily the case: we have different applications with different data stores that we use for different purposes, and they can be located on premises or, increasingly now, in the cloud. And data sets often overlap. Now, this is a good thing in that we want to be able to build correlations between data in different places; we want common identifiers to use as keys between one system and another. But too much overlap gives us pain, because we don't always know what the source of truth is for a given piece of data when it's duplicated across data stores.
So it's hard to draw insights from data when it's scattered across different systems; it's hard to build those relationships from, say, an HR system to a support ticket system to records of faults coming in from our IoT platform. Master data management is this whole practice of building a single point of reference from which insights can be drawn. That's kind of an abstract statement, and it can take many forms.

We can build systems that effectively proxy requests out to different data stores, or we can synchronize data into a single location, and that's the approach I'm going to focus on, because that's what we can use with Neo4j. Traditionally, when we did master data management, we would copy data into a relational database; that was the tool of choice. As Emil was saying on stage earlier on, back in the 90s you had about four choices for a database, and they all basically followed the same model; it was just a different sticker on the box of floppy disks.

But what if you could use Neo4j, or another graph database (there must be others, I suppose), to model this master data as graphs? Then we can follow those relationships and perform analyses that are not possible, or at least very difficult and time-consuming, in traditional relational data stores. Now, often the biggest problem is just getting the data into Neo4j. How many people in the room here have written Cypher queries? Okay. So once the data is in Neo4j, we know what to do with it, right? We can find shortest paths, we can find relationships, we can write queries; it's all golden. But getting the data there is often the single first step that's the hardest.

So, the use case that I built a system around: I have a customer service platform that holds support ticket status, so it's got status, priority, the date the ticket was opened, the subject of the ticket, all of that data, and it's got the assignment to support engineers. Now, a customer service platform often allows more than just the support engineers to log in and view tickets; it's going to allow their managers to view that status, and it's going to hold some information about which engineer reports to which manager, but it's not the authoritative source for that data. The HR system, which in my example is an on-premises relational database, is the source of truth for the reporting hierarchy, so if there's any discrepancy between the SaaS customer service platform and the HR system, the HR system wins. And to help in our fault diagnosis, we have device data that might be being reported from an IoT platform. These are just flat files with a simple structure: files arrive on disk that report device status and faults. So the question is, how can we bring these together to get a holistic view, so that we can run queries that go right from the faults that devices are experiencing, through the support tickets, to the reporting hierarchy?

Now, StreamSets. This is our kitchen-sink slide that shows everything we do. StreamSets is a platform for DataOps, so operationalizing the flow of data around enterprises, and it's vastly flexible. As I mentioned, there are about 50 different systems, or types of systems, we can connect to. I'm not going to go into detail walking around this slide; suffice it to say we can talk to APIs and IoT platforms, people use us for reading firewall and web server logs, and databases, and then we typically feed big data stores.

We partner with Cloudera and MapR and Databricks, and we can write data into Hadoop, Amazon S3, Google Cloud, and so on. What we're really enabling is analytics: we're reading, typically, from operational data stores and writing into more analytical big data stores for the purpose of drawing those insights. And this is all really complex, and it could be on premises and in the cloud, but really it boils down to this: we think of ourselves as a Swiss Army knife for data. We can build data pipelines that source data from any of dozens of possible sources, transform it, and write it to a similar number of destinations.

So, about 18 months ago, one of our SEs came to me and said: one of our customers, a financial institution, is really interested in writing data into Neo4j for analysis; can we make that work? And I went away and looked at Neo4j and started to bolt some pieces together, and found out that it was actually pretty straightforward to work with Neo4j from standard tools. Just as a quick point of interest: how many people in the room have heard of StreamSets before today? Okay, a good smattering. How many people have used StreamSets before today? One or two, okay, great.

This is a pipeline that we'll go into in more detail in a little while, but it's reading data from a delimited file on disk, performing, let's see, about four different transformations, including filtering relevant data, and writing that data to Neo4j. And what I discovered is that the Neo4j JDBC driver is really high quality, and really performant, for these purposes. We can construct Cypher statements (I guess technically they're not queries, because we're creating nodes), and we can create both nodes and relationships through Cypher very efficiently. You can see here I'm creating a node for a fault, matching a device, and then creating a relationship where a fault affects a device.

So, how many people in the room have used this JDBC connector? A handful, okay, one handful. This is not part of the Neo4j product per se (and my machine has just gone to sleep, so I'll need to wake it up later). The JDBC connector is a contribution to, I guess, the broader Neo4j project by an Italian company called LARUS, but it is the official JDBC driver for Neo4j: when you go and look for JDBC on the Neo4j website, this is where it takes you. It's actually very high quality. I've worked with a lot of JDBC drivers, and quality is very, very variable in the way people implement the JDBC interfaces; there are often unimplemented methods that just throw an exception instead of doing something sensible. But I haven't had any problems with this driver; it works very nicely indeed.

One technical note: it can use Bolt or HTTP/HTTPS. Bolt is a binary-over-TCP protocol that the Neo4j drivers use to communicate with the server, so across all the languages they use this low-level protocol. If you're making choices when you use this JDBC driver: Bolt sends about one tenth of the number of bytes over the wire compared to JSON over HTTP, so unless you have an overriding need to use HTTP, I would use Bolt when communicating between the JDBC driver and the server. And happily, it's enabled in Neo4j by default, because it's the main method of integration.
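As a hedged sketch, a per-record statement like the one on that slide might look roughly like this; the labels, property names, and parameter names here are my illustration rather than the exact demo code:

```cypher
// Create a fault node, find the device it was reported against, and
// link the two. Fault, Device, AFFECTS, and the property names are
// assumed for illustration; the parameters come from the ingested record.
CREATE (f:Fault {code: $fault_code, occurred: $timestamp})
WITH f
MATCH (d:Device {serial_number: $serial_number})
CREATE (f)-[:AFFECTS]->(d)
```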
When you set up JDBC, you will need a URL of this form: jdbc:neo4j:bolt://<hostname>. The "neo4j" selects that driver, the "bolt" selects that protocol, and then there's the hostname.

So, the architecture I built is very, very simple. I'm going to read MySQL in one pipeline, and I'm going to read support tickets from Salesforce. Full disclosure: before StreamSets I was a developer evangelist at Salesforce, so I know their system and I know their APIs. I'm going to read support tickets from Salesforce Service Cloud; you could equally apply this to Zendesk or any other support ticket system. So I'm going to read those in a second pipeline, and then I'm going to read those flat files and do some transformations in the third pipeline, but bring them all together, unify them, in Neo4j.

And this is the goal: to create a graph where the purple nodes are employee data, so if you were to pull on the right place in the graph, you would see it settle out into a reporting hierarchy. That's coming from our employee database, our HR system. The green nodes are support tickets; they're coming from Salesforce. And then the red nodes are devices, and they really connect the tickets to the blue ones, which are the device faults coming from our IoT system.
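To sketch that target shape in graph terms (the labels and relationship types here are my guesses from the demo, not an authoritative schema):

```cypher
// Rough shape of the unified graph; all names assumed:
//   (:Employee)-[:REPORTS_TO]->(:Employee)   // from the HR database
//   (:Case)-[:ASSIGNED_TO]->(:Employee)      // from Salesforce
//   (:Case)-[:CONCERNS]->(:Device)           // from Salesforce
//   (:Fault)-[:AFFECTS]->(:Device)           // from the IoT flat files
```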

So what I'll do is just log back into my laptop, and then we can go to the eye candy. And unfortunately, when I do the eye candy, my eyes need some help, okay.

So, this is StreamSets Data Collector. As I mentioned, it's Apache 2.0 licensed open source; it's, I guess, the core of our platform. And I have my three pipelines here. If we look at employees, it's a very, very simple pipeline. What I'm going to do is read data from MySQL. Really, this is just a microcosm of an HR system: we've got an employee table and a table of titles. The employee table just has a few fields, but crucially, it has the reporting structure: every employee apart from our VP has a manager ID that links them to another employee. And we're in this kind of relational, normalized world, where instead of having job titles in the employee table, we have a foreign key to a title table. So like I say, this is two tables where real HR systems probably have dozens, but it's enough to show the core of what we're doing.

Over in StreamSets, we can read this with a JDBC data origin. What we can do is actually have the database do some of the heavy lifting here: this is just a standard SQL query that is going to read fields from that employee table and join it to the title table, because we don't want numbers in Neo4j for job titles, we want the actual strings. It's going to do that join in the query and then just write that data to Neo4j.

So this is where the magic starts to happen. We have our JDBC connection string, and then we have some Cypher. What's going on in this pipeline is that we have a micro-batch architecture; it's kind of similar to Apache Spark. We read batches of data from JDBC, so it might be a thousand records at a time, and feed them through this pipeline, so each batch gets fed through here, and then this Cypher is executed for each record. Okay, so we're reading records from JDBC and writing them, well, via JDBC, to Neo4j, and we get to write freeform queries. By default, for relational systems, we would use a different JDBC destination that is designed for doing SQL inserts: you wouldn't need to write the SQL insert, you would just say, okay, these are the fields that I'm interested in writing, and the system builds the insert for you. Because we're working with Neo4j, we do Cypher, and we get to write a freeform query.
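As a hedged sketch of what that freeform statement might look like (I'll walk through the real one in a moment; in Data Collector the parameters are actually substituted from record fields, and all names here are assumed):

```cypher
// Upsert one employee per record read from MySQL. MERGE keys on the
// employee ID, so re-runs update existing nodes rather than duplicating.
MERGE (e:Employee {employee_id: $employee_id})
SET e.name = $name, e.title = $title, e.manager_id = $manager_id

// Link the employee to their current manager (assumes the manager's
// node has already been ingested).
WITH e
MATCH (m:Employee {employee_id: $manager_id})
MERGE (e)-[:REPORTS_TO]->(m)

// Housekeeping: drop any REPORTS_TO relationship that now points at a
// former manager.
WITH e
MATCH (e)-[r:REPORTS_TO]->(old:Employee)
WHERE old.employee_id <> e.manager_id
DELETE r
```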

Here it kind of says "SQL Query", but it's what we submit to JDBC, and we substitute in values from the record that we read. So we can set the employee ID, the name, the job title, and then, crucially, we're using a MERGE. What we're saying is: this is not a one-time process; we're going to need to periodically read this employee database and then update Neo4j. So the first time around we're going to want to create nodes; from then on we're going to want to merge nodes based on this employee ID, because employees might be changing their names, and they certainly might be changing their manager ID as they move around the company. And then we create the relationship between the employee and their manager, and this is pretty straightforward: we're matching the manager where the IDs correlate and creating that relationship.

And then another interesting trick is that we can delete outdated relationships, all in the same statement. What we say here is: with that same employee we're working with, if there's already a relationship in Neo4j to their former manager, that's this thing here, where the manager's ID is not the employee's manager ID, then delete that relationship. This is a really neat technique, and it's not specific to Data Collector at all; it works with any way of ingesting. If you wrote your own app to ingest data into Neo4j, you could do this housekeeping of the graph as you read data in, using MERGE and DELETE. Okay, one more detail: usually this would run continuously, but I wanted to show a batch use case here, so I'm saying that when there's no more data, we want to stop the pipeline. This is just going to run, read my employees in, and then stop.

And just to double-check here, the thing that always kills me: okay, we have an empty database. So, if the demo gods are smiling on us, this should flick up... yeah, so it flicked up. It was almost imperceptible because it's a ridiculously small database, but we read seventeen records and we wrote nineteen; two of those are events that went to the pipeline finisher. And over here, if I click in the right place, we should be able to see our lovely graph. This is a fairly basic graph, but we can start to run Cypher queries.

Clearly, we can run queries that match what we could do in any relational database. So we could ask: which manager has the most reports? We can eyeball this, because it's so small; it should be Kim Park. And when we run it, yeah, it's Kim Park, and she has four reports. Can I zoom in on this?

So that's just mapping what you can do in relational terms, but we can start to run queries that are very difficult or impossible in a relational system. We can do shortest paths. We have Michael Sinclair and Peter Preston, who are in different parts of the organization: Michael's over there, Peter is over here. (Apologies to people at the back; can I zoom in here? There we go, that's much better.) So we can run that and see that, if we follow reporting relationships, Michael and Peter are far separated in the database. And this is one of those queries where it's one line of Cypher, but you'd have to start writing loops and jumping through all kinds of hoops in a relational database. And then, by contrast, if we just run another quick one with Tony and Peter here, we should be able to see that they report to the same manager.

Now, this is all very interesting, but what happens when we move somebody around? Let's move Peter to report to a different line manager. So we go over here to MySQL and update that. What I'm going to do here is say, okay, this job runs periodically. If we run this pipeline again, that Cypher runs, and what's happened is that it's updated Peter's node and removed the old link. So if we go back to Neo4j and take a look at the graph, it's pretty hard to see here, but we should see that Peter is now reporting to Hana, who now has three reports, and the results of our two queries should be completely different. Now Peter's actually in the same org as Michael, and he is across the company from Tony. So we are maintaining the state of Neo4j continuously; we could run this batch job as often as we liked.
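Sketches of those two kinds of query, with node and relationship names assumed as before:

```cypher
// Which manager has the most direct reports?
MATCH (e:Employee)-[:REPORTS_TO]->(m:Employee)
RETURN m.name AS manager, count(e) AS reports
ORDER BY reports DESC
LIMIT 1;

// How far apart are two employees in the reporting hierarchy?
MATCH p = shortestPath(
  (a:Employee {name: 'Michael Sinclair'})-[:REPORTS_TO*]-(b:Employee {name: 'Peter Preston'}))
RETURN length(p);
```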
But batch jobs are not very interesting. You know, the word "stream" in the StreamSets name kind of implies that we're really thinking about continuously streaming data. So here we are reading Salesforce cases. In Salesforce I've got about eight support cases open, and (let me zoom in on this a bit) they all have priorities and case owners and so on. What we're doing here is, again, a very simple pipeline: we're running a query to pull these from Salesforce, so here's the interesting data, and again writing to Neo4j using a very similar pattern: merge, creating nodes and relationships, and deleting outdated relationships.

But this one's a little bit more interesting. Just let me reset this to make sure that I get all the data. I can run the pipeline, it'll pull those first eight records, and what it's done is it's knitted the data together. So if I look at the graph, I'm already correlating the data here: Alex is working on three support cases. Support cases are green, devices are red. And again, I can run Cypher queries. I can do a simple one, like: okay, who's the support engineer with the most cases? And that's Alex; as we saw, he has three.

And then, again, I can join data that's in different places. I can say: okay, who's the director with the most high-priority cases? And it's going to be Andrew Smith. My Cypher query there is joining data that's in HR, this reporting structure, following an arbitrary number of reporting links to find a person with a job title that owns these high-priority cases. This might be important to see if one department is becoming overloaded with high-priority cases.

Now, what's really neat here is: I've moved eight records across, and if I start playing around with these, maybe I want to change a whole bunch of them to high priority, just to make the numbers different. Let me save those. What I should see when I go back here is that now eleven records have been moved.
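A sketch of that cross-source query, again with names assumed:

```cypher
// Director with the most high-priority cases: walk an arbitrary number
// of REPORTS_TO hops from each case's owner up to someone with the
// title Director. Labels, property values, and relationships assumed.
MATCH (c:Case {priority: 'High'})-[:ASSIGNED_TO]->(:Employee)
      -[:REPORTS_TO*]->(d:Employee {title: 'Director'})
RETURN d.name AS director, count(c) AS high_priority_cases
ORDER BY high_priority_cases DESC
LIMIT 1
```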

What's actually happening is that this Salesforce origin is subscribed for updates on changes to cases. There's a protocol to get notifications in real time when anything changes in Salesforce; these are push notifications, and we push them to Neo4j. So now I'm creating new nodes, modifying nodes, and rewriting those links dynamically. And what should happen here is that if I go back into the graph, the hierarchy isn't very different, but the number of high-priority cases should change. So now Sanjay has the most high-priority cases. Data is changing dynamically in the real world, those changes are being picked up by a data pipeline, and they're being written into Neo4j. So now we're not just running our analytics on a static data set; we're running on changing data that is literally changing from one second to the next.

Okay, so the last little bit of integration here is our device faults. Here I've got to transform the data a little bit, because my faults are in these pretty obscure CSV files: I've got serial numbers, timestamps, and fault codes, and most of it is chaff. All of these zeros are just status reports; they're not interesting. It's anything with a non-zero status that is some kind of warning or something. So what I'm going to need to do here is read the delimited file and parse that CSV, rename a field (it's "SN", and I want to rename it to "serial number"), convert the data types of fields (everything comes as strings from CSV, so I'm making date-times and integers), and then filter on a condition: I only want records whose fault code is greater than zero. Then I'm going to do a lookup, because I don't want numerals in Neo4j; they're not very useful for my analysis; I want actual strings representing the faults. And then I'm going to write all that into Neo4j, and it's very similar Cypher to before.

Now, one really nice thing I can do here is have a quick look at the data as it's going through. I can see the kind of data that gets read from disk, I can see how fields are renamed and data types are converted, and I can see how most of this data goes to the default discard, but this record here is going to go through the lookup and be assigned an actual fault string. So this gives me some confidence that my pipeline is going to do the right thing when I run it. So what's going on here? I've got 86,000 rows, and you'll see they were ingested in less than two seconds. That JDBC driver is very efficient: I was running eighty-six thousand of those Cypher statements in just a couple of seconds. And over here, the graph view goes a bit nuts, because I've got 800 or so nodes I just created and we only get 300 here, so it's always a bit of a lottery. But we see now that these devices are surrounded by these faults.

And it lets me, again, write Cypher that spans the different systems. So I can say: okay, which support engineers are handling critical faults? You know, make sure that these important support cases are assigned to the right people. And again, this was not apparent from any single source system: the device serial number was in Salesforce, and the fault codes are coming from IoT. Here I can see, okay, three employees are handling critical faults; Alex is handling two and the others are handling one each. So I can make sure, okay, Alex is a hotshot support engineer, he can handle those critical faults. And I can do other things, like report on devices which have faults but don't have a support case open for them, and this could be crucial information. So I've got two devices there that have reported faults, but nobody's opened a ticket.
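Hedged sketches of those two cross-system queries, under the same assumed naming:

```cypher
// Which support engineers are handling critical faults? The chain spans
// all three sources: IoT faults -> devices -> Salesforce cases -> owner.
// The 'Critical' fault description is an assumed lookup value.
MATCH (f:Fault {description: 'Critical'})-[:AFFECTS]->(d:Device)
      <-[:CONCERNS]-(:Case)-[:ASSIGNED_TO]->(e:Employee)
RETURN e.name AS engineer, count(f) AS critical_faults
ORDER BY critical_faults DESC;

// Devices that have reported faults but have no support case open.
MATCH (:Fault)-[:AFFECTS]->(d:Device)
WHERE NOT (d)<-[:CONCERNS]-(:Case)
RETURN DISTINCT d.serial_number;
```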

Now I can simulate more data arriving: just drop another CSV file into the directory, and this thing should spin up in just a couple of seconds. So it's spun up from 86,000 to 172,000, and again it's knitting all of this data together, building those relationships. So now if I go to Neo4j and say, okay, I've got another day's data that's come in: now I've got three devices that have faults but no support tickets. So this is really giving us the power to draw these insights from the systems around the enterprise.

Okay, if we can go back to the slides. Writing data into Neo4j is not the only use case this tool, Data Collector and the StreamSets DataOps platform, can be used for. We do a lot of what's called data lake re-platforming, so building data lakes in Hadoop; as I mentioned, we partner with Cloudera and MapR, with their big data products. There's the world of IoT, where there's a lot of data to be read in from devices and IoT platforms. There's bringing data together for cybersecurity use cases, so parsing those log files from firewalls, web servers, and other equipment. And there's working with continuous streams of real-time data: we integrate with Kafka and other messaging systems.

The marketing slide: we have some great customers. GlaxoSmithKline, the major pharmaceutical, actually uses us to bring data together from all of their drug discovery systems, all the drug trial data, into a single data lake, so that their scientists can make more effective use of that data in one place. Cox Automotive built a group-wide data lake from data from all of their subsidiary companies. And Availity, in the healthcare space, found that StreamSets was able to accelerate their time to move data between different systems.

So, coming to a conclusion, and we might even have a couple of minutes for questions. Neo4j can provide insights into data that are just not available when the data is in rows and columns and spread across silos. This shouldn't be news to anybody here; it's one of the main points of value: we can run these graph queries that are just not feasible in relational databases. And that JDBC driver is an excellent tool

for bringing together standard applications that use the JDBC interface with Neo4j. I didn't look at the reading side, but you can write Cypher queries that bring back data that looks like a JDBC row set, just as easily as writing data. And Data Collector from StreamSets can read data from a huge variety of streaming, flat file, and relational database sources, and work with Neo4j very handily via that JDBC driver.

There are some references here; they'll be in the slides that are available to you elsewhere. And with that, I think I have about two or three minutes for questions.

[Audience question.] Yeah, so the question was, can I differentiate StreamSets Data Collector from, I guess, Informatica. I'm not an expert on Informatica; my understanding is that they're a bit more focused towards the relational, schema-driven approach. We tend to work more in a schema-on-read kind of way: when I read that data from a relational database, I don't have to set up a schema. I can introspect the data in the pipeline, using Data Collector like an IDE, to see what's there, and start building my rules to reshape the data. So if I add some columns to the database and run the pipeline again, it really doesn't care; they'll come through the pipeline and be written out to the destination. We call it an intent-driven approach. Yeah, I think that gentleman there was the first hand I saw.

[Audience question.] So, good question: when I update the graph, what am I doing? Am I deleting and creating, or recreating the whole graph? What I'm doing is using a Cypher MERGE to find a node with a particular ID and overwrite its attributes, because my manager ID might have changed, and then I'm creating the relationships between employees and managers, for instance. And then I'm doing a MATCH to find any relationship where the condition is no longer true: if I've now got an outdated relationship, where the employee's manager ID is not the same as the manager's ID, I delete it automatically. So I'm kind of housekeeping the graph. It's pretty efficient, because I'm doing the least possible amount of work. I'm not deleting and recreating the graph; that would be pretty disastrous, and I'd lose all my relationships, because these things can all be updating kind of asynchronously. Question from that gentleman there.

[Audience question about performance numbers.] No, I don't have any quantitative feel there. I mean, you could see there I was bringing in, you know, 86,000 records in less than two seconds. So it's just kind of hand-wavy: it works, and it works well, and everything's running on my laptop.

[Audience question about tooling.] Yeah, I need to look into the admin tools; Cypher is, like, the only tool I've used there. Okay, I'm getting the wrap-up signal from the back, so with that I'd like to thank you all very much for coming, and enjoy the rest of the conference.

