"Introduction to AI for Developers" by Sam Nasr (NIS Technologies)


Welcome to GLUGnet. Glad you could be here, whether you're here live or via the recording. Tonight we're going to have a presentation, "Introduction to AI for Developers," by Sam Nasr. Sam is a long-term member, supporter, and helper with GLUGnet, and a sponsor through his company, NIS Technologies, so we very much appreciate his involvement in the group, both as a supporter and a contributor. We're very fortunate to have Sam's talk tonight.

A little bit more about GLUGnet: GLUGnet is the Greater Lansing User Group for .NET. GLUGnet has been in existence for 21 years, and I have been on the board and assisting in running it for 20 of those 21 years, so I'm getting old, sorry folks, or rather, that's getting old. We're based in Okemos, Michigan, and we meet the third Thursday of every month; that has been our pattern from the very beginning. I think there were one or two times we deviated from that for very special exceptions, but it's very rare. Like most meetup groups, we're free and open to the public. We're currently virtual only; the hope is that at some point this year we will go to a hybrid model, where we will have virtual presenters, or in-person if we're lucky enough, and then meet, watch the presentation, and get some of that social interaction afterwards at a local restaurant or watering hole, and get some networking time after the meeting. So we're looking forward to trying to get that going again.

This presentation will be posted to the YouTube channel, Sam's own YouTube channel, where all the GLUGnet presentations are posted, so you're welcome to go there and watch this presentation, probably in a week or two. All the past ones are there too, so if you want to catch up on all the great GLUGnet content that's out there, you're welcome to enjoy some. And I think, Sam, you also post other groups, right? Not just GLUGnet, not just Cleveland? Actually,
the two in Cleveland, the Azure Cleveland User Group and the Cleveland C# User Group, and of course GLUGnet. Yeah, so a lot of really great content, a lot of really great content beyond just what we can offer.

Some housekeeping items. If, during the presentation... and Sam, do you want people to ask questions as they have them, or do you want to hold them to the end? Either or; feel free to jump in at any point in time, and then I will also stop at the end and ask for questions. Okay, sounds good. So, for the convenience and enjoyment of others, while Sam is doing his presentation please keep yourself muted until you have a question; we would appreciate that very much. And Steve, are there any other housekeeping items I should be mentioning? Okay, I'll take that as a no.

All right, at the end of this presentation we'll make available a survey, and that survey will ask about the meeting and also give an opportunity to give feedback to Sam on his presentation. Being a speaker myself and knowing Sam very well, I know that he's very interested in getting that feedback. Sam works very hard to put together high-quality presentations, and getting that feedback makes them even better, so it'll be very much appreciated. That will be available for 24 hours after the meeting; we'll put the code back up at the end of the meeting so you can get a link to it, and I think we'll paste the link into the chat as well.

Finally, I want to thank our sponsors. Our sponsors are the .NET Foundation, which is kind enough to cover our meetup.com subscription fees (that helps us greatly), and then of course NIS Technologies, Sam's group, which sponsors the YouTube channel and the meeting software, GoTo, and really, he pitches in anywhere he can, so we appreciate it very much. And with that (that slide is actually where we meet in person, which we're not doing today), now it's time for the feature presentation you've all been
waiting for, and I know, Sam, I know it'll be great, so I'll switch it over to Sam and we'll go from there. Thank you, Joe, I highly appreciate that beautiful introduction. Totally unscripted! It is from the heart. There you go. All right, well, thanks again, Joe, I appreciate the introduction.

Hello everyone, my name is Sam Nasr. As Joe said, a little bit about me: you can find me on Twitter at SamNasr. I've been a software developer since 1995. I am a senior software engineer as well as a trainer for NIS Technologies, and I've been certified with a few certifications from Microsoft and still continue to add to that list. In addition, I run the Cleveland C# User Group as well as the Azure Cleveland User Group, I've authored multiple articles for Visual Studio Magazine, did several courses for LinkedIn Learning, and I'm a seven-time Microsoft MVP in the AI field. A little shameless plug, if I may, for the Cleveland C# User Group: much like GLUGnet, we meet every month, the meetings are free of charge and open to the public, and we cover a variety of topics related to .NET. We meet, like I said, typically every fourth week of the month, and you can find the meeting information posted at meetup.com at the link listed at the bottom of the slide.

So with all the introductions and shameless plugs aside, let's dive into tonight's presentation, which is an introduction to AI. Many of you have seen these terms: AI, ML, DL, NN. It kind of seems like alphabet soup, pardon me, and you really don't know how to refer to it, and so AI has become the general umbrella whenever we're referring to any of these. But as developers we certainly want to gain insight into what a neural network is, what deep learning is, and how it differs from ML and AI. So basically, as you see here in the diagram, it shows that the neural network is the underlying foundation; it is the smallest building block that we then build upon, and we'll get into
what a neural network is in just a second. But essentially we use that for building up layers: we get into deep learning when we have multiple neural network layers, on top of that we're able to utilize machine learning, and then we get into AI, which is essentially mimicking human understanding and responses.

So essentially, what a neural network is: basically, you have an input coming in, you make a decision one way or the other, and then you have an output, but you could have multiple layers in between. Each one is considered a node, as you see in the diagram here; every green item is considered a node. Essentially you can line them up, so you have multiple nodes, multiple if-then statements if you will, that are intertwined with each other, and in the end you get one single output at the very end.

Now, what does that look like in practical terms? Well, let's back up a little bit and let me give you a more practical example. Suppose, for example, that I was a big-time surfer. I've never surfed, but let's just assume that I am. Now, if I'm going to go out surfing, I need to evaluate a couple of things: number one, are the waves good? Number two, is the lineup empty, or is the beach fully crowded? Number three, is it safe, meaning there are no shark attacks? We'll just stick with those three for the time being. So what I have here is essentially three nodes that I'm going to be evaluating in order to make my decision whether I'm going to go surfing or not. The input is the question, "Do I go surfing?", and the output will either be yes or no; in between are these three questions that I need to ask myself.

Now, with each question I'm going to give it a specific weight. For example, for node number one I'm going to give it a weight of five, because large swells in my area don't happen too often on the edge of Lake Erie, and so if the swells are high then yeah, I'm going to go for it. I'm going to give that a weight of five; that's going to
emphasize my wanting to go surfing. If the lineup is empty, yeah, that's good, but being an avid surfer I'm willing to go surfing even if the swells are good and the lineup is full, so I'll give that a weight of two; it's not that important. But then you look at something like a shark attack: if there's been a recent shark attack and it's not safe, that's going to be a weight of four. And if you notice, because I'm a die-hard surfer, a large swell will outweigh a shark attack, so that has a weight of four versus a weight of five for the swells. The whole point is, I'm assigning weights to each decision: is it really worthwhile for me to go surfing, given whether the waves are good, whether the lineup is empty, and whether it's safe? Each specific item has a specific weight.

So I give each node a binary answer of either yes or no, one or zero, and then in the end I put all those weights together and I have a threshold. In this case it's rather arbitrary, but I'm saying if I'm above a three, I'm going surfing; if I'm below a three, I'm going to change plans. So in the end I have my equation, where I take my yeses and nos, I assign weights to them, I have the final threshold of three, and then I see where I'm at against that threshold. So you see here that there are large swells, so that's a heavy weight of five. The lineup is not empty, it's full, so that's an answer of zero, I'm sorry, but times two it's still zero, it doesn't matter. And then, is it safe? That's a one, meaning yes, and that has a weight of four. So my two big priorities, it's safe and there are large swells, definitely exceed the threshold of three, so I'm going to grab my board and hit the waves. Taking all that into consideration, this is an actual example of what a neural network is. So looking at the previous diagram, what we have here: we have all the nodes, they're assigned certain weights, and I feed it an
input, the question "Should I go surfing?", and then it goes through all the individual weights: it answers yes or no, assigns the weights, checks whether I meet the threshold, and then I get a final answer in the output layer. So this is a very simplistic example of a neural network.

Now obviously, even in real life there are more questions that I would ask myself: is the water temperature good, or am I going to freeze? Is there much daylight left, so that I can see other surfers without hitting them? Are there a lot of surfing vets out there, or is it amateur night, where people are bumping into each other? So I can add in multiple additional questions, and of course that's going to expand my network to be much larger. This being a simplistic example, we went from three questions to six questions here, but in reality, when we're building a model, we're talking billions and billions of nodes, hundreds of billions of nodes to be specific when it comes to a large language model. That's because it can then mimic human understanding, mimic the response, not to mention the reasoning that happens in between as well. In addition, large language models can be very creative; they can design things and draw things for me.

So the building block is a neural network, and we build on that. Any time I have a network with more than three layers, that is essentially considered deep learning, where now I have multiple networks, and so there is deep learning involved. And with that deep learning I can develop things out of it, meaning I can build on that and develop machine learning. Machine learning essentially comes in two flavors: supervised and unsupervised. Supervised is where I give it data and then I tell it "this is good" or "this is bad." As we'll see in the example that we'll run through in a second, I'm going to have it do sentiment analysis for a local pizzeria, and I will give it examples of good comments as well as examples of bad comments.
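To make the earlier weighted-threshold example concrete, here is a minimal sketch of that single decision node, using the weights (5, 2, 4) and threshold (3) from the talk. Python is used purely to keep the sketch short; the same arithmetic ports directly to C#.

```python
# Single "should I go surfing?" decision node, as described in the talk.
# Inputs are binary yes/no answers (1 or 0); weights say how much each matters.

def go_surfing(large_swells: int, lineup_empty: int, safe: int) -> bool:
    weights = (5, 2, 4)   # swells, empty lineup, safety (weights from the talk)
    threshold = 3
    score = (weights[0] * large_swells
             + weights[1] * lineup_empty
             + weights[2] * safe)
    return score > threshold

# Large swells (1), crowded lineup (0), safe (1): 5*1 + 2*0 + 4*1 = 9 > 3
print(go_surfing(1, 0, 1))  # True: grab the board
```

A real network stacks many such nodes into layers and learns the weights from data instead of hand-picking them, but each node is doing exactly this weighted sum against a threshold.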
It's going to learn, and it's supervised in the sense that I told it "it's good" or "it's bad." Unsupervised learning is where essentially the model is trained so that it can classify things together. For example, it knows that this is a cup; it would also know that this is a cup, and it would classify those two together. But then if I introduce something else, like this cell phone, that wouldn't be classified as a cup, so it would classify those as two separate items. And it doesn't need to be supervised, in the sense that once it's trained, it can classify items on its own after that.

So now this is the machine learning level. We build on top of that to get to AI, where now I am mimicking human understanding, so I can ask it questions like "draw a cup for me," and it would actually draw a traditional cup for me. Then I could say "make it with a large opening" or "make it with a large handle," and it would take that, through natural language, and be able to respond. Now, when I say natural language, I mean natural language as I'm speaking to you right now. As developers, we are trained on very rigid syntax, right? In C#, it has to start with an opening brace, every statement must end with a semicolon, strings must be enclosed in double quotes. It is very rigid, very strict; I can't go outside of that. I can't just say, "well, I'm just going to leave a semicolon out." We've all been there, and we've seen the kind of errors that generates. But with natural language, it's basically like I'm talking to another human: if I were to ask Joe, "Joe, can you get me a cup?", he would naturally go down to the cafeteria, grab a cup, and bring it back, right? So it is very natural; I don't have to specify the sentence or the question in a specific format, I can just talk to him.

So let me pause here for a second. I know I threw a lot at you, but do we have an idea of what NN (neural networks), deep learning, machine learning, and AI are, and how they differ from each other at this stage?
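The unsupervised grouping just described (cups cluster with cups, phones with phones, no labels given) can be sketched with a miniature one-dimensional two-centroid clustering. This is an illustration only, with invented numbers; it is not what any production service does internally, and Python is used just to keep it short.

```python
# Toy unsupervised grouping: no labels are given, the algorithm just pulls
# similar values together (a tiny 1-D k-means with k = 2).
# The values are made up, e.g. some measured size of each object.

def two_means(values, iterations=10):
    c1, c2 = min(values), max(values)      # start the two centroids at the extremes
    g1, g2 = [], []
    for _ in range(iterations):
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        if not g1 or not g2:
            break
        c1 = sum(g1) / len(g1)             # move each centroid to its group's mean
        c2 = sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

# Two natural clusters in the data; the algorithm finds them without labels.
small, large = two_means([1.0, 1.2, 0.9, 5.0, 5.3, 4.8])
print(small, large)  # [0.9, 1.0, 1.2] [4.8, 5.0, 5.3]
```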
And feel free to either jump in verbally or ask questions in the chat. "Yeah, I have a question, Sam. You mentioned that the neural network..." (I'm sorry, I can barely hear you.) "Right, let me put my microphone back. You mentioned that deep learning is typically three or more neural network layers. When would you combine neural networks? What would make you think that you should have multiple neural networks?" So it's not necessarily me. Being a developer and someone who's a practitioner of AI, I come in at more of the machine learning and AI level; it's more the scientists who develop the neural network, and they're the ones... and like I said, the simplistic example that I gave is very simple. Realistically, a large language model would have billions upon billions, literally hundreds of billions, of neural networks, or nodes rather, and so it's very extensive. And it goes much deeper as far as natural language understanding, where it breaks up words into what are called tokens, and so that layer tends to grow quite a bit in order for it to be able to understand human language. Does that answer your question? "Yes sir, it does." Very good. Any other questions? All right, very good. As we proceed, again, please feel free to jump in verbally or through the chat window, and if I miss the chat window, kindly remind me.

And so with that, let's continue on. We are now at a level where we can utilize machine learning, so let's take a look at the terminology. With regards to machine learning, a feature is what's considered the input value. For example, when I'm buying a used car, what are some of the features, or input values, that will determine the price? There's the model year, the mileage, the condition of the vehicle; all these things contribute to the price of the vehicle. So features are the input values, such as, like I said, the
model year, the mileage, the condition, etc. And then the output is what's known as my label; that is the ground-truth value, and this is what's used to train the machine learning model, in the case of supervised learning, that is. So we have all those features that will essentially dictate what the output price will be. In a nutshell: features are inputs, labels are outputs. What happens in between is what's called a transform. This is a type of prediction where I am taking a value in, transforming it, doing some kind of processing on it, and then producing an output value. That could be done in a variety of different ways depending on the machine learning scenario that I'm utilizing.

For example, some of the machine learning scenarios are classification and categorization, where I give it several items and say, classify these into two separate categories: what's a cup and what's a cell phone. There's also regression; car prices are an example of regression, where as the car ages over time, the price decreases. The opposite happens with home prices: as time goes on, the price increases, but of course it's also dependent on the input features of the home, i.e., the number of bedrooms, the ZIP code, the school district; all those things affect the price of the home. One of the other scenarios that I'm fond of is anomaly detection, where essentially it will track the usage for a user (this could be credit card usage, or how often they clock in or swipe their badge to enter a secured area) and it will detect any anomalies that are outside of the norm for that user. Another one is recommendations. I'm sure you've all heard of a website called amazon.com; this is typically where half of my paycheck goes every month, and the reason being, every time you go to make a purchase it also makes a recommendation: oh, I see that you're buying a laptop, well how about a nice mouse to go along with it, or how about a mouse pad to go along with it?
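The feature/label/transform vocabulary from a moment ago can be pinned down with a tiny regression sketch: the feature is the car's age, the label is its price, and training finds the transform between them. The numbers are invented for illustration, and Python stands in for what would be an ML.NET pipeline in C#.

```python
# Features are inputs (here: car age in years); the label is the output
# (price, the ground truth). Training fits the transform from one to the
# other; here, a one-feature least-squares line on invented data.

ages   = [1, 2, 3, 4, 5]                        # feature values
prices = [27000, 24000, 21000, 18000, 15000]    # labels (ground truth)

n = len(ages)
mean_x = sum(ages) / n
mean_y = sum(prices) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(ages, prices))
         / sum((x - mean_x) ** 2 for x in ages))
intercept = mean_y - slope * mean_x

def predict(age):
    """The learned transform: feature in, predicted label out."""
    return intercept + slope * age

print(predict(6))  # 12000.0: the line extrapolates a 6-year-old car's price
```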
Right, so it's making recommendations based on previous purchases for the current user and their current purchase. There's time series, or sequential data, where essentially it's forecasting product sales or spending over a period of time, and that's used to analyze how much I should increase the price of my product based on previous price increases or previous usage. And then there's image classification, like we talked about, where it can actually identify pathologies in a medical image. For example, if I have an X-ray and there's a spot on a lung that doesn't look 100% normal, a model can be trained to classify that as abnormal and needing further diagnosis. So essentially these are the various machine learning scenarios that are available to me.

Now, if I want to utilize machine learning, it's basically a cyclical process, and let's go with examples that are more numeric-based, like linear regression. Essentially, what I need to do: we start at 12 o'clock, where we're going to collect and load the data; from there we're going to create a pipeline using the Append method; and then we're going to train the model. After the model is trained, we are going to evaluate the model, meaning we're going to test it, and depending on the results of that, either we collect more data and train it again (so it's a cyclical process), or we can simply proceed to save the model and be able to utilize it in another project, as we see at the bottom of the slide. Okay, so essentially it's a cyclical process: collect and load the data, train the model, evaluate, and then identify whether you need to save the model and use it, or retrain it on additional data. Once you get to the point where you're ready to use the model, essentially in ML.NET what this is going to utilize is the Load method, and from there we'll make a call to CreatePredictionEngine, and then a call to the Predict method specifically, passing in a value so it can make a prediction. Everyone with me so far on these concepts?

"Yeah, I have a question." Yes, please go ahead, Kevin. "In the previous slide, why wouldn't you save after the evaluation? Like, wouldn't you want to evaluate your model, say this is good or this is bad, and then save?" So you're asking whether I would save it right here, in between evaluate and collect/load data? "Yes, right, at the evaluate step, where you get your results." Correct, that would be a more accurate placement of it; you're absolutely right. This is just essentially to get the point across that it's a cyclical process, but once you're done with it and you're satisfied with the evaluation process, then you would save it. "Okay, yep, understood, thanks." I will fix that slide though, good point. Okay, any other questions? All right, very good.

And so, now that we have our model, now that we know what's conceptually happening, how does the rubber meet the road? How do we actually implement this in Visual Studio? Some prerequisites are needed if you're going to be utilizing ML.NET. First off, within the .NET desktop development workload, as we see here, you need to specify that you're going to load the ML.NET Model Builder, so this would need to be installed as part of your environment. Once that's installed in your environment, you essentially have three options (and sorry, the slide is a little bit messed up), but essentially there are three ways to go about adding an ML.NET project into your solution. Number one, you can do it with a UI, as we'll do in just a second. Number two, you can do it programmatically, because essentially that's what the UI is doing on the back end: it's generating code, so I can skip the UI and just write the code myself.
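The cyclical process on the slide (collect and load data, train, evaluate, then either gather more data and retrain or save the model) can be sketched as a small skeleton. The collect/train/evaluate functions here are stand-ins invented for illustration, not ML.NET calls (ML.NET's equivalents, such as Load and CreatePredictionEngine, live in C#); Python just keeps the shape of the loop visible.

```python
# Skeleton of the slide's cycle: collect/load -> train -> evaluate ->
# (collect more and retrain) or stop, satisfied, and save the model.

def run_cycle(collect, train, evaluate, target=0.8, max_rounds=10):
    data = collect()
    model = train(data)
    while evaluate(model) < target and max_rounds > 0:
        data += collect()        # cyclical step: gather more data...
        model = train(data)      # ...and retrain on the larger set
        max_rounds -= 1
    return model                 # good enough: ready to save and consume

# Stand-ins: here, measured accuracy simply grows with training-set size.
collect  = lambda: [("sample comment", 1)] * 100
train    = lambda data: {"rows": len(data)}
evaluate = lambda model: min(0.5 + model["rows"] / 1000, 1.0)

model = run_cycle(collect, train, evaluate)
print(model["rows"])  # 300: it took two extra collection rounds to pass 0.8
```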
I can also do it through the CLI, or command-line interface. All three options are available to me, but obviously I'm going to choose whichever is easier. Now, if you'll notice, what I have highlighted in the slide is that I have a project where I right-clicked on it and selected Add, and because I went through the Model Builder setup, I now have that Machine Learning Model option available to me in the context menu when I right-click on a project.

So now that I have everything set up, let's jump into a demo, and let me get the right version of Visual Studio. All right, so to prep for the demo I created a very basic, very vanilla console application. As a matter of fact, all I did was select the console application template and give it a name of "ML demo," and you'll notice here that all it is is just the Program.cs, and the Program.cs is kept very simple. So, because I have all the prerequisites done, I'm going to right-click on the project, select Add, and now I have Machine Learning Model as one of the options to add. I'm going to accept the default name of MLModel1.mbconfig (there, it's highlighting the object for me), go ahead and click Add, and it'll take just a minute. And now it's presenting me with a user interface, all within Visual Studio, that's going to walk me through what type of model I need to build.

So it's asking me to select a scenario, and here are the various scenarios I can select: data classification, where it can classify tabular data; value prediction, for predicting a value, for example sales or trends over time; recommendations (whoops, I clicked on that one by accident, so let me go back). And if you'll notice, for each scenario it shows you where these items can be run: either locally, or in some cases locally or in Azure, depending on how much horsepower you need for running these applications. And we have natural language processing at the bottom, where there are a variety of different natural language processing
scenarios that it can perform. We're going to select text classification, and we can run that locally, so we'll go ahead and click that. It's giving me a brief overview of my development environment, and then we'll proceed to the next step.

Now it's asking me for data. I know the data is housed in this file, so let me go ahead and open that file separately so we can view it. All it is is basically a flat text file; it's tab-delimited and it contains two columns. There's the raw text, the comment left by folks that visited a local pizza restaurant, and then I'm identifying whether that comment is positive or negative: one for positive, zero for negative. And there are approximately a thousand lines in here of various comments. Okay, so I'm going to select that file. Now it's asking me which column is which: which one is the label and which one is the feature. The column to predict, the label, which is the output, is going to be column 1; the text column is going to be column 0. And now it's showing me a preview of the first 10 rows of the total 1,000 rows that I have in that file.

The next step is essentially to start training, and I'm going to go ahead and click this. This is expected to take about three minutes, and because of that, I have a demo that I created previously, and we'll jump into that now, after the train step. Now obviously, here you see in this project that training is completed; it took 190 seconds, and the best macro accuracy is 67%. Now it's time to evaluate it. So there are some comments that I can put in here: I can type in a comment like "did not like at all," and it chugs and tells me that it's 99% sure that this is a negative comment, or a zero value. I can also try another prediction and say "ample portions and good prices," and it responds back with 92% confidence that this is a positive score. And so, if I'm satisfied with that, I can move on to the next step; if not, I can go back, collect more data, retrain it, and give
it more time for training, so that it can go through different algorithms and identify what's the best fit. But let's assume that I'm satisfied with the results here. Now I'll move on to the next step, which is Consume, and you'll see that it's giving me some code snippets if I want to be able to execute this and utilize the model that was trained. I can do this in a couple of ways: I can add this as a console application to my solution, or I can add it as a web project to my solution; either one will work.

Now, if I go back to the demo: okay, this one is still training, so what I'll do is add this as a console application. You'll notice it says "add console app" again, because I already added it as console app 1, but I'm going to go ahead and add it again, right here, as a console application. And it's asking if I want to accept the default name for the project, MLModel1_ConsoleApp2, pardon me, and I'll go ahead and add that to the solution. It created another project in my solution; it's another console application, and it's kept very simple. It's utilizing the ML model that I just built in the first project, instantiating it, passing in a sample input, and then displaying the results at the bottom here. So I'll go ahead and run console app 2; let's make this my startup project, and we'll go ahead and run it. There's my console window, and I'll put that side by side with the Program.cs. So what this did was run a single prediction of "crust is not good": it called the model, it asked for a prediction with all the labels (and the labels being passed in come from this sample data), and then we get an output. The output is returned in a variable called sortedScoresWithLabel, and then we simply parse through and display those. And so, if we go to our output window, the sample that was passed in is "crust is not good."
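Boiled down to a toy, the supervised idea behind the demo looks like this: learn word-level sentiment from labeled comments, then score a new comment. The training comments beyond the two from the demo are invented, the scoring rule is a deliberately naive stand-in for the real trainers Model Builder tries, and Python is used only for brevity (the generated demo code is C#).

```python
# Toy supervised sentiment scorer in the spirit of the pizzeria demo.
# Each word's score is the average label (1 = positive, 0 = negative)
# of the comments it appeared in; a sentence is scored by averaging
# its known words. Training data is invented except the demo phrases.

training = [
    ("ample portions and good prices", 1),          # from the demo
    ("fresh toppings and great service", 1),        # invented
    ("did not like at all", 0),                     # from the demo
    ("service was slow and the crust was burnt", 0) # invented
]

def train(examples):
    table = {}
    for text, label in examples:
        for w in text.lower().split():
            table.setdefault(w, []).append(label)
    return {w: sum(ls) / len(ls) for w, ls in table.items()}

def predict(model, text):
    scores = [model[w] for w in text.lower().split() if w in model]
    confidence = sum(scores) / len(scores) if scores else 0.5
    return (1 if confidence >= 0.5 else 0), confidence

model = train(training)
label, confidence = predict(model, "crust is not good")
print(label)  # 0: scored negative, matching the demo's prediction
```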
The output value for that prediction is that it's negative, and here it shows us a score of 99.48, excuse me, 99.49 percent accuracy that this is a negative comment. So I was able to utilize Visual Studio, with the tools that I know, to create a model, and then, using all the features within the Model Builder, I was able to create a separate application just by clicking a button. It added that project in there, and now I can pass it sample data and it responds back with a confidence score as well as an output value. So I'm sure right now you're all standing and applauding about how marvelous this is and how easy it is to access, but I wanted to pause for a second and see if you have any questions or comments. Thank you for the applause, Kevin. I think it's a very cool feature: all within Visual Studio, all tools and language that I know, and it expedited the ability to train a model and use it, all in one-stop shopping. All right, very good, thank you. Okay, so if there are no questions then we'll continue on, and we can close the window and stop that project.

All right, so now we've seen what machine learning can do; now let's take it up a notch and move on to AI and more natural language understanding, and what other features are available if you go over into Azure. And let me share my screen with you. Before we do that, actually, let's talk about improving performance. As we saw, it had a score of 67% overall when I trained it on the thousand records. Now, if I want to improve performance, what are things that I can do? First, I can obviously provide more data, but not just additional data; it has to be clean, meaningful data, meaning it can't just be gobbledygook. I have to go through and make sure that there are no misspellings, and make sure that the data is labeled correctly, because this is supervised learning, so I have to give it data and be able to train it on that: data with context. So obviously this was a simple example, but if you look at things like home sales, obviously some of the things that affect the price of a home are the ZIP code, the number of bedrooms, and then of course the various
amenities: whether you have a swimming pool, a fenced-in backyard, the number of bathrooms, whether it's been recently remodeled. You also have other factors that can drive down the price: was there fire damage, mold, insect damage, a variety of other things? Is it in a flood zone? All these things could also impact the price negatively. So I can't just look at ZIP code alone and the price of the home; I have to give it context. What else is involved with that home, what else happened to that home, that may affect the price? That's what's meant by data with context.

Another way to improve performance is by something called cross-validation, which is basically a model evaluation technique. If I use cross-validation as I build the model, it can split the data into several partitions, then it trains different algorithms on those various data sets, and then it compares the results. This can be a very effective tool for training models with smaller data sets. And then of course another way to improve performance, or accuracy, is by trying different algorithms; by running the data through different algorithms, it can then select, or create, the best model for that data.

So, building on AI, excuse me: now that we have machine learning, we can build on it and go to the next level with AI, or artificial intelligence, and there are a variety of services available through Azure. These services essentially provide human interpretation, where I no longer have to speak in a rigid manner, where the sentence has to be structured in such a way, but rather I can speak fluently and naturally, as I would to another English speaker. It's used for building intelligent applications, so not only is it recognizing natural spoken language, but it can also see and understand and reason about a variety of different things that mimic human intelligence and reasoning. So, as you see, I am a C# developer.
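Stepping back to cross-validation for a moment, the partitioning it describes can be sketched as a k-fold split: divide the data into k partitions, train on k-1 of them, evaluate on the held-out one, and average the results. The train and score functions below are invented stand-ins, and Python is again just shorthand for what ML.NET exposes in C#.

```python
# k-fold cross-validation skeleton: each partition gets one turn as the
# held-out evaluation set while the rest serve as training data.

def k_fold_scores(data, k, train, score):
    folds = [data[i::k] for i in range(k)]  # k roughly equal partitions
    results = []
    for i in range(k):
        held_out = folds[i]
        train_set = [x for j, f in enumerate(folds) if j != i for x in f]
        model = train(train_set)
        results.append(score(model, held_out))
    return sum(results) / k                 # average metric across folds

# Stand-ins for illustration: the "model" just remembers a mean.
data  = list(range(20))
train = lambda rows: {"mean": sum(rows) / len(rows)}
score = lambda model, rows: 1.0             # pretend every fold scores 1.0
print(k_fold_scores(data, 5, train, score))  # 1.0
```

Because every record is used for evaluation exactly once, this squeezes a more reliable accuracy estimate out of a small data set than a single train/test split would.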
developer, and I used the capabilities that I have as far as running it in Visual Studio. I didn't have to get a PhD in data science or artificial intelligence to be able to use these features; the deep core has been abstracted away for me — the building of the neural networks and the deep learning — so now I'm a practitioner of the higher-level features of machine learning and AI.

The models are built using external portals. For example, for natural language there are portals outside of it that allow me to access Azure, and we'll see later, in another example where I'm utilizing a large language model, that I would go into the Azure AI Foundry, which sits outside the Azure portal. They work together, but I still have to go to the AI Foundry to use that large language model. Once everything is said and done and I have my model created, as we saw, I can deploy it in a variety of ways: it can be accessed as an API, or, as we saw, using the SDK within Visual Studio, I can use it within my console application.

So, looking at Azure AI services: essentially they provide a variety of different human senses, if you will, artificially. I have vision, decision, language, speech, and OpenAI, which incorporates a lot of these features within Azure AI services. More specifically, let's focus on Speech services, which offer a variety of capabilities such as speech to text, text to speech, translation, and speaker recognition. I'll be able to demonstrate the first three; however, the fourth one requires some additional permissions from Microsoft, because it involves personally identifiable data — meaning I'd be able to identify Sam from a crowd just by hearing his voice. That type of information is kept under more security, I should say: they need to know what you're going to use it for before they
grant you access to it.

And so with that, let's look at a demo of Speech services. Because this is available in Azure, first and foremost I need to create a Speech service. To do that, I simply go to one of my resource groups, type in "speech services," and there is the Azure service I need to create. Once I create it — and I won't do it here, because it's already been pre-created — I'll have the two identifying items I need: my key, as well as the region I'll be running it in. Those are critical, because I'm going to use them in my code.

If we look over into my Visual Studio, I now have another project called Speech Services, and I have the four different speech services available, all done through console applications. So let's look at speech to text. With speech to text, I kept it fairly simple: there is my subscription key, and here is my region. These two are kept in my config file; they're things I don't want to share publicly — they're tied to my subscription — and likewise, when you deploy this, you also want to keep them under lock and key. You don't want to give away the keys to the kingdom.

Essentially, this is a console application where I'm bringing in a couple of NuGet packages for the speech services. Within my Main method, I'm getting my subscription key and my region, instantiating my SpeechConfig class, and specifying those two items for the subscription and the region. Then I'm specifying that I'm going to use the default microphone, and it goes into an infinite loop: it asks "How can I help you?", I give it speech through the microphone, and it responds back with the text of those spoken words. Then it says "press Enter to continue," and the cycle repeats. Okay, so let's go ahead and run this — oh, sorry,
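The loop just described can be sketched roughly as follows. This is a sketch, not the speaker's actual code; it assumes the Microsoft.CognitiveServices.Speech NuGet package, and the key and region values are placeholders for what the talk keeps in a config file:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class SpeechToTextDemo
{
    static async Task Main()
    {
        // Placeholder credentials — in the demo these live in a config file,
        // never hard-coded or checked into source control.
        var config = SpeechConfig.FromSubscription("<your-key>", "<your-region>");

        // With no AudioConfig supplied, the recognizer uses the default microphone.
        using var recognizer = new SpeechRecognizer(config);

        while (true)
        {
            Console.WriteLine("How can I help you?");

            // Listens for a single utterance and returns the transcription.
            var result = await recognizer.RecognizeOnceAsync();

            if (result.Reason == ResultReason.RecognizedSpeech)
                Console.WriteLine($"Recognized: {result.Text}");
            else
                Console.WriteLine($"No speech recognized ({result.Reason}).");

            Console.WriteLine("Press Enter to continue...");
            Console.ReadLine();
        }
    }
}
```

The other three demos (text to speech, translation) follow the same pattern — build a config from key and region, construct the appropriate recognizer or synthesizer, and loop.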
Wrong one — so we want speech to text, and we want to select this project. "I would like a large pizza with sausage." And there you see it transcribes the words I had spoken into it. "I would like to order two large pizzas." As you can tell, I'm a pizza fan. A more practical example of this is someone who's visually impaired: you can set up a portal where they press a button, speak into the microphone, and it takes that speech, transcribes it, and then you can do further processing on it later — i.e., look for certain keywords, look for a part number they stated, a variety of different things. Or you can piggyback it with other applications, like a large language model, where you could actually have the user talk directly to the model.

The opposite of speech to text is text to speech, and that's done in a similar fashion: again, getting my subscription key and my region. Here it's different, because I have a variety of voices I can choose from, and as you can see, I experimented with a couple, commented them out, and eventually settled on the Irish voice, Connor, who's going to be responding. In this case I'll be entering the text, it gets passed to the speech synthesizer class, and it responds back with the audio for that text. So let's go ahead and run that. Enter text: "I would like to order part number 1123." Did that audio come through? "Yes, it did, yes — excellent." All right, very good. As you might have picked up, there was a little bit of an Irish accent, thanks to Connor and the selection I chose here. Okay, fairly straightforward — any questions on that?

Okay, moving on to the third one, which is my favorite: speech translation. Can you guess what we're going to be doing first? Right — just like in the other two demos: subscription key and region. Then here we're going to be targeting two different languages, Italian and French, and we're going to
be speaking in English — the US variety — and then translating it. It enters an infinite loop where it asks me to speak into the mic, translates, displays the text, and repeats. So go ahead and select speech translation and run that. "I love pizza with extra cheese." So there was no verbal translation, but it took the speech I entered verbally, translated it to text, and then translated that text into the two different languages, Italian and French. I'm not fluent in either, but judging by the looks of this, it looks like it's spot-on. This would be most useful if you have an international customer base, or clients who don't necessarily speak English — it's a good way for them to interact with your application. Okay, any questions?

"Maybe I missed it, but that is running up in Azure and not locally on your system, correct?" So the application is running locally — "Right, but the model is running on Azure?" Exactly correct. The step that I glossed over was basically this: looking at my Azure portal, I went into a resource group and typed in "speech services," and then you simply select Create. Once the service is created, you take your subscription key and your region and use them in your C# code. "All right, thanks." Sure. Other questions?

All right, so let's get back into our slides. Just as we saw with Speech services, there are also Vision services, where you can do a variety of different things. You can do object detection; you can analyze the content in images or do OCR on them. There's an article I wrote — and I'd be happy to share the link with you — where I scanned in my business card and was able to isolate all its various aspects: name, address, phone number. It was able to do OCR on the image and extract that text from it. You can also identify people,
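As a rough sketch of what an image-analysis call like the business-card example can look like from C# — this is an illustration, not the article's code, and it assumes the Azure.AI.Vision.ImageAnalysis NuGet package with placeholder endpoint, key, and image URL:

```csharp
using System;
using Azure;
using Azure.AI.Vision.ImageAnalysis;

class VisionSketch
{
    static void Main()
    {
        // Placeholder endpoint and key for an Azure AI Vision resource.
        var client = new ImageAnalysisClient(
            new Uri("https://<your-resource>.cognitiveservices.azure.com/"),
            new AzureKeyCredential("<your-key>"));

        // Request a caption plus OCR ("Read") text in a single call.
        ImageAnalysisResult result = client.Analyze(
            new Uri("https://example.com/business-card.jpg"),
            VisualFeatures.Caption | VisualFeatures.Read);

        Console.WriteLine($"Caption: {result.Caption.Text}");

        // OCR results come back as blocks of lines of text.
        foreach (var block in result.Read.Blocks)
            foreach (var line in block.Lines)
                Console.WriteLine($"OCR line: {line.Text}");
    }
}
```

The raw service response is JSON, which is what the face-detection and object-detection slides coming up show directly.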
but again, this comes with an application you need to submit to Microsoft to be able to do that. Then there's the Video Indexer, which is pretty powerful: you can analyze video and index the content in that video. There's Form Recognizer, where you basically submit a document and it extracts the text from it — whether it's a receipt or an invoice — and lets you populate a database with it directly. And then the Ink Recognizer, which is used for handwriting — actual cursive handwriting — so you can utilize that feature as well.

As far as face detection: after we submit the image, the reply comes back in the form of JSON identifying the face in the image. As we see here, it's showing a female, approximate age 23, the location of the face in the image and the coordinates where it's located, and it also gives some additional metadata about the image itself: the format, the height, and the width. Same thing with object detection: we can submit an image, and it responds with all the various objects it sees. In this case it sees a computer keyboard, it sees a kitchen appliance, and it sees an actual person sitting there — all in JSON format, giving you the coordinates of where these items are located within the image.

Lastly, I want to talk about the AI Foundry. Now that we've gotten to the point where, at the highest level, we're utilizing artificial intelligence, this is using large language models in the Foundry — literally hundreds of billions of nodes used to mimic human understanding and response. So with that, let's get into the Foundry, and we'll go over here. This is a demo I had created previously, but before we get into that, let's close that and go into another
project in the Foundry. These are basically a variety of different models that are available for me to use, and they perform a variety of different tasks. As we see here, I can filter on the various tasks they can perform — Q&A, text translation, classification, etc. — a rather lengthy list. As you see in the right-hand corner, there are over 1,800 different models, and this number is constantly evolving as more models are added. DeepSeek is the one most recently added, but we also see a variety of other GPT models — GPT-4o, 3.5, and a variety of other flavors. You can also search by the collection, or publisher, of the model: there are ones curated by Azure AI, by Microsoft, Meta, Cohere, Databricks, and so on.

Essentially, if we want to utilize one, we can go into the playground, and the playground is basically a sandbox for trying out your model. In this case I deployed GPT-3.5 Turbo, because it's a low-cost model that allows me to experiment. When it comes to utilizing a large language model, it has the capability where I can ask it to act the way I want it to act — I'm giving it a persona and telling it how to respond. These are the model instructions being passed in: I'm telling the model, "You are an AI assistant that helps people find information" — that's the default message. Now I'm going to add to it and say, "You are comical and friendly, but you're rude when the same question is asked twice." As you see, I get a gentle reminder that I need to select the Apply Changes button below, so I'll go ahead and do that. And notice I didn't structure that in any specific way — I just typed it as natural language.

Now that I'm ready to use my model, I'll ask it: "How many car makers are in the US?" I'll submit that, and it comes back in a very friendly way: "Ah, the age-old question! Well, my
friend, there are currently 12 major car manufacturers in the US," right? A very friendly response. Now let's go ahead and ask that same question again — and you see, now I'm getting a little bit of a spicy attitude: "Didn't I just answer that question?" And it ends with, "But seriously, try to remember next time." Okay, so this is kind of a fun thing to do, but more practically, instead of focusing on the attitude, I can say, "Keep all answers to exactly three sentences." And you notice that now I have a new chat session.

One of the things to consider in a large language model is the context of what you're referring to. I gave it a question, and the next question builds on that previous question — similarly, if I were talking to one of you and asked, "How many car manufacturers are there in the US?", you would list all the car manufacturers, and then I'd say, "Name just the electric manufacturers." You're not going to look at that second question alone, by itself; it relates back to the first question about car manufacturers. But now that I've applied the changes, it says a new chat session has started, and it's a different context I'm working in — as we see here, previous messages won't be used as context for new queries. I told it to keep all answers to exactly three sentences; I'll ask the same question again, and there we see it kept to three sentences. Then I'll add on to it — start each answer with the current date, "as of the latest date" — yeah, let's do that. Okay, I can go on and modify it further.

But more specifically, how can I use this in business? I have the ability to add in data, and for the sake of time I'm going to fast-forward a little bit. This is another demo I had created, where I imported product data for a hypothetical company — TrailWalker boots, an outdoor sporting goods company — and I
imported a file that is all about the TrailWalker boots. I put that in there, and as you see, I asked, "How much are the TrailWalker hiking shoes?" and it came back with the answer — $110 — along with the reference where it got that from: this file. It's a markdown file, and when I clicked on it within the playground, it showed me what that file is. Keeping in mind the context of the hiking boots, I can then ask, "What color is it?" and it says it's in black. "Is it comfortable?" Again, all this shows that it's working within the context of the initial question about the TrailWalker boots, and it pulls up the text relating to the file I'd entered.

Where this comes in handy is that I can now create my own copilot — my own assistant — for my own customer data and my own products, which I can then share with my customers. I no longer have to let them rely on Google, where they're just shuffling through a variety of different information; rather, they can ask pointed questions and get pointed answers, all by training my copilot on my data. Okay, any questions?

All right, so with that, that pretty much wraps up my presentation. If there are no questions for me, I'd like to ask you: what have you learned here today? What value can you take away from this meeting? I'd love to hear your responses, either verbally or in chat — any takeaways. I'd love to hear your feedback.

"I think, for me, the main takeaway is that it's easier than I thought. I think there has been a reluctance to tackle machine learning and AI because, oh, it must be so complicated — and you did a good job of showing that it's really not." Thank you, appreciate that. All right — yeah, Kevin, go ahead, please. "Yep, also from my perspective: I don't do a lot of data work in my professional job. I mostly work on front ends and, you know, a few back ends. So seeing how to integrate these models into the .NET infrastructure has been fun,
because I've seen tutorials online that say go download Ollama, or this software and that software, and use these tools — but to see it with the tools I actually use is fun."

Yeah — so Ollama and other local language models, you can certainly use those, but that involves another layer: you'd have to download another server onto your machine to execute those models. The Foundry takes care of that, in the sense that the model is now running in the cloud, in Azure. And even though it's called the Foundry and it sits outside the portal, it's actually connected to the portal, because everything I set up in the Foundry ties back to a resource group in the portal, which also ties back to my subscription. The Foundry gives me the playground capabilities, the ability to search through various models and see their benchmarks — not to mention that the list of models is constantly evolving — and in the end, I can actually deploy.

As we see here, all within the playground, I have the ability to set my prompts and give it instructions, select the model I want to deploy, add my own data, and also evaluate it here. When everything is said and done, I can select to deploy it as a web application. If I need to access it through an API call in my C# code, I can look at code snippets: there's my code snippet in Python, and if I want C#, that's also available, same with JavaScript. Essentially I'd just cut and paste the stub code — of course, I'd have to use the keys for my own account — but it's giving me stub code that can get me up and running. Sorry, that was a little bit of a mouthful; as you can tell, I'm very enthusiastic about these things. "Yeah, thanks." Oh, you're very welcome.

Okay, so to wrap it up: we talked about the building blocks — neural networks — how that is built into deep learning, how we built ML models on top of that, and how we eventually get to an
AI level. We talked about how we can incorporate that into our .NET project — as you saw in the first demo, how I can utilize AI services through C# code, all by simply setting up the Speech services. And then we took a look at utilizing large language models and the Foundry: how that can be deployed, and how we can use and customize it for our own customer base.

Some references: the ML.NET guide, which you can find on microsoft.com; in addition, there's the ML glossary for all the different terminology; and for improving machine learning performance, there's a nice article on how to improve performance and how to cleanse your data.

If you found this presentation interesting, or you think it might be useful, or you'd like to have a proof of concept done with your organization, I'd be happy to help — you can reach out through NIS Technologies. This is also available as a course for your team. And if you need to get in touch with me for any reason or any question — just because the presentation ends here doesn't mean the dialogue stops — I'd be more than happy to help you get a proof of concept up and running or demonstrate something to your team. You can find all my links on Linktree (Sam Nasr), but more specifically you can reach me via my NIS Technologies email. And if we're not connected on LinkedIn, let's do so — it's a great way to keep in touch and see the latest articles, and overall a great platform. Not to mention I have a few courses published there regarding AI, if you're interested in expanding your knowledge. And so with that, thank you for your time, thank you for attending, and I hope this was beneficial for you.

All right, thank you, Sam. I'm going to go ahead and redisplay that QR code for the eval, as I promised I would, so I'm going to grab the presenter mode for a moment. "Sure, please do." Okay, you should
be able to see the QR code — and Steve, if you can throw the actual URL into the chat, that would be helpful as well. "Just did." Okay, very good. Let me see if I can grab that — maybe I can paste it here — nope. Okay.

All right, so thank you for attending — we appreciate your attendance and your attention very much. Thank you, Sam, one last time. "Thank you." Any follow-up you wish to talk about can be put in the discussion section of the GLUGnet group on meetup.com. "And Steve, you said this will be open for 24 hours, correct?" Yes. Okay, fantastic — be sure to fill out the eval; Sam will appreciate it, and we will as well. Thank you very much. All right, thank you all — good night.

2025-03-02 21:47

