Ask Sage Overview

Hey everybody, this is Nic Chaillan. I'm the former Chief Software Officer for the Air Force and Space Force, and I'm also the founder of Ask Sage, bringing GPT to government teams and integrating it with custom datasets and live queries, to bring these capabilities to government teams and contractors, accelerate your work, and save you time. At the time of this recording we have about 3,400 government teams and 750 companies (financial institutions, defense industrial base companies, and government contractors) signed up for Ask Sage. We're so thankful to see such a massive volume of engagement in three months, and we know you have questions, so we wanted to share lots of insights today, pitfalls to avoid, and things to pay attention to. So let's get started!

All right, so the first step for you is to go to chat.asksage.ai and create an account. As you can see here, it's very simple: you go to the website and you register. Make sure you don't lie on the fields here, and if you do have a CAC or PIV, make sure it's inserted in your device when you do the registration. Effectively, you're going to put your last name, first name, company, email, and phone, and select your country. Make sure you read the terms and conditions, and then you subscribe. You're going to get a code in your email to validate your account; if you don't get the code, let us know and we can force-validate it. Just send us an email at support@asksage.ai and we'll be able to help you validate your account if you have any issue.

All right, so now that you have created that account, you can log in either with a CAC (simply click here, after registering first) or by entering your email and password. We'll also show you later how to activate multi-factor authentication if you don't have a CAC or PIV.

All right, so the first thing you see when you log in is this pop-up. We always tell people to read it because there's a lot of great insight in there; when you scroll down you're going to see all the new features we're adding and a lot of great comments, so check that out. Also watch those videos if you want to learn more about Ask Sage, and if you have issues you can always find our Discord community here, plus the support and sales emails.

All right, so first let's talk a little bit about the Ask Sage architecture. I know a lot of you have questions, and that's understandable. First, it was very important for us to be model agnostic. We support OpenAI GPT-3.5, GPT-4, DaVinci, and GPT-4 32k, but we also support Cohere, Dolly v2, Google Bard, and any other large language model we can use. The beauty there is that the training we do on top of those models is also agnostic: you can train data and then ask questions to any of these models, and it will have the same insights we taught it. That keeps you from getting locked into any one large language model, so you can try different things and see what sticks.

Ask Sage is hosted on Azure Government at Impact Level 5 (IL5) for CUI; that's where we store all of our data and datasets, and that's the multi-tenant stack. We also have the ability to host a dedicated tenant, and we'll talk about that later. We also have our dedicated enclave with Azure OpenAI, with a dedicated endpoint in the FedRAMP High commercial regions, where we brought all the controls needed to make it capable of doing Impact Level 5 work. So while the Ask Sage stack is on Azure Government, the API for the OpenAI models is in the FedRAMP High commercial regions, and thanks to our partnership with Microsoft we have specific settings for a fire-and-forget API. Effectively, it's not logging any information, humans have no access to the prompts and responses, and the data is never used to train the model. When you ask a question to the bot using OpenAI, it's a fire-and-forget API, so it does not remember what it just said; that's the beauty of what we built, and it's what makes us capable of doing Impact Level 5 work. The way we make it remember what you asked is by passing the history of the chat again every time we have a conversation with the API.

As you can see, Ask Sage has many datasets that we ingest and share across users, but your custom datasets are only visible to you. If you train the bot on specific topics and create datasets, they are only visible to you and to the people you decide to share those datasets with (and that has to be done through us, for security reasons). So when you train the bot on a dataset, it is not visible to other users; effectively, it's almost like having your own dedicated bot and dataset experience on top of those large language models like OpenAI.
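To make the fire-and-forget behavior concrete, here is a minimal sketch of what a client in this setup has to do: since the model retains nothing between calls, the client keeps the conversation history itself and re-sends the last few messages with every request. The endpoint, payload shape, and field names below are hypothetical stand-ins, not the real Ask Sage API.

```python
import requests

API_URL = "https://example.invalid/v1/chat"  # hypothetical endpoint, not the real Ask Sage API
API_KEY = "your-api-key"

history = []  # the client, not the model, owns the conversation memory

def ask(prompt: str, max_messages: int = 10) -> str:
    """Send a prompt plus the last few exchanges; the model itself is stateless."""
    messages = history[-max_messages:] + [{"role": "user", "content": prompt}]
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "gpt-3.5", "messages": messages},  # hypothetical payload shape
        timeout=60,
    )
    resp.raise_for_status()
    answer = resp.json()["message"]  # hypothetical response field
    # Save both sides so the next call can replay them; nothing persists server-side.
    history.append({"role": "user", "content": prompt})
    history.append({"role": "assistant", "content": answer})
    return answer
```

This is also why only a handful of previous messages travel with each query: everything re-sent counts against the model's token limit.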
Obviously, I care very much about security, and data centricity is essential. We're built on top of zero trust, and we have this model of labeling data: each dataset has a label that can then be shared with other users, but by default datasets are only visible to you. That enables you to decide how you assign data and how you cut it into different datasets by creating your own data labels, and we'll show you how to do that.

All right, so why did we create Sage? Well, like everybody, we started playing with GPT back in October or November, using it to write our video scripts. That was great; I became kind of a pretty face reading the video scripts and doing the videos, and that was exciting. But what really got us excited is when we started to use it for mission work, and let me show you exactly the moment I realized this was way more than just some gimmick capability writing content for us. We had this dataset of Chinese resumes of CCP teams; it's unclassified and open source, and each record has about 150 fields of information. What was pretty mind-boggling is the volume of information: if you put one of these into Google Translate, you get 150 fields back, and your brain cannot really understand who this person is; it's just too much information. So here's what I did: this is a JSON document in Chinese (nothing is in English other than the field names), and I simply asked the bot to translate and summarize who this person is. GPT gave us a clear rundown of the person in plain English, and that's really where I decided to make this a company, seeing the immense value. A human would not be able to do this simply by running things through Google Translate, and as you can see, now we can pretty much picture who this person is. That's why we created Ask Sage: we realized it was way more than just writing simple things, and that it was capable of doing pretty incredible things to augment and empower people to get things done and bring tangible outcomes to the warfighters.

For me it's actually pretty interesting: I don't Google anymore. This is re-augmenting our time by about 80% a day on average; for developers specifically, we estimate a 10x ratio, and for teams responding to RFPs, for example, we've seen them augment their volume by three to seven x on average, which is pretty incredible. If you look at GPT and the value it brings to the table, it's almost like a company-as-a-service capability; what I mean by that is it's able to do pretty much anything. Look at Sage itself: the logo was created by the bot, about 90% of the UI was created by the bot, 90% of the backend, 100% of the SQL, 100% of the CI/CD DevSecOps pipeline, and 100% of the legal documents.
If you look at all the capabilities we built, really everything was orchestrated and empowered through the use of Ask Sage to improve itself, almost like an exponential capability.

Now obviously a lot of people are worried about GPT; there are a lot of issues and a lot of challenges. Is it perfect? No, obviously not. But can you drastically improve your velocity and your outcomes? No doubt. And we're just getting started: things are improving every day, it's kind of mind-boggling to see all the research coming out from scientists and data scientists on the subject, and it's slowly but surely becoming more safe, but also more accurate and factual.

All right, so let's look at the UI a little bit. As you can see, we have different conversations, and it's easy to rename them and to delete former chats. Ideally, you want to create a new chat every time you change topic or swap between personas and models; we'll talk about that today. You can edit, you can delete, and of course the conversation is kept. But do keep in mind that because of the token limits of the models, we only pass about five to ten of the previous messages with each query, so it's not always tracking the entirety of your chat. Try to get things done in as few prompts as you can, ideally one or two; but don't stop following up and asking questions. It's always better to get to outcomes in a small number of questions.

All right, so what is a token? One token is about four English characters, but that ratio changes with code, Chinese, and other languages. An easy way to see how many tokens you just entered: if I write "this is a test," you'll see at the bottom right that it shows 18 tokens of text. So this is a good way for you to estimate your token volume.
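That four-characters-per-token rule of thumb is easy to apply in code too. This is only an estimate (real tokenizers, such as OpenAI's tiktoken, count differently, especially for code and non-English text, and a UI may also count prompt overhead); a minimal sketch:

```python
def estimate_tokens(text: str) -> int:
    # Heuristic from the video: ~4 English characters per token.
    # Real tokenizers (e.g. tiktoken for OpenAI models) give exact counts
    # and diverge significantly on code, Chinese, and other languages.
    return max(1, round(len(text) / 4))

print(estimate_tokens("this is a test"))  # ~4 by the heuristic; a UI total may include overhead
```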
All right, so let's start with a simple query. We're going to ask it who I am and see what kind of answer we get, just to show you how this works. Of course, we trained the bot on who I was, so it has all that insight. What's interesting is that we get references, so we know where the answer came from, and we get follow-up questions, so you could ask something like "What is the DoD Enterprise DevSecOps initiative that Nic Chaillan co-led?" and it will give us additional insights and additional references for that question as well. This is a very easy way to start asking questions. Like we talked about, always start a new chat if you change topics: you can type the /new command to create a new chat, or clear the current chat with /clear.

All right, now let's look at a more interesting example: writing some code. We're going to ask it to write the code for a Kubernetes nginx Ingress that leverages mutual TLS authentication to authenticate users with a Common Access Card, and let's see what it gives us.

All right, so it just gave us the code for that Kubernetes Ingress, and it did add the verify-depth and verify-client settings, which is exactly how you would activate this, and the TLS secret would obviously be passing the root CA with all the DoD certificates, which is exactly how we actually built the CAC authentication feature in Sage for DoD and government. Pretty cool. As you can see, it also gives us some additional information. You can copy the code, and there's a button to copy the entire block, but you also have a thumbs-down button to notify us when there is a bug. Don't use it when the answer is merely wrong (we can't help that), but when you see a bug and there's an issue, just click the thumbs-down button to help us fix it.

All right, so first let's look a little bit at the different models. Like I said, we are model agnostic, so by the time you're watching this video we may have new models, but at least for the models we have today, you'll want to know which one to use. They have different price points: you're going to pay more if you use GPT-4 or GPT-4 32k than if you were using GPT-3.5, for example. When you buy an account with Ask Sage, you get DaVinci tokens, which is the middle-tier price point, but these tokens can be exchanged, just like a currency, into any of these models. You buy 500,000 tokens with the 30-dollars-a-month account, but keep in mind this gives you five million tokens with GPT-3.5, which is 10 times cheaper than DaVinci. If you were to use GPT-4, keep in mind that GPT-4 is about five times more expensive than DaVinci, or 50 times more expensive than GPT-3.5, so you're getting only a hundred thousand tokens on the paid subscription. GPT-4 32k is nine times more expensive than DaVinci, or 90 times more than GPT-3.5 (roughly 55,000 tokens). So you need to use the right model for the right job. DaVinci is going to be less biased and able to do more things, so when you hear a model tell you "I can't do X, Y, and Z," clear your chat, start a new one, and try the same question with DaVinci; it should be able to do it. Cohere is another commercial model; it's quite limited, but it's an option you can use, and it's the same price as DaVinci.

When it comes to tokens, keep in mind you're paying not just for the question you ask, but also for the data we pass to the bot (datasets we ingested or insights we have) and for the reply you get back. So you're paying tokens for all of the above, and we can only estimate what you're typing; we don't know how much data we're going to pass along with it, and we don't know how long the bot's response will be, so it's impossible for us to predict how many tokens one question will consume. But at least at the bottom right, if I copy this text and paste it into the box, it will tell us exactly that it's 169 tokens.

All right, so when you sign up for a free account, you get 25,000 tokens and you don't get all of the plugins we have. If you want a paid account with the 500,000 tokens and higher limits, simply click "Become a Sage customer today" and you'll be redirected to our Stripe payment page, where you can pay with a credit card. Government teams can use the Government Purchase Card for up to ten thousand dollars, which covers up to 27 users per team, so that's very easy to do directly there. And reach out to us if you want to do bulk purchases; you don't have to enter a credit card for each account, we can handle that for you in the back end.

All right, so let's look at one of the biggest impediments, which is called hallucination. It's when the bot makes stuff up, creating text and information that simply isn't accurate. Let's look at an example and deliberately create a hallucination. I'm going to use the DaVinci model, because it's more likely to hallucinate, and ask something as simple as "Who is Austin Bryan from Ask Sage?" with a request to provide links with more information. The premise is wrong, because Austin Bryan works for Defense Unicorns, not Ask Sage, so we're misleading the model into saying he works for Ask Sage, and that's going to make it hallucinate. What's interesting is that it gave us links, but the links don't work; they're hallucinations too. So Sage tells us: hey, this might be a sign that the answer is a hallucination. That's something we built at Ask Sage to try to mitigate these hallucination issues.
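That kind of link check is simple to picture in code: extract the URLs from a response and probe them; links that don't resolve are a strong hint that the answer was made up. A sketch of the general idea, not Ask Sage's actual implementation:

```python
import re
import requests

URL_PATTERN = re.compile(r"https?://[^\s)>\"']+")

def find_dead_links(answer: str) -> list[str]:
    """Return URLs in a model answer that don't resolve; dead links often signal hallucination."""
    dead = []
    for url in URL_PATTERN.findall(answer):
        try:
            # HEAD is cheap; some servers reject it, so fall back to GET on 405.
            resp = requests.head(url, allow_redirects=True, timeout=10)
            if resp.status_code == 405:
                resp = requests.get(url, stream=True, timeout=10)
            if resp.status_code >= 400:
                dead.append(url)
        except requests.RequestException:
            dead.append(url)
    return dead

answer = "You can read more at https://example.invalid/team ..."  # made-up sample answer
for url in find_dead_links(answer):
    print(f"Possible hallucination, link does not resolve: {url}")
```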

Now let's ask the same question on the GPT-3.5 model and compare what happens. See, it's interesting: here it says it's not sure, but that you can find more information on their website, and it gives the wrong page; it's not .io, it is .ai. So the link is wrong, and it knows it may be hallucinating, but at least, as you can see, GPT-3.5 is not making things up by claiming Austin Bryan is the CTO of Ask Sage. To be clear, the bot is much less likely to hallucinate about something it knows, and that's why the more we train it and the more data we give it, the less likely hallucinations are to happen. You'll also see cases where Ask Sage answers questions by making stuff up, like if you ask it to send an email to somebody: it will say it did, but it didn't. It's not able to send emails; Ask Sage does not support that today, so it's a lie. Ask Sage is a text-based chat, and we're working on a lot of different plugins, so I'm not saying we'll never have an email plugin in the future, but right now that's not a capability we have. As always, trust but verify: look at the answers you're getting and make sure they actually make sense.

All right, so we have three types of memory: short-term, long-term, and real-time. Short-term memory is your chat with the bot; that's just the current chat and its history. Long-term memory is what you store in datasets in our vector database on Azure Gov at Impact Level 5 (up to CUI right now); that's good for things that don't change a lot, like documents and policies that might change once a month, every six months, or once a year. And then there's real-time data, where you tap into APIs, data lakes, and data warehouses, pulling live data at the time of the query. Think of the weather: if you ask about the weather, you don't want yesterday's weather, you want today's. A good example is the plugin we made for METARs using FAA data. If I ask "What is the METAR of KIAD, in plain English?", it uses the FAA METAR API to pull the coded reference you see here, and yet it translates it into plain English because I asked it to. So it gave me all the information in plain English; that's an example of a plugin that pulls from real-time sources to get the information you need and then uses the bot's capabilities to read it back to the user in plain English.
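A real-time plugin like that boils down to two steps: fetch fresh data from an API at query time, then hand it to the model with instructions to explain it. Here is a minimal sketch of the pattern; aviationweather.gov is one public source of METAR data (endpoint details may vary), and ask() is the hypothetical client sketched earlier, so none of this is Ask Sage's actual plugin code:

```python
import requests

def metar_in_plain_english(station: str = "KIAD") -> str:
    # Step 1: pull the raw, coded METAR at query time so the answer is current.
    raw = requests.get(
        "https://aviationweather.gov/api/data/metar",
        params={"ids": station, "format": "raw"},
        timeout=10,
    ).text.strip()

    # Step 2: let the model do what it's good at: translating the coded
    # observation into plain English for the user.
    return ask(f"Translate this METAR into plain English for a non-pilot:\n{raw}")

print(metar_in_plain_english("KIAD"))
```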
So, we talked about token consumption: you're paying for the question you ask, for the data we pass to the model (whether it's a real-time API like here, or a dataset we ingested into the vector database), and for the answer you get back. Keep in mind that if you don't need to pass datasets, you can simply set the datasets drop-down menu to "none"; that way the bot won't use any datasets from our vector database, and you're not paying for tokens you don't need.

All right, so prompt engineering is going to become the most important skill for most people, particularly right now, so watch our curated YouTube playlist of prompt engineering videos to learn how to ask the bot a question. Tone matters; you can use different words like "concise," "detailed," "summarize," "extract." Verbs matter; how you phrase things matters. Try to get to your desired outcome in one prompt, two maximum. Don't chit-chat with the bot: that's not the most efficient way to get where you want to be, and the bot may forget the earlier context along the way. And if you can't achieve something, most likely blame yourself; so far I have yet to find the limit of GPT, so it's all about prompt engineering and reflecting on your prompts. For that, we actually created a persona to help you, called the Prompt Engineer persona. I'll use an example: we have a prompt here that was written to extract medications, dates, and illnesses, and (always clearing the chat when I change topic) I ask the Prompt Engineer persona, "Help me improve this prompt to gather medications and illnesses from a veteran's medical record." Let's see how the bot improves it and what kind of questions it asks us, which is always an interesting exercise. It asks: What's the purpose of the summaries, research or legal? Can you provide more information about the medical reports; are they from a specific time period? I answer that this is to file medical claims, that we need to collect all medications, dates, and illnesses, and that the reports cover the lifetime of the veteran. It then keeps iterating and gives me an updated prompt I can use to get better outcomes. So use the Prompt Engineer persona to improve your questions when you're struggling to get things done.

All right, so a lot of people ask us about use cases, and there are so many; it's pretty much unlimited. We have people on the government side in acquisition writing RFPs and RFIs with Sage; people on the contractor side using Sage to respond to RFPs; people grading bids, categorizing and labeling data (it's very good at labeling content), coding, translating, reviewing code, commenting code, using DevSecOps engineering principles to write YAML, doing summarization and sentiment analysis, and so on. The sky is really the limit. Let me show you how I wrote one of my own bid answers with Ask Sage. This was a bid from Tradewind, a great DoD CDAO program, and all I did, which is pretty incredible, was copy and paste the bid. Here's my prompt: I start a new chat, pick the Contracting Officer persona, and use the GPT-4 32k model. The bot is already aware of Sage, but it's always good to give it a bit more context, so I wrote that I am responding as Ask Sage to this bid from the government, that I'm the CEO of Ask Sage, and a couple of sentences of context about what Ask Sage is, based on the RFP and what I thought the product and company would bring to the table. Then I pasted the government RFP information (the entire PDF, no formatting, nothing), wrote "end of context," and gave the action: make sure to follow all their requirements so the paper will be graded as acceptable by the JFAC, follow their proposed order, answer all the required questions, and follow their specific guidance, namely the two-page, 10,000-character detailed discovery paper showing how Ask Sage meets the RFP requirements. It gave me an essentially perfect draft; I tweaked a few paragraphs, and something that would have taken me a day or two took me 37 minutes.
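That bid prompt follows a reusable shape: persona, your context, the source document, an explicit end-of-context marker, then the action and format constraints. A minimal sketch of that structure as a template (the wording is illustrative, not the exact prompt from the video):

```python
# Illustrative template mirroring the structure described above.
BID_PROMPT = """Persona: Contracting Officer.

Context: I am the CEO of Ask Sage, responding to this government bid.
{company_context}

Government RFP (pasted verbatim):
{rfp_text}

End of context.

Action: Follow all requirements so the paper is graded acceptable.
Answer every required question, in the proposed order.
Format: a two-page, 10,000-character detailed discovery paper showing
how Ask Sage meets the RFP requirements."""

prompt = BID_PROMPT.format(
    company_context="A few sentences about the product, tailored to the RFP.",
    rfp_text="<entire RFP PDF text pasted here>",
)
```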
All right, another field here is the temperature, and it's important: it lets you control the level of randomness in the generated text. If you want to stick to facts, keep the temperature down at zero; if you want the bot to be more imaginative, increase it to 0.5, maybe all the way to one. But for factual work, stay at zero.

The live query option is also essential. As you probably know, large language models are trained up to a specific date and don't know what happened after that. So if I ask who the governor of Arkansas is, the answer will be wrong: the model is stuck in time and thinks it's Asa Hutchinson. But if I clear the chat (don't forget to clear the chat between queries) and run it with live queries, it pulls from Bing and Google and gives the right answer, because it has the latest information. So live queries are always worth using when the question is time-sensitive and you want current information.

All right, now let's look at personas. We have a lot of options here, going from an accountant to a contracting officer, a decoration writer, an electrical engineer, a DevSecOps engineer, a legal assistant (of course, read our terms and conditions; always validate legal and medical information with a professional), all the way to a program manager and an officer performance report writer. Personas help you customize the tone of the bot and its knowledge, but also sometimes the formatting of the answers. If you need a specific format, we can create customized personas for you, perhaps following a specific template your responses must use; that opens the door to a lot of possibilities, and custom personas can be created on a per-user basis as well. And don't forget to clear the chat between personas and between models, otherwise the bot might get confused.

Now let's look at data ingestion. First of all, we have a lot of parsers: PDF, HTML, YouTube video subtitles, structured and unstructured data, you name it. Some parsers are not directly accessible to you, so you may have to reach out to us if you have questions about structured and unstructured data ingestion; we can tap into APIs and do a lot of different things. We're bringing a lot of plugins to life for our paid users, including PDF, Word, and PowerPoint parsers, and plugins that let you visualize content, train it into specific datasets, and even summarize it if it's too long.

As an example, we have this import chain for plain-text content. Since the content could be hundreds of pages and we are limited in tokens, we cut it into chunks, train the chunks into the dataset, and then also offer to summarize the content and train the summaries into the dataset as well, so there are different versions of the content to query for better results. Take a book as an example: if you try to ingest the whole book at once, it won't work; it's too many tokens. So we cut it into chapters, summarize each chapter, and then summarize the summaries to get a summary of the whole book. That way the whole book is ingested: if you have a precise question about one chapter, it taps into that chapter, and if you have a broad question, it taps into the summarized versions. This plugin helps you do all of that.
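Here is a compact sketch of that chunk-then-summarize pipeline. The train_into_dataset() and summarize() helpers are hypothetical names standing in for the plugin internals, and the 500-token chunk limit comes up again below:

```python
def chunk_text(text: str, max_tokens: int = 500) -> list[str]:
    # ~4 characters per token, so cap chunks at roughly max_tokens * 4 characters.
    max_chars = max_tokens * 4
    chunks, current = [], ""
    for word in text.split():
        if current and len(current) + len(word) + 1 > max_chars:
            chunks.append(current)
            current = word
        else:
            current = f"{current} {word}".strip()
    if current:
        chunks.append(current)
    return chunks

def ingest(document: str, dataset: str) -> None:
    chunks = chunk_text(document)
    for chunk in chunks:
        train_into_dataset(chunk, dataset)       # hypothetical helper: store chunk in the vector DB
    # Summaries give the dataset a "zoomed out" version for broad questions.
    summaries = [summarize(c) for c in chunks]   # hypothetical helper: model-generated summary
    overall = summarize(" ".join(summaries))     # summary of summaries = whole-document view
    for s in summaries + [overall]:
        train_into_dataset(s, dataset)
```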
Now, another interesting example: let's say you ingest a database of resumes, 4,000 of them. The way we pass data to the model, we only pass maybe four to five results from the vector database, based on the query you type; it does not go through all four thousand entries. So if you ask "Who is Mr. X?", that works pretty well, because it finds the closest results to Mr. X and returns that information, and it's very likely to be correct if it's in the database. But if you ask "How many of these people know how to code in Python?", it would only see the top four results, so the answer would be wrong, because there may be many more developers who write Python. The way to ingest this instead would be with categorization by programming language, so there would be separate entries for those categories of results. Or you could connect to a database, an Elastic stack or a Postgres database for example, and the bot could convert the question into a SQL query and get the results back that way.

Many people think they need a lot of examples to train the bot; that's actually not true. You only want one or two: train the right examples, the right information for what you're trying to achieve, giving the bot the context it needs. Don't give it a lot; give it just what you need, because you don't want it to get confused. Remember, we only pass the top four results, so too much information doesn't help. We also truncate those top four results to 500 tokens per result, so don't train results longer than 500 tokens; cut them down, summarize them, or split them into multiple pieces. That's what we automate for you inside the summarize and data-ingest plugins.

Now, what's also pretty exciting with these plugins: we have an admin plugin here (which you won't be able to see) connected to our Sage database, and what's pretty amazing is that I can ask questions in plain English and get results in real time. Just to show you what that looks like: "How many users have signed up in the month of April 2023?", and it gave me the answer right here. It converts the question from plain English into a SQL query. What's interesting is that sometimes the query is wrong and we get an error, so we tell the bot to self-reflect and improve its own SQL query by giving it the error; it fixes the query behind the scenes, you never see any of this, and it comes back with the right answer. This is really a game-changing capability. And as you can see, if I ask "Show me a table with the top five users who consumed the most total tokens in April 2023," it knows it needs to return a table, so it renders one; and when I ask it to add first name and email, it automatically adds those columns. That's a good example of a plugin tapping into a live database to pull results and format them the right way for the user, to be as efficient as possible.
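That self-correction loop is simple to picture in code: generate SQL from the question, run it, and if the database throws an error, feed the error back to the model and try again. A minimal sketch, assuming the hypothetical ask() client from earlier and a SQLite connection standing in for the real database:

```python
import sqlite3

def query_in_plain_english(question: str, conn: sqlite3.Connection, max_retries: int = 3):
    sql = ask(f"Write a single SQL query (SQLite dialect) answering: {question}")
    for _ in range(max_retries):
        try:
            return conn.execute(sql).fetchall()
        except sqlite3.Error as err:
            # Self-reflection: show the model its own query and the error it caused.
            sql = ask(
                f"This SQL query failed.\nQuery: {sql}\nError: {err}\n"
                "Return only the corrected SQL query."
            )
    raise RuntimeError("Could not produce a working query")
```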
All right, so let's take a look at the datasets. Like we talked about, by default it selects all the datasets, but you can also pick "none" if you're not using any of the trained datasets and just want the default large language model results; set it to none so you're not paying for those tokens. We have ingested a lot of information from acquisition.gov, DoD, and the Air Force, plus some of my own content. Then you can create your own datasets, which opens the door to your own customized content and lets you ingest your own data. You decide who you share those datasets with (you have to reach out to us to share one with another user on your team, for example, so they can see it), and you decide how you cut your data and which datasets you ingest it into, just like labels.

It's very simple to create a dataset; let me show you how easy it is. We type /add-dataset Nic-Video, and the dataset is created. Now I'll do a simple training just to show you (and keep in mind we have a full API for paid users; reach out to us for full access). If I now want to train that Nic Chaillan has a dog, that the dog's name is Monk, and that he is a French Bulldog, I simply train this, and it took 21 tokens. If I then ask "Does Nic Chaillan have a dog?", yes, it says a French Bulldog named Monk. So now it knows, and it's as simple as this /train command. Of course you can ingest much larger documents; keep in mind each chunk has to be under 500 tokens, and we cut that for you with the data-ingest plugins, so you can train via a plugin, via the API, or just with the command line here, and you see how simple it is. Now, if I want to look at the data we ingested, I can find it in my dataset by scrolling, and I see the training I just did. If I want to delete it, all I have to do is take the ID and delete it: "this item has been successfully deleted." And now if I clear my chat and ask whether Nic has a dog, it says it's not sure.
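For paid users doing this at scale, that same train/query/delete cycle is what the API exposes. A sketch of what it might look like from Python; the endpoint paths and payloads here are hypothetical stand-ins, not the documented Ask Sage API (reach out to them for the real reference):

```python
import requests

BASE = "https://example.invalid/api"  # hypothetical base URL, not the real Ask Sage API
HEADERS = {"Authorization": "Bearer your-api-key"}

# Train a fact into a named dataset (each chunk must stay under 500 tokens).
requests.post(f"{BASE}/train", headers=HEADERS, json={
    "dataset": "Nic-Video",
    "content": "Nic Chaillan has a dog named Monk, a French Bulldog.",
}, timeout=30)

# Query against that dataset.
resp = requests.post(f"{BASE}/query", headers=HEADERS, json={
    "dataset": "Nic-Video",
    "prompt": "Does Nic Chaillan have a dog?",
}, timeout=60)
print(resp.json())

# Delete a trained item by its ID, as in the UI demo ("123" is a placeholder ID).
requests.delete(f"{BASE}/train/123", headers=HEADERS, timeout=30)
```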
So, like we talked about, when you create an account you don't have access to all the paid features. If you pay 30 bucks a month per user, you get 500,000 tokens for querying the DaVinci model, plus 100,000 tokens for training. There's also a 50-dollar plan for one million DaVinci tokens and a 90-dollar plan for two million tokens per user per month, which gives you more capacity if you're doing bids and larger documents; reach out to us to upgrade to those accounts.

All right, so let's take a look at the account settings. I can see my first name, last name, and company, and set my phone number, but more importantly I can see how many query tokens I have, how many I've consumed this month, and how many training tokens I have. I can see which datasets I have access to, and here is where I can click to activate MFA; it uses a QR code with Microsoft Authenticator to register. You can also see the button to register your CAC with your account, so your PIV or CAC works with your account if you did not register with a CAC in the device. If it says "CAC not found," that means your local proxy is blocking your CAC, and you need to open a local IT ticket with your support team asking them to whitelist the CAC pass-through for the domain *.asksage.ai, so the CAC pass-through can flow to our website; otherwise it's being blocked by your proxy.

All right, so another feature we built is prompt templates. Here you'll find predefined prompt templates for different topics, for acquisition and all the different personas, and we can obviously add more, but these are examples of how to do different things. What's even more exciting is that you have private prompts: you can store your prompts and create new ones to reuse, so you don't have to remember how you asked a question. It's as simple as writing a title and description, entering the prompt, and picking the persona. You can also share a prompt with others if you want people to benefit from your research; just mark it public and anyone can see it. That really helps you reuse prompts across use cases.

Now, another amazing set of features is the plugins we've been building, and we're releasing more and more, so by the time you watch this video there will be others. Here you see our METAR plugin. We released this amazing Git repo plugin, which scans a Git repo you give it, looks at every file for improvements in performance, quality, and security, and creates a merge request on a new branch with all the proposed changes for you to review. It's kind of a free audit, it works in pretty much any programming language, and it's a game changer; not every change will be good, but it gives you a lot of great insight, so try it out. We have commenting code; emotions, to pull emotions from content; evaluating code; and the import chain, which lets you import text (you copy and paste a PDF or any document, and it imports it, summarizes it, and trains it into a dataset of your choice). We have this amazing PowerPoint generator: you ask the bot to write a PowerPoint of, say, five slides about X, Y, and Z, and it generates the Python code to create the PowerPoint; we have a Python sandbox that runs the code, gets the PowerPoint back, and hands it to the user seamlessly. It's kind of mind-boggling. We have the sam.gov search, which helps you search for bids on sam.gov; split text, which cuts text into chunks; summarize, which takes a long piece of text (could be hundreds of pages), cuts it into summary chunks, and then summarizes the summaries; a medical version of the same concept; summarize website, which is also awesome and lets you paste a URL and get a summary of what's on the page; and train content into dataset, which is also used by the import chain but ingests directly into a dataset without cutting or summarizing first. So use the import chain for long text, and only use train content into dataset for text that's already short and to the point.
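As an illustration of what the PowerPoint generator runs under the hood, here is the kind of script the bot might emit into the sandbox; a minimal sketch using the python-pptx library, with made-up slide content:

```python
from pptx import Presentation

# Example generated deck content; in the real flow the model writes this from your prompt.
slides = [
    ("Ask Sage Overview", "Bringing GPT to government teams"),
    ("Architecture", "Model-agnostic, hosted on Azure Gov at IL5"),
    ("Plugins", "METAR, Git repo audit, sam.gov search, and more"),
]

prs = Presentation()
layout = prs.slide_layouts[1]  # built-in "Title and Content" layout
for title, body in slides:
    slide = prs.slides.add_slide(layout)
    slide.shapes.title.text = title
    slide.placeholders[1].text = body
prs.save("deck.pptx")  # the sandbox would return this file to the user
```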
The beauty of plugins is also the ability to do agents and complex chaining, and here's a good example. We're going to take a long piece of text, an article I wrote on LinkedIn recently about AI, and use this capability to cut it into chunks and ingest it into a dataset. I copy and paste my article right here, leave the defaults for the summarization and the cutting, and choose to train it into my Nic-Video dataset; it pre-populates my prompt, and I just click send. First, it splits the text into chunks, so I have chunks one, two, three, and four ready to go. It asks whether I want to train each chunk into the video dataset; I say yes, and it trains and ingests each chunk, which average around 340 to 350 tokens, comfortably under the 500-token limit. Then it asks: do I also want to summarize the original content? I say yes, so it takes the long text and summarizes it, giving me back a summarized version. A little hint: when you use a plugin, make sure you pick the model you want, because it will use the model you specify; use GPT-3.5 if you don't need something else, or GPT-4, but be careful to use the right model when you run your plugins, since the plugin adapts to the token limit of the model. So here I got my summary: four pieces of summary of my article in much shorter chunks. Now I can decide whether to also train the summaries into my dataset, and I say yes, so my dataset has now ingested another 180, 150, 90, and 215 tokens. The summarized version is now in my training too, so depending on the question I ask, it will pull the best results from this article I just ingested. As you can see, the chain cut the text into chunks, trained the chunks, summarized the article, and trained the summarized chunks into the dataset, all seamlessly, all with one agent plugin. We can build these for you to automate a day of work, or a whole set of steps, stages, conditions, and decision trees.

Another good example we built is the go/no-go flying decision with the Air Force. It taps into 20 APIs from the FAA and the Air Force, gathering all the icing and weather data to help a pilot make a flying decision. That shows the beauty of the decision tree: aggregating data from different sources into a single place in Sage to help make the right decision. It's another great example of a plugin agent that can be a game changer.

All right, so we talked about this: we have the multi-tenant stack on chat.asksage.ai, with thousands of people on it, but we also have the ability to host a dedicated enclave for you on your own Azure Gov enclave, so reach out to us for that. It's a good fit if you're scaling up, already have a lot of users on the multi-tenant side, and want a dedicated enclave, maybe for security reasons. Honestly, we don't think it's necessary from a security standpoint, but some people feel better not being on a multi-tenant stack, and we can do that; that way no one other than you has access to your data inside your enclave.
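The shape of an agent like the go/no-go plugin is straightforward: fan out to several live data sources, pool what comes back, and let the model reason over the combined picture. A minimal sketch of that pattern; the source list and URLs are hypothetical placeholders, not the actual FAA or Air Force APIs, and ask() is the hypothetical client from earlier:

```python
import requests

# Hypothetical stand-ins for the ~20 live sources the real plugin aggregates.
SOURCES = {
    "metar": "https://example.invalid/metar/KIAD",
    "icing": "https://example.invalid/icing/KIAD",
    "notams": "https://example.invalid/notams/KIAD",
}

def fetch(name: str, url: str) -> str:
    try:
        return requests.get(url, timeout=10).text
    except requests.RequestException as err:
        return f"{name} unavailable: {err}"  # the agent should degrade gracefully

def go_no_go(airfield: str = "KIAD") -> str:
    # Fan out: pull every source at query time so the decision uses current data.
    report = "\n".join(f"--- {name} ---\n{fetch(name, url)}" for name, url in SOURCES.items())
    # Reason: hand the aggregated picture to the model with a clear decision task.
    return ask(
        f"Given the following live data for {airfield}, recommend GO or NO-GO "
        f"for flight, and explain the deciding factors:\n{report}"
    )

print(go_no_go())
```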
Now, a lot of people ask us what we're working on when it comes to air-gapped, classified work. Well, OpenAI is not bringing their API model to the high side anytime soon, but we do have partnerships with companies like Cohere and Databricks and others to bring their large language models to the classified side. We have an engagement with Microsoft and one with Amazon on the Secret and Top Secret fabrics to bring large language models there and fine-tune them for different use cases. But keep in mind those models are not as good as OpenAI's; they're not very good at coding either, so they're quite limited, but that's the best we can do right now on the high side. And since Sage is agnostic to the model, the more models come out, the more options we'll be able to bring to the table.

All right, I hope this was helpful to you. If you want to come chat with us, join us on the Discord community; we have hundreds of people on the chat now, and it's always great to share insights and ideas and ask questions. I hope this got you excited to come try Ask Sage and see what we can do to augment your time and your capabilities. If we can help you, reach out to us at sales@asksage.ai and we'll be happy to answer any of your questions. Stay safe, see you soon.

2023-05-18
