The Rise of Open Source LLMs

[Applause] Welcome to the next session. I'm very happy to have my teammates Fabio and Arjan here on stage for the next presentation. A big, big thank you as well, guys, for all the organizational work you are doing here at the event. Looking forward to diving into the topic of open source LLMs. Thanks, guys!

Thank you very much, Roger. I hope I'm audible. Just before we start, I'd like to take a ten-second video of you folks for our footage, so everybody raise both hands: open source, yay! Thank you very much, this is for sure going to make it into the final cut.

Okay, so what's our topic today? We are going to talk about the rise of open source LLMs. As a matter of fact, we will have to improvise here a little bit, as we were pretty busy organizing all the details; Fabio was running to the supermarket five minutes ago to get some milk, and now we have milk. So we improvise. What we'd like to share with you: we initially published a blog post about what we are doing and our journey with open source LLMs, and here we'll go into a bit more detail on the topic.

Who are we, first of all? As mentioned, we are part of the code.siemens.com core team. My name is Arjan, and we have the one and only... what's your name, sir? Fabio.

Thank you very much. The most important thing: I'm the sad guy who keeps you from lunch, so we will do this really quickly. What do we want to talk about, and where do we come from? As you already heard in the morning from Roger's great introduction, we are code.siemens.com, and it's not only an installation; it's not only the api.siemens.com work we do, or the MySiemens work. We are in the enablement business: we want to enable people at Siemens to work, we want to empower them so they can do more and innovate, and we really want to build the foundation and remove all the unnecessary burdens.

Our core piece is obviously GitLab, which we still love, admire, and contribute to every day, with all the features GitLab brings, from source collaboration to CI/CD to the registry, as we heard, and we were really busy the last few weeks. But code.siemens.com is much more than that. We have topics around business partners, as Roger mentioned, we have various core APIs from directory services to financial services, and, most importantly for today, our LLM services. The numbers were already mentioned this morning: 70,000 users, 79 countries, 300,000 builds every day. So it's not just an installation under somebody's desk; it's a big thing with heavy usage. And I think the big question now is: why do we build all that?

Thank you. So, why are we doing what we are doing, and in particular, why are we looking into open source LLMs and in-house hosting and deployment? First of all, because it's fun. But there are other reasons too. When you are able to deploy an open source large language model API in-house, you can ensure that all the data stays internal. We have different confidentiality ratings on the platforms within Siemens, and this platform carries the highest confidentiality rating for the intellectual property and all the code stored here, so we'd like to keep it that way. The second reason is cost-effectiveness: obviously you don't pay any license fees, and there are no per-user subscription
fees. For example, look at Copilot: the last time I checked, it was $20 per user per month, and with 70,000 users the basic math ($20 x 70,000) already puts you at $1.4 million per month. So that's another reason. Another thing about open source LLMs is customizability: you can actually train these LLMs for your special needs or for the different kinds of products you are building, which is a big plus as well. We'd also like to mention RAG applications; we will give an example later on. This way you can tune the LLMs to answer your questions, or to generate code, based on the context you provide. Additionally, it's an ecosystem question: if we are able to provide these APIs to our developers within the same ecosystem that code.siemens.com developers are already used to, that's great.

Last but not least, another advantage is that open source LLMs are actually catching up very fast. Here you see a little graph published by ARK Invest research. They've been looking at the trends for closed source and open source LLMs. The closed source ones obviously came out earlier; the metric used is the rate of improvement in five-shot MMLU performance. The open source ones came out later, but when you look at the trend, the rate of improvement of the open source LLMs is actually higher. So if this trend holds, we might actually see the open source LLMs crossing the closed source ones at some point. Those are the real motivations for the work we've been doing in this area. And what we have done so far? I will show you exactly that.

So now we have the who out of the way, and the why; let me show you the how, meaning how exactly we implemented all of that. The main part is what we call the core LLM API: it's api.siemens.com/llm, a really easy name to remember. But we did not stop there; as mentioned, we are in the enablement business, so we built a lot of pieces around it. We won't go into the nitty-gritty details of our setup, but what's really cool to see is that the developers, in the top right-hand corner of the diagram, have various ways to interact with the LLM API. Maybe you're a data scientist (we are not), or you're just a user of code.siemens.com or GitLab, or you want to do some weekend hacking and develop the next AI tool of the future; that's why we offer different ways to interact with our various LLMs. At the bottom you have the core that Arjan already mentioned: it's all hosted in-house and maintained by us, so all the data sent there never leaves Siemens.

Even more important is this part here, which we'll also introduce in a bit: what we consider sustainable AI. In the quarter here we have lake-water cooling and lovely solar power (it's a beautiful day for that), and we will build up on-premise compute power so we can offer AI workloads really cheaply: GPU-based runners for fine-tuning, as already mentioned, and inference, so that you can actually run an LLM or even do some fine-tuning. That's all planned to be on site, connecting to api.siemens.com, as you can see here. And obviously we love to use that stuff ourselves as well.
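To give a feel for what "OpenAI-compatible" means for a consumer, here is a minimal sketch using the standard openai Python SDK. The base URL, key, and model name are placeholders for illustration, not our real values:

```python
# Minimal sketch: talking to an in-house, OpenAI-compatible LLM gateway.
# Base URL, API key, and model name below are placeholder assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-corp.com/llm/v1",  # hypothetical in-house endpoint
    api_key="YOUR_API_KEY",                          # issued via the key-management UI
)

response = client.chat.completions.create(
    model="mistral-7b-instruct",  # whichever open source model the gateway serves
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that checks if a number is prime."},
    ],
)
print(response.choices[0].message.content)
```

Because the protocol is the same one the big providers use, any tool or SDK that speaks it can be pointed at the internal gateway just by swapping the base URL and key.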
The first use case we came up with is support automation. We had a lot of support requests, and as everyone knows, in big companies, especially Siemens, the proxy is always a topic; just automating away the proxy questions alone would save us a few hours. That's exactly why we came up with it. But we did not just implement the support bot and then say, hey, that's our secret sauce. We didn't stop there: we made pretty much every single piece of this bot accessible. What you see is the OpenAI-compatible API; we built a chatbot which can actually answer questions based on Siemens context. One of the future topics, which Arjan will explain a little later, is to democratize this as well, so that you as a project can also jump on the bandwagon. And we also have a GitLab integration; it's basically just a bot, and many Siemens people have built similar things in the past. Where we obviously shine is accessibility: we can make this available to every Siemens employee, not just to a single project or a single group. So you can see we have this whole ecosystem now, and a little bit like in a neural network, the connections are the magic sauce: the more connections we have, the more features all of us actually see in the end.

How does this look if you're a user? That's probably the coolest part, because with big companies you tend to think slow, expensive, or difficult, and here we shine again. We have this lovely UI called MySiemens where you do business partner management and key management; it's the one-stop shop to get all your keys. You just go there, select LLM, create the key, and you're ready to rock: you can use all the OpenAI-compatible tools, SDKs, and libraries, and Arjan will show some lovely examples of that as well. As you can also see, we have a lot of documentation, Siemens-specific material which you probably will not find out on the internet, and then integrations; one example, which Arjan also shows, will be Continue. So that's the variety. Arjan, can you tell me some stuff about Visual Studio Code?

Of course. We are in the developer enablement business at code.siemens.com (I don't know if that's an official term, but let's call it developer enablement), so the first thing we looked at was: okay, we now have an OpenAI-compatible API endpoint we can provide, but how can the developers benefit from it? We didn't have to search far; we came across this project called Continue.dev, and we are in contact with them now and hopefully will collaborate further in the future. It is essentially an open source autopilot for your Visual Studio Code or JetBrains IDEs. All you need to do is get the API key and configure the base API endpoint in your client, and then you get a chat window inside Visual Studio Code with a lot of suggestions you can choose from to edit the code. You can also select code and say, create unit tests for this, and it will do that. There is also a newly released feature, tab completion: you type, for example, "calculate Fibonacci numbers" or whatever you like, and it will try to give you suggestions. Is it as good as GitHub Copilot? Maybe not yet, but it's improving very fast, and many of our colleagues, and we ourselves, are using it on a day-to-day basis together with our API endpoints.
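As an idea of what that client-side setup looks like: Continue reads its settings from a config.json file. The exact schema has changed across Continue versions, so treat this as a sketch; the endpoint, key, and model names are placeholders. Any OpenAI-compatible server only needs a provider, base URL, key, and model name:

```json
{
  "models": [
    {
      "title": "In-house LLM (chat)",
      "provider": "openai",
      "model": "mistral-7b-instruct",
      "apiBase": "https://api.example-corp.com/llm/v1",
      "apiKey": "YOUR_API_KEY"
    }
  ],
  "tabAutocompleteModel": {
    "title": "In-house LLM (completion)",
    "provider": "openai",
    "model": "starcoder2-3b",
    "apiBase": "https://api.example-corp.com/llm/v1",
    "apiKey": "YOUR_API_KEY"
  }
}
```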
Another example we found, since we look at this application landscape as an ecosystem, is that you can easily use local graphical user interfaces as well. Chatbox is one example: it's simply an application you can download and install on various operating systems. You again configure the API endpoint and your API key, and off you go; you start chatting with our LLM API endpoints. This one also has the advantage of offering different profiles: if you're a software developer, there's a preset prompt for that, likewise for a social media influencer or a travel guide, and you can use, retune, and modify these presets. So that's a pretty handy tool for our developers as well; if you're interested, check it out. And this is only one of the chat clients; we have seen many others. ChatGPT Lite is another one that worked smoothly out of the box, for your information if you'd like to try it out. And with that, there's another cool application we'd like to talk about: the Code AI bot.

Thank you so much, Arjan. I already mentioned it previously: having tools is nice, but those tools should always integrate well with your workflow, and since we are in the enablement and GitLab business and maintain a big GitLab platform, what's easier than creating a bot that actually talks with you? Already today on code.siemens.com there's this funky bot called Code AI which you can mention, and based on intent detection we try to understand what you want from it, so it responds accordingly. Depending on the question and the function being invoked, we amend a lot of additional context: it can be the pipeline which is broken, the log output of that pipeline, or just the issue description itself. Obviously this can go on and on; if you work with epics or other GitLab Enterprise features, or other GitLab features like the wiki, this can be extended further and further. There are great libraries out there for this, some of which colleagues on our team maintain. Here you can see Arjan interacting with the bot.

We really try to come up with good merge requests (we all heard this morning how important it is to create good patch sets), but sometimes we ourselves are quite lazy when it comes to merge request descriptions. So one of the first features we implemented, and this was basically a weekend project, was creating a summary of a merge request based just on the diff set; a hypothetical sketch of the idea follows below. If you're too lazy to update the merge request, or you're not as fluent in English as the AI, you can simply ask the bot. And the cool thing, as you can see from the current list of functionalities, is that it started out as a proof of concept, but many, many people from Siemens jumped on the bandwagon and contributed functions: our colleagues from Digital Industries contributed a really good example, there was a lovely contribution about discussion summaries from another colleague, we have more contributions from SI, and the list keeps growing. And the cool part: it's really simplistic, really easy, and it's InnerSource, so it keeps pace with the innovation. I think that's the cool part.
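Here is that hypothetical sketch of the merge-request-summary idea: fetch the diff with python-gitlab and hand it to the same OpenAI-compatible endpoint. This is not the actual bot code; hostnames, tokens, IDs, and the model name are all placeholders:

```python
# Hypothetical sketch: summarize a merge request from its diff alone.
# All URLs, tokens, IDs, and model names are placeholder assumptions.
import gitlab
from openai import OpenAI

gl = gitlab.Gitlab("https://gitlab.example-corp.com", private_token="GITLAB_TOKEN")
llm = OpenAI(base_url="https://api.example-corp.com/llm/v1", api_key="LLM_API_KEY")

project = gl.projects.get("some-group/some-project")
mr = project.mergerequests.get(42)  # the MR's iid within the project

# Concatenate the per-file diffs; a real bot would truncate or chunk this
# so the prompt still fits the model's context window.
diff_text = "\n".join(change["diff"] for change in mr.changes()["changes"])

summary = llm.chat.completions.create(
    model="mistral-7b-instruct",
    messages=[
        {"role": "system", "content": "You write concise merge request descriptions."},
        {"role": "user", "content": f"Summarize this diff as an MR description:\n{diff_text}"},
    ],
)

mr.description = summary.choices[0].message.content
mr.save()  # write the generated description back to the merge request
```

The same pattern (collect context from the GitLab API, amend it to the prompt, post the answer back) generalizes to pipeline logs, issues, epics, or wiki pages.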
So, Arjan already teased it, and now he will actually show you what this means and how to do it. Thank you so much. This is one last usage example we'd like to share with you today. RAG stands for retrieval-augmented generation. With this scheme, you can take an open source LLM, say Mistral or Code Llama, whatever you like, off the shelf, with no additional training, and ingest your code documentation, internal project code, or whatever else you'd like to use as context. LangChain is a popular open source library for this, and we found it's actually pretty easy to get started with. The way it works at the moment: we take, for example, our internal documentation and some code projects, and vectorize them into embeddings via tokenization. Essentially, all your documentation and code is converted into a bunch of floating-point numbers, and these are stored in a vector database separate from the LLM itself. When you ask your question, this vector database is consulted first, the results are included in the final prompt, and the vanilla off-the-shelf LLM uses that information to create context-based answers to your questions.

What we have done so far is an internal bot, implemented during a hackathon; it's already in production, we are testing it heavily, and it's performing pretty well. Of course it gets a bit confused if you start putting too much data in, documentation and so on, so maybe we need some kind of separation or categorization (this kind of documentation here, that kind of code there), and some tuning will be necessary, but it's very, very promising. And, as Fabio mentioned, we'd like to enable this for our developers too, so that they can say: include my project in the RAG application, I'd like to use the LLM with all the information my project has. Then you start having this project-specific LLM use case for your development. That's ongoing work.
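For the curious, here is a minimal LangChain sketch of the flow just described: chunk, embed, store, retrieve, answer. The file name, endpoint, and model names are placeholder assumptions, not our production setup, and the imports follow the 2024-era LangChain package layout:

```python
# Minimal RAG sketch: embed internal docs into a vector store, retrieve the
# closest chunks per question, and let a vanilla off-the-shelf model answer.
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains import RetrievalQA
from langchain_community.vectorstores import FAISS  # needs the faiss-cpu package
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_openai import ChatOpenAI

# 1) Chunk the internal documentation (a single markdown file here, for brevity).
splitter = RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=100)
docs = splitter.create_documents([open("internal_docs.md").read()])

# 2) Vectorize the chunks into embeddings and store them in a vector database.
store = FAISS.from_documents(
    docs, HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
)

# 3) Retriever plus LLM: the top-matching chunks are injected into the final
#    prompt before the untrained, off-the-shelf model answers.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(
        base_url="https://api.example-corp.com/llm/v1",  # hypothetical endpoint
        api_key="YOUR_API_KEY",
        model="mistral-7b-instruct",
    ),
    retriever=store.as_retriever(search_kwargs={"k": 4}),
)
print(qa.invoke({"query": "How do I configure the proxy for CI jobs?"})["result"])
```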
And with that, let's talk about the legal aspects a little. Thank you very much. Legal, one of our favorite topics. If you paid attention at last year's open source event, our colleague Felix from Siemens legal and compliance talked about the legal aspects of Copilot, and obviously this applies to every single LLM, no matter whether it's from Microsoft or self-made and self-hosted. One big part is that you cannot copyright something that the machine creates; that's important. But even more important: is the code you generate really generated code, or was it maybe just copied from a GitHub project under an LGPL or GPL license, so that you commit copyright infringement without knowing it? That's the whole aspect of plagiarism detection, and it's a really hard topic to solve completely. Microsoft does quite some good work there with automated scanning; they also limit the size of the code which is generated, to limit questions of originality, and they have protection rules in place so that they will support you in case you get sued. Obviously, we are not lawyers and we cannot guarantee you a protection fund for that. But what we can do quite well is address it with technology: already today, every single piece of code generated by our API is recorded and sent to our legal and compliance colleagues, and they run a scan just to prevent plagiarism.

What we also have here is the future part. The cool thing about open source LLMs is that they, too, are open to a big extent. We already know an LLM is not really open source until you have all the training data, the weights, and the whole procedure by which it was created. But a lot of LLMs use a public dataset called The Stack, a huge collection, I think around 60 terabytes, of GitHub and GitLab projects and other material, and this is all accessible. The cool thing is that if you generate something using the AI, you can actually backtrack it to the original source, and then you even know: this was created from this project under this license, and either it does not affect you or, if it does, you know how to deal with it. This is one of the ways we try to tackle the whole topic: we do this backtracking, and if something applies and we see a license that is not so enterprise-friendly, we can prevent it and inform the users. That's the power of knowing what is actually part of the model.

So, Arjan, the last topic of the day, I think: sustainable AI. Yeah, sure, cool. This is a topic that's future work, something we'd like to do and are actively working on. The idea is that, as Fabio mentioned, we have a main building at Siemens which is solar powered, and if we bring some GPUs in-house, inside the building, we can run them on completely green power: green AI, that's what we'd like to achieve. The typical AI offerings, as you already know, charge you based on the number of tokens you're using, which gets expensive. We did some basic back-of-the-envelope calculations within the team (sketched with assumed numbers below), and we have the feeling that the amortization of buying a physical machine versus using AWS G5 GPU instances reaches its break-even point after about three to four months, which looks like a good investment. And you would really have the full knowledge, everything in-house, fully controlled. We'd just like to highlight that the companies with serious plans for AI tend to have the hardware in-house. It's quite interesting work we're looking at; we already have rooms in the main R&D center, so let's see how that goes. And yes, we have the helmets as well.
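To make that back-of-the-envelope logic explicit, here is the shape of the calculation. Every number is an illustrative assumption for the sketch, not our actual figure or an AWS quote:

```python
# Buy-vs-rent break-even sketch; all inputs are illustrative assumptions.
hourly_cloud_rate = 5.70   # assumed USD/hour for a multi-GPU, G5-class cloud instance
hardware_cost = 14_000.0   # assumed USD for a comparable on-premise GPU server
utilization = 0.9          # assumed fraction of each month the GPUs are actually busy

monthly_cloud_cost = hourly_cloud_rate * 24 * 30 * utilization
break_even_months = hardware_cost / monthly_cloud_cost
print(f"break-even after ~{break_even_months:.1f} months")  # ~3.8 months with these inputs
```

With inputs in this ballpark, the break-even lands in the three-to-four-month range mentioned above; power, cooling, and operations effort would shift it, which is exactly why the solar-powered building matters for the math.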
Awesome. So, feedback from the community, which I think brings us to the end of this whole presentation. The feedback has been really, really great so far. The most important thing about our work is really working with the community; we don't build something just because we like it. I mean, LLMs and AI are a lot of fun, but again, we are in the enablement business: we receive feedback, we co-develop things with the community, we have already taken part in a few hackathons, we have guided people on where to use fine-tuning, where to use RAG, where to apply this and that, and people have even forked our AI bot. That's the cool thing, and together with the community we really want to push the topic further. That's basically the whole goal: to see where there is demand, where there are gaps other vendors don't fill, and how we can fill them.

And that is pretty much the whole presentation. As you probably noticed, this was not a data scientist talk. If you take one thing with you, it's this: you can democratize AI development for yourself as well. All of the stuff we showed can also run locally; there are amazing tools out there like Ollama, as in the little sketch below. AI will probably not replace all of us, but if you ignore AI, you will certainly be left behind in the dust. So use it, play with it.
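Since everything here speaks the same OpenAI protocol, the client code from earlier works unchanged against a local model server. A minimal sketch, assuming Ollama is running locally and a model such as codellama has already been pulled:

```python
# Same client, local model: Ollama exposes an OpenAI-compatible endpoint
# on localhost, so only the base URL and model name change.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's local OpenAI-compatible API
    api_key="ollama",                      # required by the client, ignored by Ollama
)
reply = client.chat.completions.create(
    model="codellama",  # any model previously pulled with `ollama pull`
    messages=[{"role": "user", "content": "Explain a vector database in two sentences."}],
)
print(reply.choices[0].message.content)
```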
And if you have any questions, we'll be outside; come talk to us about AI and LLMs. Thanks! [Applause] [Music]

2024-08-17
