How to use coding AI assistants effectively with Ado Kukic
JASON: Hello, everyone, and welcome to another episode of Learn With Jason. Today on the show, we're bringing back Ado Kukic. How you doing? ADO: Good, good. How are you, Jason? JASON: I'm doing well. I just realized that -- did I get your name right? Did I say your last name right? ADO: You did. JASON: There's only like one area of Europe where I feel confident in the pronunciation.
It's like the Croatia, Slovenia, that kind of area. It fits the way I speak. I like to -- I'm kind of a mumbler. Like, we were in Croatia, and I was learning some of the local language. Everyone is like, your pronunciation is very good. I was like, great, mumbling works here.
(Laughter) ADO: I love that. Yeah, you know, my last name, the "C" at the end should technically have the thing above it for the "ch" sound, but the English language doesn't have it. So people are like cookie. JASON: That's a good Americanization of it. I'm Ado Cookie. (Laughter) ADO: Yep.
JASON: All right. So welcome back to the show. Good to have you back.
For folks who aren't familiar with you and your work, do you want to give us a bit of an update on who you are, what you do? ADO: Yeah, absolutely. So my name is Ado Kukic. Currently, I'm the director of developer advocacy at a company called Sourcegraph. Over the last decade, I've worked at MongoDB, Auth0, so I've been on the full spectrum. My career has been focused around education, helping developers learn more about identity and access management, learn about databases, and now with Sourcegraph, teaching developers how to search their code and also helping them be much more productive with AI and AI coding assistants. JASON: Nice, nice.
And this is something that I'm interested in digging in a little deeper. AI coding assistants are one of those things. People love them. There's a lot of conversation about them. And people seem to use them in different ways.
To be completely honest, I've used them very sparingly. My reasoning is -- I have a few reasons I haven't gotten deep into coding assistants. First and foremost, I always feel weird when somebody else writes code on my behalf. So one of the original iterations that I saw of these coding assistants is it would just do like type ahead for you. It would try to auto complete your code. I mean, partially because I do a lot of my code on stream, that was very distracting.
And sometimes it would do stuff that looked almost right. Then when I would try to use it, it wouldn't work. So that kind of made me nervous about it. But then I saw some other ways where it was sort of like pair programming, and you would kind of chat to it and it would know things about your code. I thought that was really interesting. For whatever reason, I didn't dig deeply into that, just kind of went on my own AI-free way.
I feel like it's time for me to start paying attention. Like, it seems pretty clear that these advancements aren't going back in the box. Like, this is here to stay. So even if it's not something that any of us intends on directly using, it's something that we should definitely understand. So that was why I was excited when you reached out because I think this is going to be fun to talk about.
So in your opinion, what -- I guess -- why don't we just start with a basic question. What is an AI coding assistant? ADO: Yeah, absolutely. So an AI coding assistant is a tool. It could be an extension. It could be a standalone piece of software that helps you write code, that helps you, you know, be more productive. It helps you answer technical questions.
It helps you understand your code better, all with the, you know, stated goal or stated purpose of helping you become a much more productive developer. I totally get your skepticism of seeing code autocompleted for you that you didn't write. A lot of the times, you know, it looks right, or the underlying code is like, oh, I probably would have done it that way. Then you go to run it, and it's like this method doesn't exist, or, you know, the system completely hallucinated and gave you something that looks kind of right but in reality couldn't be further from the truth.
JASON: Yeah, it hits the uncanny valley of it's almost plausible, but when you look at it a little closer, you're like, wait, that feels wrong. ADO: Yep, definitely. And I feel like this year -- you know, ChatGPT, with GPT-3.5, was released, like, November of last year. I feel like over the last 12 months, we've had like a hundred generational breakthroughs in LLM technologies.
It feels like every week we're taking another giant leap forward with the LLMs being able to understand better, being able to have better context, better intent. So if you step out of the AI game for a couple weeks and come back, everything you thought you knew is completely outdated, and there's new stuff to learn and new processes and new ways to do things. It's very exciting, but at the same time, it can also be a little like holy crap. I don't blame you for just kind of sitting back and letting it play out and let the tools get to a really good point before diving in. JASON: Yeah, it just feels like, you know, it's been moving so fast. I see what people mean when they talk about the JavaScript ecosystem having a lot of churn and stress.
I'm so in it that I usually don't notice that, right. I'm like deep in this world and just see the updates. But AI is not a place where I'm paying attention. It's somewhere that I've been like casually watching but so far, I haven't found any good uses to really bake it into my workflow.
So I'm more of a casual observer, and it seems like it's changing so fast that I've sort of just decided, like, when something important enough happens, somebody that I respect is going to tell me. Until then, I'm just going to kind of let this do its thing in the background. ADO: I'm here to tell you that AI is ready. JASON: (Laughter). Well, and this is sort of one of the things I was interested in talking to you about. There are a lot of different ways that you can plug AI into different workflows.
So I've talked about with writing, for example, I think a good way to interface with AI is to treat it as a first-draft generator. You're probably not going to -- like, I've seen people publish AI copy, and it sounds and feels like AI. It's very lifeless and boring and whatever.
But AI is good at pulling out core concepts and creating an outline that you can write from so it sounds like a person wrote it -- you -- but you didn't have to do all the initial structuring work of thinking through what are my bullet points, what are the key concepts, et cetera. When you're looking at AI in the context of code, how do you fit it into your workflow? ADO: For sure. That is a great question. On the writing aspect of AI, you know, at the beginning I tried to leverage OpenAI's APIs to see if you could build dev tooling that gives you 95%-ready blog posts, or content you feel pretty confident in publishing. Like you said, it ends up being kind of a little lifeless. It ends up being very predictable, in a sense.
So it's like you can't really use it without having your input, and I think with coding, it's -- I think that same principle applies. For me, you know, I love using AI coding assistants when onboarding on to a new code base or onboarding on to a new library that I haven't used before. Just to kind of ask it some general questions of like, hey, how does this work? What are the key features? How do I get started? Just to get that base-level understanding. You know, I come from an educational background. So I've spent a lot of time writing tutorials, writing blog posts, recording videos, and that content is great, but it's always from my perspective.
So if I put out a tutorial on how to create a Next.js application, it's going to be from my perspective with my biases of, like, I think you should already know how to restructure. So I'm not going to cover that.
Or we're going to be building and integrating this API, but if you're trying to learn, you know, it's much -- it's better if you can have answers that are tailored to your exact query. If you're trying to integrate a particular API into your Next app, you can read my blog post on integrating Twilio. If you're trying to integrate something else, it's going to be completely different. So you either kind of have to put those pieces together yourself from various different blog posts, from the docs, from other tutorials, or you can just ask a code AI assistant to give you the exact steps for your exact use case, and these days it's getting to a point where for a majority of the use cases, it will give you a good enough answer to get you to the next step. JASON: Okay. Do you worry that this will, like -- I don't want to say this in an inflammatory way because I don't mean it in an inflammatory way.
I mean it in just a general like how we learn to do things. But do you think that being able to get that instant answer is going to slow the learning of somebody because they won't be -- let me say this as an anecdote instead of me trying to make some broad, sweeping comment. When I learn, the struggle is usually what makes the knowledge stick, right? So when I try something and it doesn't work, and then I try something and it does work, I see what the difference was, and it helps lock that lesson into my brain. Do you think that there's a risk that being able to say I want to do this thing and it just does that thing, like do you see it as something that's going to remove that type of knowledge from human developers? I guess the follow-up question is do you think that matters? ADO: Great -- both of them are really great questions. I do. I think -- and I agree with you fully.
I think the real learning happens in the struggle of trying to figure it out yourself. If you just kind of get spoon fed the answer, you didn't go through the struggle. You might not -- it might not stick as well as if you had done it yourself, tried ten different things that didn't work or pieced the knowledge together yourself. I do agree there that, you know, in those instances, if you're -- like, one of my projects that I'm trying to do over the holiday break is really get into Rust and kind of build a couple applications with Rust. You know, I know a little bit about Rust.
I've done the tour of Rust and things like that, but I've been using Cody for a lot of -- you know, to kind of fill in the gaps. I have noticed, you know, coming from a non-Rust background that I'm relying more and more on the tool to help me solve the issues rather than kind of having that intuition of, oh, you know, this isn't working, and I kind of have an idea of why it's not working so let me go dig into it a little bit and figure it out on my own. I just open the Cody chat window and I'm like, hey, this doesn't work, fix it for me.
It's like, oh, you forgot to do this import, or you're not returning the right type. So to answer that second question, I think it's going to matter less and less. Maybe you don't need to know exactly how everything works under the hood to be a productive developer, you know, to be a developer in a professional setting. I still think, you know, you should learn as much as possible on your own and be able to read and understand and write code, absolutely. But if you're trying to move fast and trying to launch an MVP or get to market faster, if you can leverage tools to help you get there, whether it's AI or low-code, no-code platforms that kind of abstract a lot of the code and syntax and algorithms from you, you know, I'm all for it.
Let's build some cool stuff and let's ship. JASON: Yeah, yeah. I think -- and you know, I think the important thing for me and for anybody who's skeptical of AI to remember is that this is a tool, like other tools, and to say don't use that tool because it'll make you bad at code is kind of like saying don't use a JavaScript framework because it'll make you bad at code. Abstractions are abstractions. Sometimes we're abstracting things that don't matter.
I think the question that I have and the question that I think we just need to answer as an industry is like where is the line for what doesn't matter when you're writing code. And I think, you know, I also think the answer might be different for somebody who enjoys coding versus somebody who needs code to get to whatever the next phase of their plan is. I think that as we're seeing more and more of the world become dependent on code, it's becoming less important that the code itself is like handcrafted by an expert coder and more that, you know, I'm a teacher and I want some way for kids to do some project.
If I can code that in a weekend by saying, hey, AI, tell me how to make an app that lets my kids do this, they get that app and the app works well enough. Like, they're not trying to productize it. It's not going out as like a software as a service thing. They were able to do a fun weekend project that made their life a little bit better. If the code is trash, who cares, because that wasn't the point.
But yeah, it is interesting. It's interesting because it sort of puts what we do into more of a commodity. Like when we shifted from all clothing being handmade and very expensive to most clothing being cheap crap and then you pay extra for the handmade clothing, most people don't.
It does seem like maybe we're going to enter the world of most code is a commodity. It's not really a luxury job anymore. Instead, you have your artisan coders for whatever, some specialty thing. But for the most part, you just kind of get your Instagram ad crap from AI and it doesn't matter because you needed to solve a problem. ADO: Exactly. Yeah, and I think when I joined initially, I was talking to the head of product, Chris Sells.
One of the things he told me is back in the '70s, '80s when we were moving from writing code on punch cards and starting to get compiled languages that would convert from C++ code into assembly code, there was a lot of skepticism of are we going to be able to ever trust a machine to take code that we write, turn it into something else, and then have that run and execute on the processor and be accurate. So at the beginning, there was a lot of skepticism. There was a lot of checking to see if the code that was converted from one language to another, like was it correct. At the beginning, sometimes it was, sometimes it wasn't.
I feel like that's kind of where we're at now with AI-generated code. It's like we still have to look at it because some of the time, it's going to have that uncanny valley effect. It looks right, but when you go to compile it, when you go to run it, it completely crashes.
But a year, five years, ten years from now, we might not even care. We might just open up our AI assistant and say, I need an authentication system, a log-in system, and AI will be at a stage where it can just spit out a log-in system that we don't even have to check the code for. We're so confident in it that it's following the best practices, handling all those edge cases, setting up password resets, verification of the accounts.
I think we're still a ways away from that, but I think that's where the future is headed. Code is going to become more and more commoditized. We're going to have more and more processes and algorithms emerge, and as engineers, as developers, it's going to be on us to get the user experience correct, build new and novel applications, and focus on, you know, the users, focus on the product space, more so than what the code looks like. Are we using tabs or spaces? Is the code using this library or that library? It's going to be more about what does the application do, less so than what language is it written in, is it performant, is it this, is it that.
JASON: You know, it's funny because I spend a huge amount of time telling people the code doesn't matter. The point is what you're building. You know, the outcome is that the person you're building for has a wonderful experience that they enjoy using.
Despite me saying that a million times, it never fails to make me uneasy when I think about this idea that, like, we're barreling toward a future where the job that I do, that I enjoy, is at risk of becoming a niche specialty because most people won't care or need it. It is what it is, right. Like, that's how things move forward.
And I am always curious, too, like is this the sort of thing that will work for the 80/20? If I want to build a personal site, great. I can tell AI to build me a personal site. If I want to build the Hilton app for managing remote check-in, AI is probably not going to build that. So I do still think, to the best of my knowledge, whenever we're generating something net new, we're still going to need people who understand code.
The other question I have, what I'm really curious about as we move into this world, is what does maintenance look like? Can you tell AI to maintain AI-generated code? Or are we turning ourselves into, like, the garbage pile tenders instead of the creators? These are my big philosophical questions as we start talking about this. If AI is going to generate all the code but it can't maintain its own code, what does that mean for this job? Like, do we become the equivalent of janitors? Like our job is to clean up after the robots? It's interesting. And I have very complicated feelings about it because I do want to avoid becoming one of those people who's like everything that was invented after I got comfortable is bad.
Because I think that is a pretty standard progression with humans and technology. At the same time, maybe some technology is going to cause more problems than it's worth. And how do you know which is which, right? Anyways, this wasn't intended to be a big philosophical rant. (Laughter) It does look like your audio de-synced again. I'm not sure what's going on.
We'll take a quick pause here. ADO: How about now? JASON: Yeah, that looks right. ADO: Is it? JASON: Yeah, we're back in sync. Cool.
Okay. So let's do this. I think I could sit here and talk about this forever, but I'd rather see it in action. So why don't I switch us over into the pair programming mode. It's going to be this screen here. There we go.
All right. Let me first make a quick shout out to our captioner. We've got Rachel here from White Coat Captioning making all of these words into readable words for people who need captions. Thank you for dealing with that extraordinarily clumsy sentence. That's made possible through the support of our sponsors. We've got Netlify and Vets Who Code kicking in to make this show more accessible to more people, which I very much appreciate.
Go check out Ado on Twitter, while you still can. Here is the link there. And then we're talking about Sourcegraph today, but more specifically, Sourcegraph's product called Cody, which is one of the AI assistants. So that is about as much as I know.
Like, what we've talked about. Then I know that Cody is the Sourcegraph flavor of this thing. So if I want to see this in action, I want to learn how this actually fits into my workflow, what should I do next? ADO: Yeah, absolutely. So just to give a little bit of background on how we got to Cody, Sourcegraph is a company -- you know, we've been around for about a decade now.
Sourcegraph got its start in universal code search, helping developers at very large organizations search through hundreds or thousands of different repositories and help developers understand code, find symbols. So for the first ten years of our existence, we've been building tools to help developers learn and understand their code, find issues faster, manage batch changes across all of the repos at once. So our roots are really in helping developers understand code, understand their code base. So building an AI code tool was almost like a natural evolution of our product. So in building Cody, we said let's use all of our graph-building knowledge, let's use all of the knowledge we have helping human developers understand their code and translate that to an AI assistant that could kind of be your pair programming BFF, that can sit there in your extension, in your editor. We don't want you to change your workflow for us.
We just want to integrate with what you're already doing. And Cody will just sit back and be available for you when you need it and when you're just kind of focusing on the hard stuff and you want Cody to get out of the way, it will. But yeah, here you're on our Cody landing page with our slogan, "code more, type less." If you scroll down a little bit, it has this overview of how Cody works with the multi-line autocompletion. As you start writing your code, Cody basically works in three different ways.
The first one is that AI-assisted autocomplete. So Cody has context of your entire code base. As you start adding new functions or as you start writing functions or calling methods, Cody will go and find those methods in your code base and make sure that it's passing in the right parameters, that it's not hallucinating and thinking, hey, you're creating a new person object, it should have these specific attributes. It goes and looks at your type or your class for that person and only suggests those specific attributes. So that's the first way that people use Cody, using it as an autocomplete tool. The second way is using Cody as a chat, as kind of a pair programming buddy that you can talk to and ask it questions.
Again, Cody has context of your entire code base. So when you ask a question of Cody, rather than going to the internet, going to the LLMs and kind of coming up with a generic or kind of a broad technical answer, it goes and uses our natural language search to look at your code base, find relevant snippets relevant to the question you're asking, add those to the request to the LLM, and that gives us much more personalized responses for your specific code base, for your specific problem. Just like the autocomplete, it uses the context and the awareness of your code to give you better answers. JASON: Gotcha.
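To make that concrete, here is a minimal TypeScript sketch of the scenario Ado describes -- a context-aware autocomplete completing a call against a type that already exists in the codebase instead of hallucinating fields. The Person type and createPerson function are hypothetical names for illustration, not code from the Learn With Jason repo:

```typescript
// Elsewhere in the codebase -- a codebase-aware assistant can "see" this type.
type Person = {
  name: string;
  email: string;
  pronouns?: string;
};

function createPerson(input: Person): Person {
  return { ...input };
}

// In the file being edited: a context-aware autocomplete reads the Person
// type and suggests only fields that actually exist on it...
const guest = createPerson({
  name: 'Ado Kukic',
  email: 'ado@example.com',
});

// ...where a context-free model might invent plausible-looking fields
// (say, `age` or `username`) that the type checker would reject.
```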
ADO: And the third way people use Cody is we support a number of built-in commands, as well as giving you the ability to add custom commands. The way the commands work is they're just kind of an abstraction on the chat interface. The built-in commands we have are for documenting your code, explaining your code, and generating unit tests. So you can just, you know, highlight a piece of code, run a command, like generate unit tests, and it's going to look at the highlighted code, as well as context from your entire library, from your entire code base, to see potential issues and improvements that you can make on your code.
Then it'll allow you to easily, you know, copy and paste those improvements into your editor, into your open file. JASON: Got it. Okay. So I think I'm going to hold my questions until we're actually in a code base here. I thought that a good code base would actually be the Learn With Jason code base. Because if we look in here, I have a bunch of packages, and I have a bunch of sites that are all sharing code.
There's interconnected stuff. I have common things going through here. I figure this is about as complex of an app as I have available to me. So I think this will be a good way to put it through its paces and understand how we can make this work.
So if I want to make this work, let me get this here. I'm going to open up the Learn With Jason site. Let's close up the windows, and we can open up the explorer. Okay. So in here, I have all of my sites.
This is all the different sites I run out of the Learn With Jason system and all of the different little helper packages that I have, all running in a mono repo. If I want to start using Cody, what should I do first? ADO: Love it. I love the explorer on the right-hand side.
JASON: You know, people get salty about this, but this is the truth, man. Look, if I open some code and let's get just a page out here. All right. Then I want to -- see how that code is not jumping around as I toggle this open and shut? That's a game changer. ADO: You might have converted me. I mean, I go back to the old Visual Studio days where explorer was on the right by default.
You don't see it as much anymore. But that's actually a really good point. JASON: And that'll be true, too, when we pull up like a chat window. Having it here so it's not bouncing my code around when we pop it open to ask questions. ADO: Yeah, definitely. So to get started with Cody, you would just install the Visual Studio code extension.
So if you open up the extensions tab and type in Cody AI, it should be one of the first. And there it is. JASON: Okay. So here's Cody AI. I don't need a pre-release or anything? I can use the -- ADO: You can use the current, like the main version. We just went GA last week, last Thursday.
So Cody is ready to go. Once the extension is installed, if you open up the extension, it'll just ask you to log in with your Sourcegraph account. Then you'll be good to go. JASON: If I open up the extension, like extension settings? ADO: So by default, Cody is going to be installed on the -- not on the explorer window but kind of the very main window. I forget the name of it.
Yeah, the side bar. JASON: Oh, do I need to make that visible? ADO: Yeah. JASON: Nope, not that one. You? Nope. ADO: Let's see. I'm trying to find it.
Is it primary side bar? JASON: Primary side bar is open. I don't know what this would even be called. Anybody know what this thing is? Activity bar.
Ah, activity bar is what it's called. Okay. ADO: So it looks like since you might have had the extension already installed, it looks like you're already authenticated.
JASON: Okay. ADO: And ready to go. One thing I would do is hit the upgrade to pro button.
That's going to take you back to the Sourcegraph site and give you unlimited access to Cody up until February. So we went GA last week, but we're giving everybody access to Cody Pro until the middle of February for free. We don't require a credit card or anything like that.
It's our holiday gift for developers that are coding over the winter break and want to try out Cody. So it's fully free. With the unlimited plan, you get unlimited autocompletions, unlimited chats.
Like I said, we don't require a credit card or anything. So at the end of the two months, it's just going to kick you back down to the free tier. If you're loving Cody and you're getting value out of it, you can then upgrade for $9 a month. JASON: Okay. Cool.
All right. So I've got this open here. And I'm upgraded to pro, which I don't see the upgrade anymore.
So it looks like it took. ADO: Yeah, so you are now ready to go. With the Cody app in the side panel, you can see your commands in the top right.
We talked about those -- chat, document, smell. Below that, you have the chat panel, which gives you a list of your previous chats if you want to go back and reference them. Or you can click the "start a new chat" button icon. Natural language search is the next panel we have. This is still in beta. This allows you to search your locally opened code base with natural text.
So instead of knowing -- instead of having to know exactly what file you want to open, if you just search for authentication, you know, it kind of does a fuzzy search that can help you find relevant files in your code base. JASON: Umm... yeah, so this kind of gets into -- it's a little -- hmm. I think I'd maybe give it a C-minus on that search. (Laughter) ADO: Yeah, still in beta.
JASON: For sure. So I guess let's kind of put this through its paces. The first thing that I think of when I'm in something like this, I'm kind of thinking about, all right, I've been given a task. So I need to, let's say, update the RSS feed for the newsletter. So can I do something like, where are the files that manage the RSS feeds.
ADO: So before opening that, or as that query is running, if you click on the little star icon in the chat window, you'll see it highlighted and like pulsing. Right where you typed in the query. Then if you hit that "enable embeddings" button, that's going to enable the much better, enhanced context. So you can run it without embeddings, but the context isn't going to be as good as with embeddings. So we always recommend folks generate these with embeddings.
With that, it's going to work much, much better. If you've used Cody previously, you had to install the Cody desktop app. You no longer have to do that. Everything is done in the Visual Studio Code extension, and generating the embeddings takes, you know, 30 seconds to a couple minutes. JASON: Got it.
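As a rough illustration of what embeddings buy you here -- this is the generic retrieval pattern, not Sourcegraph's actual implementation -- code chunks are embedded as vectors ahead of time, the question is embedded the same way, and the nearest chunks get attached to the request sent to the LLM. A minimal TypeScript sketch:

```typescript
type Chunk = { file: string; text: string; vector: number[] };

// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Given the embedded question, pick the k most similar code chunks to
// include as context in the prompt.
function topK(queryVector: number[], index: Chunk[], k = 5): Chunk[] {
  return [...index]
    .sort((a, b) => cosine(queryVector, b.vector) - cosine(queryVector, a.vector))
    .slice(0, k);
}
```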
Okay. So question from the chat. Is this available for Vim? ADO: It is, yeah.
So we support Visual Studio Code. Any of the JetBrains IDEs, so if you use PyCharm or WebStorm or any of those, you can use it. We also have experimental support for Neovim.
We're looking at supporting other IDEs in the future as well. JASON: Okay. So Neovim is the flavor of Vim that you support? ADO: Mm-hmm, at the moment. JASON: Got it. I thought there was another question in here. Oh, bridgewater is asking, how well do AI assistants work with Common Lisp, COBOL, Forth, Smalltalk, and a couple I've never heard of.
ADO: Yeah, PL/B and RPG-IV it looks like. With Cody, we use both open source and proprietary LLMs. We don't build, at the moment, any of our own LLMs.
So we leverage OpenAI's GPT-4, as well as Claude. We also use an open source LLM called StarCoder for the autocompletions. We just added another one last week when we went GA. So for languages the models had training data for, it should work. I always say, you know, Cody works really well if you're using JavaScript, Python, Go, any of the really popular programming languages that have a lot of open source libraries, a lot of open source code. Those will work better. But that's not to say it's not going to work in those older languages. JASON: Okay. Got it.
And then does this require you to be online? ADO: It does require you to be online. So we did some testing initially to see if we could get Cody to run entirely locally, being able to kind of bring your own local LLM and not be connected to the internet. The experience is just very slow because most people don't have a super powerful GPU to run LLM inference on. So while it does work, it's very, very slow. Our COO actually has an open PR where, if you want, you can download it and try it yourself, but it's not the greatest experience.
So we kind of decided to go the online route, where we'll do all of the LLM inferencing and all of the code generation remotely and send it to the client. JASON: Cool. Very cool. All right. So we poked in here. I asked it some questions about my RSS feed.
It did an okay job. It figured out where the RSS feeds were defined. It didn't find the files that actually generate them. But that's way better than it did without the embeddings.
(Laughter) So I mean, what are the right things -- what should we be looking at? Wait, why does it say the embeddings are incomplete? ADO: Looks like we might have had a time-out issue. JASON: Continue. Cody gateway request failed.
Did I get logged out or something? Or get rate limited? ADO: You shouldn't have gotten rate limited. Maybe if you close out VS Code and reopen it. Just in case when you upgraded to pro if there was any sort of funky issue. JASON: Okay. Here, here, try again. It does not like me.
What if I cancel? ADO: What if you hit the embeddings, that little star icon in the chat again? See if the embeddings are there. JASON: It says it's indexed. But then it says it's not fully indexed. ADO: Let me see here.
Because I have your Learn With Jason project cloned locally on my end. JASON: What the hell? Why does this have me hooked up to my old, dead IBM account? That's weird. Ummm... let's see.
Yeah, is there anywhere I can go to, like -- if I turn these off and back on again, do you think it'll work? ADO: It might. It may be a permissions issue from signing in a long time ago. I'm wondering if it is an issue with authentication. But if you'd like, I can share my screen, and I have the same exact repository with the full embeddings. So we could -- JASON: Yeah, sure. Why don't we do that.
So I'll take my screen off, and I'll look for yours. ADO: Okay. Let's see. And share. So now I'm actually running into that same issue.
Cody embeddings index is only 42% complete. JASON: Oh, no. ADO: Let me take a screenshot of this.
JASON: Did we get the ill-fated, like, we're doing a demo and now we're having an outage? (Laughter) ADO: Doing it live, right? Well, let's see. I'll keep trying to see if this one -- if we can still do it with 42% embeddings complete. The joys of doing a live demo, right? JASON: Love the live demo. ADO: So I guess we can switch back to your screen since we're running into the same issue.
But I'd rather have you play around with Cody. JASON: Where is -- oh, not that either. This one.
Sheesh. Okay. ADO: There we go. JASON: All right. So -- ADO: Like I said, one of the first things that I typically do when opening up a new repository that I'm not familiar with is, you know, I ask Cody what does this application do or what is this application, to just kind of get a general sense of where I'm at, just to kind of help me figure out am I in the right place, what can I expect from this application. Typically, what this is going to do is it's going to try to find a README file, try to interpret it, and give a summary of the high-level features for the application.
So it's telling us here that this is a mono repo built with Nx that contains multiple sites, a Remix site, an Astro site, a Sanity Studio project. It tells us a little bit about the tech stack and instructions for how to set up a local dev environment. So on and so forth. So this is kind of stuff that you could probably get from the README if you read the whole thing. I always kind of like getting the brief and quick summary of the project.
Would you say that answer was along the lines of how you would explain learnwithjason.dev? JASON: I'm confused where it's getting this. Because it's not a Remix site. I wonder -- also, am I the only one who has this? Like the "days since we had to manually deploy" button? (Laughter) Yeah, because we don't have any Remix code in here. Does it have Remix code out here? Does it maybe say that this is a Remix site? Oh, look at this.
Okay. So it just read the README. It didn't read the code.
ADO: Interesting. JASON: It summarized this. ADO: I'm wondering if you ask Cody more about Remix, like a Remix question in the code base or how to launch the Remix portion of the site, if it's going to hallucinate or if it's going to say, hey, you actually don't have any Remix. JASON: This is interesting. I'm honestly like pretty pumped to see how this handles all of this.
I'll put this over here so I can see what's going on. Whoops, no, you go over here. Stream demons. ADO: Stream demons. (Laughter) JASON: Okay.
So it says -- all right. So it's got correct information. Like, this is the correct -- well, that's not the correct command. So that should be WWW. Is it only pulling out of the README? It's only pulling out of the README.
I think it's not seeing the -- so, you know, this is incorrect code. Like, my thing is out of date here. But it doesn't seem to be catching the actual source code of what's going on. ADO: So one thing you could do is if you go into the chat panel where you asked that question, it will actually tell you the context that it read.
So you can see we looked at an MDX file, the package.json file, the index.html. JASON: It looked at, like, the wrong -- so this is the package.json for Sanity. So yeah, it's interesting because it's definitely trying. But it seems to be making some interesting choices.
ADO: Definitely. I think it might be because, you know, you have multiple different sites in this mono repo. But I think it should do a better job here. One thing you could do is Cody also supports kind of bringing in specific files. So if you, in the chat window, start with the "at" symbol, you can say use this particular file for context.
So if you said package.json, the main one, and asked it how do I start this application, it should limit -- it should give a much more accurate response because it's going to get context from that particular file as kind of like the main one. But let's see. I love the comment of who updates READMEs. You could use Cody for it.
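For example, the file-scoped prompt Ado describes would look something like this in the chat input (using the file name from the demo):

```
@package.json how do I start this application?
```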
JASON: Okay. So this looks like what I wrote, which makes me believe that it read the README again. Where did the -- where did it get that from? ADO: I also wonder if not having the full embeddings 100% complete is -- JASON: I have a suspicion that's part of what's going on here because it was aware that it was an Nx site, but it wasn't -- it's not, like, using that.
So it didn't pick up on the fact this is an Nx site when I told it to check in here. And honestly, this could be just maybe mono repos aren't going to be usable with a coding assistant because there's too many parallel contexts. I would have to know so much about the mono repo to tell the robot not to include certain contexts that at that point I don't need the robot anymore.
You know, who knows. It's really hard to say when you get into these different use cases. LLMs are very good at gathering context.
They're not always great at understanding which context is relevant. If we're saying use this repository as the relevant information, when does it -- if I'm talking about show me stuff in the blog, how does it know what the blog is, right? Maybe that's a question to ask. So we can say can you show me how the blog works on this site.
Context limit reached. ADO: Oh, so if you open up a new chat window and ask the question, it'll -- because each chat gives you -- just kind of like with ChatGPT, it uses the context of your previous chats and kind of keeps the conversation going so you don't have to keep asking. You know, if you're having a conversation, you don't have to keep giving it full context. It's going to pull from previous questions and responses.
JASON: Got it. Oh, this is good. This is -- okay. So this is great. This is accurate.
All these pieces are correct. Yeah, this is dope. Oh, I thought it would let me click on these to jump to them. ADO: Oh, not yet. But good feedback for our engineering team. JASON: This is cool, though.
So this is doing what we want. And I feel like this is -- so this is kind of a, hey, welcome to the code base sort of set of questions. Where I think it gets really interesting is when we start digging into something more specific. So let's get into, say, one of these and say -- like, can I have it just explain this function? ADO: Mm-hmm. JASON: What just happened? This is like running a demo.
ADO: If you go back to that index.ts file, highlight the entire function code. Then run the explain command. I think it might have lost context. JASON: Failed to create command. No active text editor found. ADO: Hmmm.
So with this, if you -- I think it's losing context when you close the file. If you keep the file open -- JASON: Okay. Let's try again. ADO: Yeah, because that explain command had no context. So it's just getting some totally random -- okay. JASON: Oh, here we go.
The purpose of the episode's function is to publish all episodes from a Sanity dataset and return them in a formatted set. Calls a Sanity fetch function, fetches all episodes where hidden is false. Okay.
This is a little bit of a word salad. Because these fields are abstracted up here. Then let's see. What else? Sorted by date, including all published fields. Can it figure out Groq? ADO: It should.
Let's see. JASON: Oh. Where is my load all episodes? This is good. This is good. Okay.
ADO: All right. JASON: Constant defines common fields, as well as YouTube links and related episodes. That looks like it's kind of getting confused there.
The order by date descending sorts the returned episodes with newest first. Good. This is good. Yeah, this little summary is actually kind of great. I like it.
This is like where I would see these coding assistants really being useful. If I've never used Groq before, having something that can say, like, this is what that order at the end means sorting the returned episodes. If you've never seen this before, this is weird, right? And Groq is one of those -- you know, the Sanity team literally lived that XKCD meme. There are 14 competing standards. I know, let's create a 15th. (Laughter) But this is -- okay.
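For readers who haven't seen Groq, the query being explained above looks roughly like the sketch below, shown with @sanity/client in TypeScript. The field names (hidden, date, title) are reconstructed from the conversation, not copied from the actual Learn With Jason source:

```typescript
import { createClient } from '@sanity/client';

const client = createClient({
  projectId: 'your-project-id', // placeholder values, not the real config
  dataset: 'production',
  apiVersion: '2023-12-01',
  useCdn: true,
});

// Groq: fetch every episode that isn't hidden, newest first.
// The trailing `| order(date desc)` pipe is the sort discussed above.
const query = `*[_type == "episode" && hidden != true] | order(date desc) {
  title,
  date,
  "slug": slug.current,
}`;

const episodes = await client.fetch(query);
```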
I get this. This makes sense to me. ADO: Yeah, and, you know, this type of explanation, I think, is very valuable, especially when you're onboarding to a new codebase. For me, it can be very easy to get overwhelmed. There are standard conventions of how you write code, how stuff is structured, but I feel like every team does it slightly different, has certain conventions, like certain parameters they pass in, certain ways of defining functions or defining how those functions behave.
So having something like Cody select that function and be like, hey, give me an ELI5 of how this actually works and giving me the correct answer -- and hopefully it's the correct answer -- really gives you a leg up of helping piece together the bigger puzzle of how this application is actually meant to work, where it's going to be used, and how it's going to -- you know, how you can leverage it. JASON: Yeah, for sure. Question from Linda.
As you use Cody more in a specific codebase, does it learn a bit and understand the structure better? ADO: So, the more code you have in a code repository, the better the context will be. So if you're kind of starting out like a brand new project and let's say it's a Next application, by default, you know, it's going to look at best conventions for an Astro or best conventions for a Next.js application. As you start adding more files, more pages, more of your utility helper functions or your implementation details, then I'll say, quote/unquote, it's going to learn your style of coding and what your application does and kind of tailor itself to that more so than kind of just general Next or Astro or whatever library framework you're using. So in that case, it's going to learn. JASON: But it's not like -- it starts with the context of whatever the embeddings are.
Those embeddings don't shift, except when you change the code. ADO: Exactly. JASON: So it's not like if you ask it the same question three times, the third answer would be better because it's got practice or whatever.
It's starting with the same baseline information every time. It's not like an evolving system or anything. ADO: Right.
At the moment, it is not. But who knows, maybe in the future it'll continuously evolve. You know, you can use different LLMs.
One of the things we're really standing behind is giving the user choice on which LLM they want to use. If you open up a new chat window, I can kind of show you how that works. So at the top, you see Claude 2.0 is our default LLM, but you can use 2.1, Claude Instant, GPT-3.5, or Mixtral. With any of these LLMs, based on which one you use, you're going to get a slightly different response. Some LLMs work better with code generation.
Others work better with explanations. Rather than locking you into -- you know, we think Claude is the best, so we want you to use Claude. But we want to give you the option of finding an LLM that's most suited for your use case.
As we move into the future, I'm sure there's going to be specific LLMs that are just trained on JavaScript or just trained on Go or Rust that are much, much better at dealing with Rust code or Go code or different sets of programming challenges, different sets of problems that you're trying to solve. So we don't want to lock you into an LLM. We want to give you the option of choosing the best LLM for the job that you're trying to do or solve. JASON: Got it. So for somebody who has no idea what this means, which would be me, like what is the difference here? Is there anything that I'm going to notice as a user other than the answers are a little bit different? ADO: As a user, the only thing you will notice that's different is the answer. Like if I ask a question and the answer isn't up to par, up to what I expect, usually I'll switch to a different LLM and see if I get a better answer, if I get a better solution.
If not, then I kind of try to rephrase my query or rephrase how I'm asking the question, because then it might be me. I might not be giving it enough context personally to give me a good enough answer. So yeah, at the moment, the difference between the LLMs is -- I don't want to say it's minimal, but it's kind of transparent to the user, you know, whether you're using GPT-4 or Claude 2.1.
The answers should still be fairly similar, but one thing I've heard our engineering team talk about is that Claude 2.0 does much better at giving you kind of those textual responses. GPT-4 does a little better at code generation. If you ask it to write a piece of code or update a function to do XYZ, the code generation aspects are going to be a little better with GPT-4 at the moment. At least that's what we found. JASON: You know, I think it's funny because what you were just talking about with phrasing the question or whatever, and Linda is saying here you have to learn to phrase your prompts well.
My favorite thing is that all of engineering has been this struggle toward eliminating meetings, eliminating the need to talk to other people. And in the effort to do that, what we've made the pinnacle skill is good communication. I find that very ironically heartwarming. (Laughter) ADO: I couldn't agree more.
JASON: All it took was robots to teach the nerds to finally talk about communicating clearly. ADO: But at the moment, it definitely is -- you know, you have to ask the right prompt. You have to ask the right question to get the right response. I'm hoping in the future, you know, as somebody that loves a good user experience, like I don't want to put that on the user.
I don't want to tell you what question you have to ask or how you have to phrase it to get a response. You know, we should be -- JASON: I mean, I think no matter what, we're never going to make AI into mind readers, right. So I think that this -- it does point to the idea that for us to be able to ask questions that will get us to answers, we're still going to have to understand what's happening. You're never going to be able to say, like, hey, AI, make me a profitable business and it's just going to go and do that. Then later you can say, AI, we're profitable, but we need more profit. Make it more profitable.
And it's just going to do that. You're still going to have to understand things. You're going to have to know how things work. You have to ask clear questions and provide clear direction.
You're not replacing your need to think or communicate. You're just changing who you're thinking and communicating with. If you were the employer of a bunch of employees, you would also have to think about what you want done, understand the space that you're trying to solve, and then clearly communicate to your employees exactly what you want them to do and what the parameters are for success. Taking that into the context of AI, it's going to be exactly the same. You still have to know what you want, have a clear vision of what the outcome looks like, and communicate that in a way that a robot can interpret and deliver on, with a clear definition of done.
I think as with all solutions, this isn't a magic bullet, and the hardest problem with all of technology is being able to clearly articulate a vision of what you want to happen. It doesn't matter what the tool is, right. So for anybody who's thinking, ah, finally, I'll be able to work without all these people who keep me from getting stuff done, surprise, now you've got a slightly dumber person in the form of an AI that needs much clearer instructions for you to do the thing you want done. So remember, nerds, you still got to talk. Still got to think.
Still got to communicate. Still got to clarify. ADO: Yeah, and that's -- you know, I think that's a good segue into kind of trying out another feature of Cody, which is editing code. Instead of doing the autocomplete, one of the ways I find myself editing code a lot is I will highlight a function and run the Cody edit command, which will basically open up a prompt window, asking what you want to change about this code. So if you have a function that you're dying to update or change in any way, this is a pretty cool demo.
So with the edit command -- and this is kind of what you were talking about, giving it clear and concise commands, you'll get a much better response. But here, you just provide the instructions for it. The better the instructions, the better the update.
But you can even be vague, and it'll still try to infer what you're trying to do. JASON: So update this code to use valid TypeScript that will pass strict type checks. Because I didn't write this to be TypeScript.
This is pretty straightforward. Ahhh, it wrote bad code. ADO: So, let's see.
JASON: So this was right, but then it also added the types here, which aliased them all and got duplicate declarations and everything. Then down here, these will be never declared. ADO: Uh-huh. I see it. So you could try -- JASON: This is what we would need in order for that to work. But now it is valid TypeScript.
So it got close. Again, uncanny valley. ADO: But that's the thing.
You are a TypeScript developer. You've worked with TypeScript. You can very easily see what's wrong and fix it. But as a new developer, like if I was brand new to TypeScript and I asked it to do that and it gave me these errors, I would be like, well, what am I doing wrong? Where do I go next? Like, what is happening here? JASON: Let's follow up. Why doesn't this code work? ADO: Let's see.
JASON: Nope. So this missed entirely. Like, not even close. Didn't get anywhere near the types, which are the only problem with this code. ADO: Yep, I see it. I wonder if you ask it -- and I was going to ask previously, if you undo what you did with the code and re-run that edit function to make it valid TypeScript code, there is the option to retry.
It gives you multiple options. JASON: Okay. ADO: So I'd be curious to see if it would eventually get it right. JASON: Okay. Pass a strict type check. It's also doing something super weird with my white space.
You see this? It's like mixing. ADO: Uh-huh. Tabs and spaces it looks like.
JASON: So it missed this. Let's retry. Make sure the code actually works.
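The failure mode described above -- types that "aliased them all and got duplicate declarations" -- is most likely the classic destructuring mistake sketched below. This is a reconstruction; the actual generated code isn't shown in the transcript. Putting type names inside a destructuring pattern renames the properties instead of typing them:

```typescript
// Wrong: `amount: number` in a destructuring pattern *renames* the property
// to a local variable called `number` -- it does not add a type annotation.
// Two properties "typed" this way declare `number` twice (a duplicate
// declaration), and the original names are never declared at all:
//
// function addTaxWrong({ amount: number, tax: number }) {
//   return amount + tax; // error: `amount` and `tax` don't exist here
// }

// Right: annotate the whole parameter object after the pattern.
function addTax({ amount, tax }: { amount: number; tax: number }): number {
  return amount + tax;
}
```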
ADO: I read a -- what was it. Maybe Mashable. They published an article last week saying that if you offer to tip your code AI assistant in the prompt, it tends to get the answers right more of the time. So another way the tipping culture has gotten completely out of hand.
JASON: Geez. When it gets a working Venmo, that's when I'm going to be really concerned. It just pops up a QR code. Okay. So apparently insulting it worked. (Laughter) Okay.
So it did the thing, right? That is good. As far as I can tell, it didn't -- did you change something else? Did I change that? Does this -- interesting. I don't know if this was already changed or not. Either way, this is cool. This did what I expected it to do. Good that the second try worked.
I guess telling it to make sure the code works is good. This is useful. Again, I think it does highlight pretty quickly that this isn't the sort of thing that if you don't understand your code, you can just set this loose and roll. You're still going to have to understand how this works, at least for the near future. Like, these are going to continue to improve. The work that's being done now is significantly more impressive than the coding work that I saw done a year and a half ago.
So I can't imagine it's not going to look entirely different in another year and a half. But today, you need to be the -- you know, it's just like the automatic driving. If you let the car drive itself, you're going to kill somebody. So you have to be in the driver's seat and attentive the whole time.
So I think this is the same thing. It is cool. I like that it at least got me directionally where I wanted to go, right. I knew how to do that.
It's cool that it typed it for me. I can see that being really useful. Actually, what if I want to do a new function? So let me think of an example of something that I write all the time.
I need to write a utility function that will take a number out of a database and format it as currency. If I want to have Cody generate that for me, do I do that by writing a comment? Do I do that in the chat? What's the right way to go about this? ADO: So you could do it both ways. One way you could do it is you can open up a new chat window and kind of give it instructions. Say, hey, this is what I want.
This is what I need. Or what you can do is add a comment or even just start writing the function in your open file. You know, give it the function parameters and see if Cody can infer from the function name. JASON: Let's go amount and currency code. Do I have to tell it to do something? ADO: Nope.
Usually you just wait a second, and the autocomplete should -- if you hit enter then just wait like half a second, it should start -- JASON: It's entirely possible I manually disabled this. Where would I find this in the settings? ADO: So in the Cody settings, if you open up the Cody settings button and then code autocomplete. So now if you -- there we go. JASON: Oh, whoops. Oh, wait.
You were almost there. What happened? ADO: I think you hit a button to cycle through the options. So if you just redo it, it'll -- maybe one more new line. Just so it's -- JASON: That'll do it.
So then we would just need to refactor this and say, um, what did I want to do? Edit the code. Refactor this into valid TypeScript. Make sure the code actually works. That's going to become my superstitious addition to every code prompt. ADO: You can actually, in the settings, there's an option to add a pre-text or text after each prompt that's automatically injected. So maybe we could just create a Jason mode where it adds a string that says make sure it actually works.
JASON: Just a light insult to the computer every time you use it. Oh, somebody asked a question about my sweater. This sweater was a gift from the Slack team. It's fun. I'm happy with it.
Thanks, Slack. Okay. So there we go.
It took a little bit of work. Again, it's not going to do your job for you. You still got to pay attention. But this is pretty slick. It wrote the code for us. It refactored the code into TypeScript.
And for a utility function like this, I've written this code a thousand times. I don't need to write it myself. It's pretty great to have the robot remember, like, all of these bits and the specific names of things.
That's helpful. ADO: I would be curious, if you delete that function in its entirety and just add a comment saying create a function, a TypeScript function, that converts currency and just hit enter and give it a second. Will it give you the full solution? Maybe just start with export. Like typing export.
JASON: Pretty close, yeah. Same. ADO: Okay. So instead of allowing you to select the type of currency, it just defaulted to USD. JASON: There, that does it.
So I just had to add that the currency was passed in. So, you know, again, clear communication is going to be the secret to all of these. ADO: Yeah, that's what I was going to say. I think the more direct -- and I guess that also begs the question of if I spend as much time clearly communicating what I want, is it faster for me to type it out in natural language and let the computer autocomplete it for me? Or if I know what I'm doing and it's a function like this, is it faster for me to just type out the code? I guess that's a question to be -- it's like a personal choice. JASON: Yeah, and I think it definitely depends on where you are in your codebase and whether or not you remember the exact name of the thing.
One thing that gets me frequently is that on something like this, I always forget the parent object is called Intl. I just forget that it's there. So I know what it is, but I end up Googling "format currency." Then I find Intl and remember how it works.
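For reference, the utility being generated on stream comes out roughly like this. The names formatCurrency, amount, and currencyCode are the ones used in the demo; the exact generated code isn't shown in the transcript:

```typescript
// Format a raw number from the database as a localized currency string.
export function formatCurrency(amount: number, currencyCode: string): string {
  return new Intl.NumberFormat('en-US', {
    style: 'currency',
    currency: currencyCode,
  }).format(amount);
}

formatCurrency(1999.5, 'USD'); // "$1,999.50"
formatCurrency(1999.5, 'EUR'); // "€1,999.50"
```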
But it's not like -- you know, it doesn't stick in my memory. In that case, this was faster than me having to quickly Google what the name of this class is. And that -- so these are places where I can see this being really useful, even if what this spits out is not exactly what I want. This was time saving.
This saved me 30 seconds.