3 Python AI Projects for Beginners - Full Tutorial

Today you'll learn how to build three Python AI projects that are designed for beginners. I'll be walking you through each step by step, and all you need is some basic Python experience to understand and complete them. Now, these three projects are going to be the following: number one, an AI agent; number two, a resume critiquer; and number three, an image classifier. To

build these, you'll learn about LangGraph, LangChain, Streamlit, TensorFlow, OpenCV, and more. And by the end of the video, you'll have three great projects that you can adapt and build upon. Now, all of the code for these projects and the timestamps for them are down below. And with that said, let's go ahead and get started after a quick word from the sponsor of today's video, Recraft. Recraft is one of the most advanced design tools that I've ever used. It's a platform made for

professional designers, marketers, and creative teams, and it just got a huge upgrade. Now, what makes Recraft different is that it gives you creative control. You're not locked into templates or left wrestling with prompts. You get tools that scale your design process. Now, the new infinite style library is honestly one of the coolest parts. You can explore thousands of curated styles, illustrations, icons, 3D art, and even photorealistic images, and instantly apply them. Think of it

like a design inspiration engine, but fully usable. Now, you can save, reuse, and even mix these styles together. And the style mixing tool lets you blend reference images or style presets and actually adjust the influence of each one. It's like having Photoshop layers

for aesthetics. Now, this is a game changer for brand consistency. You can lock in your own custom style and then apply it across all of your assets, no matter what you're making. And this isn't just about image generation. Recraft comes with a full toolkit. Background remover,

upscaler, mockup generator, infinite canvas, and live collaboration with your team. It's no surprise that it's trusted by designers at Netflix, Auglav, and HubSpot. Now, if you want to try it, hit the link below and use the code Tim11 to get $11 off any paid plan. Thanks to Recraft. Make sure to check them out. So, I'm on the computer now, and I'm going to give you a quick demo of the three projects. Again, you can choose any one that you want to complete by looking at the timestamps down below, or you can do all of them. So, the first

one here is a simple AI agent. This is kind of just like a chatbot that has access to some tools. So we can say something like, you know, "hello world" here. Give this a second. "Hello, how can I assist you today?" I'm going to say I need help with math, because this chatbot, for example, has some tools related to doing math operations. So I know this seems really basic, but I can do something like "can you add 674 and 100", and then it should be able to give us the answer here, which is 774. I know

this seems super basic, but we actually give this agent access to a calculator. So, it can go and do these math operations, which LLMs typically are actually very bad at doing. By doing this project, you're going to learn how to give agents access to various tools. So, while this is simple, by building this out, you'll be able to build something much more complicated. Great. Let's move on to the next one. So, the

next project here is an AI resume critiquer. Right. Now, this allows us to upload a resume using this nice kind of Streamlit user interface which you see here, which obviously I'll show you how to build. We can then put in the type of role that this resume is for. So, in this

case, it's a software engineer resume. I just uploaded one here. And then we can press analyze. We can wait a second. It will actually read in the PDF, pass it to an LLM, and then it will give us some response. So, let's wait for that here. Okay, so you can see here that we have a bunch of analysis on the resume saying, hey, you know, this is what you could do better, here are recommendations, here's what I like, here's what I don't like, etc. And there you go. Let's move on to the next one here, which is an AI image classifier. All right, so this one allows you to upload any image that you want. You can see I just uploaded this one of a dog, and then you can press on classify image, and it goes here and tells you the classification. So it says, you know, 79% chance it's a beagle, 12% chance it's this breed I don't recognize, etc. Now, this doesn't just work for

dogs. This actually works for any image. You could use a car, you could use a cat, you can use a photo of you, whatever. As long as the model has seen something like this before, it should be able to give you a somewhat accurate classification. And we'll talk more about that when we get into that project. So, anyways, those are the three projects. Now, let's start

building them out. All right, so let's go ahead and get started here. Now, what I've done is I've opened up Visual Studio Code, and I've just opened a new folder. Now, you can work in any code editor that you want, but first step, just open up some folder where we'll put all of the code for this project. Now, what we're going to do here is we're going to use something called UV to install the various dependencies for our project. If you've ever worked with Python before, you may have heard of something called pip. Rather than using

pip, we're using this new tool called UV. So, I just want to make you aware of that because you do need to install this. So, if you don't already have UV, you can simply install it from the link that I'll leave in the description. And if you're having any issues with UV, I'm going to leave a link to this video here that I posted recently which explains how to use UV, how to install stuff with UV, etc. Okay, so just make sure that you install UV. It's pretty simple. If you go to installation here, you can see that there's commands for Mac and for Windows. Run these commands and then you

should have access to the uv command, which we're going to use here in 1 second. Okay. So, what we're going to do now is we're just going to make another folder inside of here. I'm going to call this project one, and I'm going to go into the project one directory. So I'm going to say cd into project one. So now

I'm going to start typing my commands here while I kind of initialize the dependencies, or install the dependencies, for the first project, which is going to be our AI agent. So what I'm going to do is, inside of this directory, I'm going to type the command uv init and then dot. This means initialize a new uv project inside of this particular directory, which is project one. So I'm going to go ahead and press enter. And then if you open this up here, you're going to see that you get a bunch of different files. We have a .gitignore, a .python-version, a pyproject.toml, and this readme file. We don't

need the readme file. So we can go ahead and delete that one. And then we can write our Python code inside of here in this main.py file. Okay. So I'm going to

clear out the main.py file. And now I'm going to install the dependencies that we need. Now like I said, what we're going to be building here is a simple AI agent. In order to do that, we need the

following. So, we're going to type uv add and then langgraph. Okay. Then we're going to bring in langchain. Then we're going to bring in python-dotenv. And then we're going to bring in one more here, which is langchain-openai. Okay. So we need these four packages. So go ahead and

press enter. And then you should install all of them. And you'll notice when you use uv, this is very fast. So again: langgraph, langchain, python-dotenv, and langchain-openai. And I'll talk about how we use all of these in just 1 minute. So now that we have those

installed, we can start writing our code. However, we are going to need an OpenAI API key, because what we're going to be using for these kind of AI projects is an LLM. Right? Now, an LLM is something like GPT-4 or Claude 3.7 or

Grok or DeepSeek. You can use any one that you want, but the simplest one to set up, and the one that most of you are probably familiar with, is the OpenAI API or like the GPT type LLMs. So, that's the one that we're going to use here. Now, in order to use this LLM, we're going to have to use an API key, which we're going to get from the OpenAI website. So, we're going to go into our project here, and we're going to make a new file called .env, which is where environment variables go. Inside of here,

we're going to put our kind of secret key. So we're going to make a variable here and we're going to call this OPENAI_API_KEY in all capitals. And you need to make sure that you type it exactly like this. Okay? So make the .env file, make the variable, and then here we're going to paste that API key, which I'm going to show you how to get. So to

get the API key, we're going to go to platform.openai.com/api-keys. Now, in order for this to work, you need to have an OpenAI account and you do need to have some credit card on file. Now, this is just going to cost you like a cent or two cents. In fact, sometimes it's even

free depending on how much you use it. So, it's not like it's a very expensive thing, but in order to use the API, you do need to have an account. And again, I believe you need some credit card on file. So, let me sign in here and I'll show you how to get the key. Okay. So, once you're signed in and you go to this site, you should just be on this page where it says create a new secret key. So, I'm going to make a new one. I'm

just going to call this, you know, AI project or something. Put it in my default project and make the key. Obviously, you don't want to share this with anyone. So make sure you hide it. I will delete this one after the video. So what I'm going to do now is I'm going to paste this inside of here. Okay. So

where it says OPENAI_API_KEY, I'm going to paste in this variable. And now we will be able to use this from our main.py file. Okay. So from main.py, we're going to start writing some code now. So we're going to start here by saying from langchain_core.messages we're going to import this HumanMessage. Okay.

We're then going to say from langchain_openai we're going to import the ChatOpenAI class. Okay, that's because we're going to use this to connect to OpenAI and kind of have our LLM running. We're going to say from langchain.tools import tool, in lowercase. All right, we're going to use this to register a tool that our AI agent can use. We're then going to say from langgraph.prebuilt we're going to import create_react_agent, like that. Okay. And I think

that I spelled this correctly. It should be fine. Okay. Then we're going to say from dotenv import load_dotenv. Okay. So let me

quickly explain these imports here. Now, LangChain is kind of like a high-level framework that allows us to build AI applications. LangGraph is a little bit more of a complex framework that allows us to build AI agents, which I'm going to define in one second. And then langchain-openai obviously allows us to use OpenAI within LangChain and LangGraph, which are kind of coupled together. Then we have this dotenv module,

and this dotenv module allows us to load these environment variable files from within our Python script. So that's why we're bringing that in. Now, like I said, we're building an AI agent. The

difference between just like a chatbot and an AI agent is that an AI agent has access to tools. So what I'm going to show you here is how to make a simple Python tool that the agent can use, so that it can go out and do something beyond just simply responding to your message. So in this case, we'll make a really simple tool, which is a calculator, but we could make any kind of tool that we want: a tool that saves a file, a tool that calls an API, a tool that turns on or off your lights. Like, whatever you want to do, you can make a tool here. And that's the point of me showing you this project.
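To make that idea concrete before we write any LangGraph code, here's a framework-free sketch of what "giving an agent a tool" amounts to: a plain Python function plus a lookup by name. In the real project the LLM does the selecting via LangGraph; the dictionary lookup and the two toy tools here are stand-ins, not part of the tutorial's actual code.

```python
def shout(text: str) -> str:
    """Useful for converting text to uppercase."""
    return text.upper()

def reverse(text: str) -> str:
    """Useful for reversing text."""
    return text[::-1]

# The agent's "toolbox": tool name -> function. With LangGraph, the LLM
# picks which tool to call; here we hardcode the choice to show the mechanics.
tools = {"shout": shout, "reverse": reverse}
chosen, argument = "shout", "turn the lights on"
print(tools[chosen](argument))  # TURN THE LIGHTS ON
```

The point is just that a "tool" is ordinary Python; the framework's job is deciding when to call it and with what arguments.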

So, let's get into it now. So, first thing we're going to do is we're going to call this load_dotenv function. Now, again, what this does is it just looks in the current directory for this .env file and then it loads in the variables so that we can use them later on in our program. Okay. Next, we're going to make a function. So we're going to say def main, and inside of here we're going to initialize kind of our chatbot and our AI agent. So for our AI agent, we're

going to have an LLM that kind of acts like the brain. Now, in this case, we're using OpenAI. So we're going to say model is equal to ChatOpenAI, like that. And then we can pass the temperature equal to zero. Now, the higher the temperature you go here, kind of the more random the model is going to be. When you go temperature zero, it

just means that you don't really want any randomness at all. Okay. Next, we're going to say tools is equal to an empty list. And that's because in a second, we're going to be able to fill this with tools that our agent can use. And then we're going to create an agent. And to

do that, we're going to say our agent_executor is equal to create_react_agent, the thing that we imported. And we're going to pass in our model and our tools. Now,

what we're doing here is we're using this create_react_agent from LangGraph, which is a pre-built agent that just takes some kind of model and some kind of tools and automatically handles kind of how to use these tools and how to utilize the model. So, we're not getting too complex here. LangGraph allows us to build some really complex stuff, but we can just bring in this kind of pre-built agent framework. We can just plug into it a model, plug into it our tools, and then we can start using it. So, that's literally all that we're doing here. Okay. Now, beneath this, I'm just

going to copy in some print statements to save us a little bit of time. But you can see here that I'm just printing, you know, welcome. I'm your AI assistant. Type quit to exit. You can ask me to

perform calculations or chat with me. Okay. Now, you can type anything that you want here. And if you want to copy anything that I write, you can find it from the link in the description. I'll leave like a little GitHub link that has all of the code for the project. All right. So, after this, what we're going

to do is we're going to set up a while loop. Now, the idea here is that we want to keep looping and allow the user to kind of interact with this like a chatbot. So they can just keep asking questions and then our agent can keep responding. So

first thing we're going to do is we're going to collect the user input. So we're going to say the user_input is equal to input, and then we'll just say "You:" like this, and then we're going to call .strip(). The .strip() is just going to remove any of the leading or trailing whitespace. So if you type something like

this, then it will just give us this without the whitespace at the beginning or the whitespace at the end. Okay, that's all the .strip() does. Now, we're also just going to put a \n before the "You" just so that it goes down to the next line, so that we have a little bit of separation between our user input and whatever the kind of agent is saying to us. Now, what we're going to do is we're going to check the user input, and we're going to say if the user_input is equal to quit, okay, because that's what we put right here, then we're going to exit out of the while loop. So to exit out, we can simply just

type break. Okay. So if they type quit then we will get out of this and we'll end the program. All right. Otherwise if they don't type quit then we want to use our AI agent. So to do that we're going

to do a print statement. We're going to put a \n. We're going to say "Assistant:" and then end is equal to an empty string. Now, what I'm doing right here is I'm saying, okay, the assistant is about to start responding to us. We're going to end with nothing.

Now let me explain how this works. When you use a print statement in Python, by default it's going to print this \n character at the end of whatever is printed. That's what makes the terminal go down to the next line. So what I'm doing is I'm overriding that in this print statement. I'm saying I don't want to do that. I don't want to put

this at the end of the line, so that when we start printing something else, we can print it kind of after this "Assistant:", not on the next line. Hopefully that makes sense. But we're just making sure that we're not kind of putting this newline character, so we can print all of the stuff the assistant says right after we say "Assistant:". Okay. Then we're going to say for chunk in the agent_executor. Okay, bear with me here for one second. And we're going to put a

dictionary. We're going to put messages colon, and then we're going to say HumanMessage, and we're going to say the content is equal to the user_input. Now, I know this seems a little bit weird, but essentially what's going on here is we're able to stream the response from our LLM using this agent executor. Okay, so the agent executor is like our agent, right? If we want to actually use the agent, we need to kind of call it or give it some input. So we say, okay, we

want to stream whatever the output is based on our input. So we say .stream(), and then the way that this works here is we need to pass in the messages to stream to our agent. So we say, okay, here are the messages, and then we just have one message that we want it to respond to, and this is a HumanMessage, right, coming from us, rather than something like a system message. And the human message says, hey, you know, the content is this, okay, which is whatever the user typed in. So that's just how you kind of call this agent. All right. So, now that we've called that, we're going to put a colon, and then inside of the for loop, we're going to do the following. So, we're going to say if "agent" is in chunk, okay, and "messages" is in chunk["agent"], then we're going to move to the next line.

Now, what I'm essentially saying here is, okay, I'm going to loop through all of the chunks. The chunks are essentially parts of a response coming from our agent. So, we're looping through all of those. We're going to check to see, okay, is there an agent response, or like, is the current chunk that we're looking at, you know, a response from the agent? If it is, we're

going to check if there's any messages in that particular response. That's what this does right here. And then if there is, what we're going to do is we're going to loop through them. So we're going to say for message in chunk, and then we're going to say ["agent"] and then ["messages"], like that. Then

what we're going to do is we're going to print the message.content, and we're going to say end is equal to an empty string. Now, why are we getting this error? It's because I typed "from" when I meant to say "for"; that's the line that I meant to write. Okay. So

essentially what we're doing is we're looking at all of the chunks that we're getting from our agent executor. We're making sure that it comes from the agent. Okay. So if it comes from the agent and there's some messages, we're going to grab all of those messages. So

loop through them, and we're going to print out all of the message content, and then we're going to end with this empty string so that we don't go to the next line. What this is going to do is allow us to stream these longer responses from the agent, so it looks like the agent's kind of typing word by word rather than just printing the entire message at once. We can kind of stream the response. So like how you see kind of ChatGPT responding to you and you get it typing word by word, that's what we'll see here as

well. Now, after we do this, we're just going to put a print statement so that we'll kind of print an empty line, and then what we can do is we can say if __name__ is equal to "__main__", then call the main function. Okay, now you're probably wondering, what the heck is this? This is just a simple convention in Python that you use whenever you want to execute kind of a main function. All it says is that we're only going to call this function right here if we execute this Python file directly, rather than if the Python file were to be, say, imported from some other file. Don't worry too much about it, but this is all you need. Say if this, then we're going to call this main function.
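The convention just described is easiest to see in a tiny standalone file. This is a sketch: the body of main here is just a placeholder, not the real chat loop.

```python
def main():
    # Placeholder body; in the real project this would run the chat loop.
    return "chat loop would run here"

if __name__ == "__main__":
    # True only when this file is executed directly (e.g. python main.py),
    # not when it is imported from another module.
    main()
```

If another file did `import main`, the guard would be False and nothing would run on import, which is exactly why the convention exists.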

This is the main function right here that we just wrote. In the main function, we initialize a model. Now, the reason why this model will work is because we've loaded in the environment variable, which is our OpenAI API key.
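As a rough sketch of what's happening under the hood: load_dotenv() reads KEY=VALUE lines from the .env file into the process environment, and the OpenAI client then looks the key up from there. The placeholder key below is an assumption for illustration only, set by hand instead of being read from a real .env file.

```python
import os

# What load_dotenv() effectively does: put the .env entries into the
# process environment. Here we set a fake placeholder value directly.
os.environ["OPENAI_API_KEY"] = "sk-placeholder-not-a-real-key"

# ChatOpenAI then finds the key by looking it up, roughly like this:
key = os.environ.get("OPENAI_API_KEY")
print(key is not None)  # True
```

If the variable is missing or misspelled (remember, it must be exactly OPENAI_API_KEY), the lookup returns None and the model will fail to authenticate.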

This key needs to exist in the environment variable in order for this model to load correctly. Okay, so we load chat openai. We then have our list of tools, which I haven't yet filled, but we'll do that in one second. We then create an agent executor with our model and our tools. We just print some basic

messages, and then we go into this while loop where we essentially ask the user for some input. We then send that user input to the agent. We take the response from the agent, we kind of stream it out, and we print it to the console. So, we're going to look at the tools in one second, but for now, I want to run the script and make sure that it works. So, in order to do that, we're going to open up our terminal. We're going to make

sure we're in the correct directory. In this case, I'm in project one. Okay? And then we're going to type uv run and then main.py. Now, you need to do this. You need to use uv run because we've installed all the dependencies with uv. So, when we do this, it will make sure that we're using the correct Python interpreter and that all of the dependencies are kind of initialized for this script. All right. So, this is how uv works

again: uv run main.py. We're going to go ahead and hit enter. We're going to wait 1 second here. And then it says, okay,

welcome. I'm your assistant. You know, you can type to us. I'm going to say, you know, who created the world or something. And let's see what it says for that. And then wait one second. It

says assistant. And you see that we get the response. So, the concept of who created the world varies blah blah blah blah blah. Okay. And then if we type quit, it should exit. And there you go.
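The whole loop we just ran can also be mimicked offline. This is a hedged sketch of the loop's shape only: a canned list of inputs stands in for input(), and a fake generator stands in for agent_executor.stream(), so it runs without a terminal prompt or an API key. The chunk shape {"agent": {"messages": [...]}} mirrors what the transcript's loop checks for.

```python
class Message:
    """Stands in for a LangChain message; only .content matters here."""
    def __init__(self, content):
        self.content = content

# Fake replacements so the loop can run non-interactively.
fake_inputs = iter(["  hello  ", "quit"])

def fake_stream(user_input):
    # Yields chunks shaped like LangGraph's: {"agent": {"messages": [...]}}
    yield {"agent": {"messages": [Message("Hi "), Message("there!")]}}

transcript = []
while True:
    user_input = next(fake_inputs).strip()   # .strip() drops the spaces
    if user_input == "quit":
        break
    reply = ""
    for chunk in fake_stream(user_input):
        if "agent" in chunk and "messages" in chunk["agent"]:
            for message in chunk["agent"]["messages"]:
                reply += message.content     # print(..., end="") in the real loop
    transcript.append((user_input, reply))

print(transcript)  # [('hello', 'Hi there!')]
```

Swapping fake_stream for the real agent_executor.stream call (and input() for the fake inputs) gives you the loop from the project.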

You see that we exit. Okay. Perfect. So that is working. Now, though, we want to add a tool that our agent can use. Now, a tool

is just some external service that the agent can call out to and, well, utilize. So, in order to create a tool, it's actually quite simple. To do that, you write this @tool. Okay, this is called a decorator in Python. You then can define a function. So in this case, we can call this, you know, calculator. We then take in some

parameters. So I'm going to take in a and b. And for the parameters, you should define a type for them. So you can do this thing in Python called a type hint where you put a colon and then you specify the type that you want the parameter to be. So in this case, it can

be a float, but we could also do something like a string or an int or a boolean, right? So we can specify whatever we want the type to be. And then we're going to do another type hint which specifies what we're going to return. So I'm going to return a string from this. Okay? So we said we're taking

in a float. We're taking in another float called b. And then we're going to return some string. And then you should

specify what's known as a docstring. A docstring is essentially a description of the function, or in this case a description of the tool, so that our LLM or our agent knows when to use this tool. So it's going to look at the name of the tool, which is the name of the function, the parameters and the return type, as well as this description. So for the description, I'm going to say this is useful for performing basic arithmetic calculations with floats or with numbers. Okay, so we're just describing the tool so the agent knows when to use it. And then we can say return, and I'm just going to return an f-string, and I'm going to say the sum of a and b is, and then a + b. Okay, so this

is a really simple calculator, but all this is doing is just adding two numbers together and then returning a string saying, you know, the sum of a and b is whatever the sum of a and b is. So this is one very simple tool. We can now use this tool by just putting the name of the function inside of this list. So we

specify the tool, we put the name of the function here, and now we will have access to this tool. And if we want to see if it's working, let's just do a print statement here and say tool has, you know, been called so that we can see if the tool is being called by our agent. So now our agent has access to this. So if we go back here and we rerun

our code. We should be able to use this. So I can say, can you add, you know, 7 and 9? Okay, let's give this a second. It says tool has been called, and it says the sum of 7 and 9 is 16. So the agent

will determine whether it needs to utilize the tool or not. So, for example, if I say hello, it's not going to use the tool, right, because there's no need to use it. But if I ask it for some arithmetic operation, then it will determine, you know, we need to use this tool; it will go out and it will use it. Now, if you wanted to make another tool, you could just copy something like this. So, you could say tool, and you could say, you know, say_hello or something. And we're going to take in maybe some name, and we're going to say useful for greeting a user. Okay. And then what

we're going to do is say, you know, tool has been called. And then rather than this, we'll just return "Hello {name}, I hope you are well today." Okay. Then we could put the tool right here. So we could say say_hello. And now our agent has access to two tools that it can use.
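To see what the agent actually "reads" from a tool (the name, the parameters and return type, and the docstring, as described above), here's a plain-Python version of the calculator with no @tool decorator. It's a sketch for inspection only; the decorator packages this same metadata up for the LLM.

```python
def calculator(a: float, b: float) -> str:
    """Useful for performing basic arithmetic calculations with numbers."""
    return f"The sum of {a} and {b} is {a + b}"

# Roughly the metadata the @tool decorator exposes to the agent:
print(calculator.__name__)         # calculator
print(calculator.__doc__)          # Useful for performing basic arithmetic...
print(calculator.__annotations__)  # {'a': <class 'float'>, 'b': <class 'float'>, 'return': <class 'str'>}
print(calculator(7.0, 9.0))        # The sum of 7.0 and 9.0 is 16.0
```

This is why the docstring matters so much: along with the type hints, it's the tool's entire "job description" as far as the LLM is concerned.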

Again, like, they're kind of useless, but I'm just showing you quick examples. So if I rerun this now, I can do something like, you know, greet Tim. And then it says tool has been called, and it says I have greeted Tim. Okay, that's kind of weird. It should have given us the

response, I guess. But either way, it did call the tool. I guess it just didn't really find this, you know, response useful. So, it didn't give it to us. The point is though that you can

make all these tools, which are Python functions that can do literally anything that you want, pass them to the AI agent and make some really cool applications. So, that was the point of what I wanted to show you here. This is how you set up a basic AI agent. You give it access to a few different tools. You can kind of

see if the tool is running or not. Now, let's move on to the next project, which is going to be that resume critiquer. So, we're moving on now to project number two, which is the resume critiquer. Now, for this project, we're actually going to make a user interface. The user interface is going to be made with something called Streamlit. Now,

Streamlit is a really easy library to use for making kind of basic web applications with Python. I'll walk you through it step by step, but you'll see here that it's very simple to utilize and is great for working with AI type applications. Now, what I've done is I've made a new folder here called project 2. So let's get into that folder. So I'm going to cd .. and cd into project 2 from my terminal. From here, again, I'm going to type uv init and then dot to make a new uv project. So let's go ahead

and do that. And then we're going to install the dependencies that we need. Now, the dependencies we need for this one are going to be streamlit. Okay. So let's install that. We're going to need openai because we're going to use OpenAI again for kind of an LLM for analyzing the resume. We're going to install PyPDF2. This is for loading our PDFs. And

then we're going to have python-dotenv. Okay. So, we're going to go ahead and press enter. It's going to install all of the stuff that we need. And now we're good to go. All right. So,

let's go into project 2 here. And actually, I'm just going to start by copying my environment variable file from the last project and pasting it in here. Now, if you didn't complete the last project, what you need to do in order for us to work with OpenAI and to have this kind of LLM working is you need to create this variable inside of a .env file called OPENAI_API_KEY. Okay, so that's what we've done right here. You then need to paste your OpenAI API key inside of this file as the variable so that we can utilize this OpenAI LLM or like this ChatGPT LLM. All right. Now, in order to

get the API key, again, what you're going to do is go to platform.openai.com/api-keys. I will leave the link below. You will need to have an account and I believe a credit card on file in order to do this, but it is very inexpensive, if not just completely free. It might cost you like

1 cent or something to use this. So, don't worry about the cost. It's more so you just need to have the credit card on file to be able to use this. Okay. So, now we have the environment variable file. We're going to open up our main.py file. We're going to clear this out and

we're going to start writing some code. Now, like I said, we're going to use Streamlit for the user interface. So, bear with me if there are some things you haven't seen before, and I'll show you how Streamlit works. I also have an entire Streamlit tutorial series, or tutorial, I guess, which I'll put on screen now and you can check out. All right, so I'm going to say import streamlit as st. I'm going to say import

PyPDF2. Again, we're going to use this in a sec for loading in our PDF files, or our resumes. We're going to say import io. We're going to say

import os. We're going to say from openai import OpenAI. And then we're going to say from dotenv import load_dotenv. Okay. Now, first thing we're going to do is call this load_dotenv function. What this is going to do is load the environment variable from our .env file so that we can utilize it from within this Python script. Now, I'm going to start typing a few things related to Streamlit. I'm

going to show you how we run the Streamlit application, and then all of this will make sense. Okay, so first things first, we're going to say st (this is streamlit, right?) and we're going to say set_page_config. Okay. Now, this allows us to configure kind of the name of our tab or of our page, like you would in HTML, for example. So, we can say the

page_title is equal to, and then we can say AI Resume Critiquer, or whatever we want to call it. So, this will be like the name of our website, effectively; that's what we're setting. Then we can do something like set the page icon. So we can say page_icon is equal to (this is like the favicon) and we can put in an emoji.

So I'm going to go with something like the page emoji. Okay. So I'll put that in here. And then we can say the layout is equal to, and for our layout here, we're just going to go with centered, which means we're going to put everything in the center of the screen. You don't need to put this line here, but it just kind of configures a few things to make the page look a little bit nicer. Now, the way that Streamlit works is that we can effectively write anything that we want onto this page, and Streamlit will automatically handle rendering it correctly. So we can write a graph, a data frame, some text, a title, some markdown, pretty much anything that we want, and Streamlit will just automatically handle writing it, or we can tell Streamlit the way we want it to be written. So what I'm going to do is I'm going to say st.title, and all this means is we're

just going to write a title onto the screen. Okay. So, for the title, I'm just going to say this is AI ré critique like that. Okay. Then beneath that, I'm going to say st.mmarkdown. And here, I'm just going

to put some text. I'm just going to copy it in to save us a little bit of time to kind of tell the user what they need to do. So, I'm going to say upload your resume and get AI powered feedback tailored to your needs. Okay. So I'm

saying I want to write some title text which is this. And I want to write some markdown text which is just this text right here. That's it. Okay. So I want to show you how we run this before we go any further. So you can kind of see what it looks like and how the writing kind of works with Streamlit. So when we have this Streamlit application in Python, the way that we run it is the following. We need to run the command which is Streamllet run and then the name of our file which in this case is main. py.

When we do this, it's going to use Streamlit to load in this Python file and render the user interface for us in our web browser. But because we're using uv, we need to use the command uv run streamlit run main.py. So whenever you're using uv, you always prefix the command with uv run.

So we're going to run uv run streamlit run main.py. It's uv running the streamlit command, which is running our Python file. Hopefully that makes sense; because we're using uv, that's how we get this going. So we're going to go ahead and press enter. When we do that, you'll see that it opens up a web browser, and you should just get the text that you wrote. You can see here it shows you where your Streamlit app is running; in this case, it's on localhost port 8501. If it didn't open automatically, you can just copy that address into your web browser. And you can see here that our Streamlit application is now running.

Now, anytime anything changes in our application, whether state changes or we refresh the page, this entire Python file will be rerun. It's a weird way of thinking about application development, but that's just the way Streamlit works. So if I come here and add something like st.text("hello") and I refresh, you'll see that "hello" now appears on the screen.

Okay. So again, anytime we refresh the page or any state changes, the whole Python script will be rerun, and it will update whatever is on our website. All right, let's close that for now and start writing some more code. You'll learn more about how Streamlit works as we go, but don't worry too much about the details; this is a pretty simple UI that we're making.

Next, we're going to load in our OpenAI API key. I'm going to say OPENAI_API_KEY = os.getenv("OPENAI_API_KEY"). So we've just loaded that in. The reason we're doing that is we're going to pass this key to the OpenAI module in just a second to initialize our LLM. This works a little differently than the AI agent we built previously, because this isn't an AI agent; we're just using the LLM to analyze a resume.

Now that we've loaded in this key, we're also going to create some elements on the screen for things like uploading our resume. So we're going to say uploaded_file = st.file_uploader. Here we're going to put some text, "Upload your resume (PDF or TXT)", and then we can specify the file types we want to allow, so type=["pdf", "txt"]. The way this works whenever you have an input in Streamlit is that you write the input field you want, in this case a file uploader, and then you have some variable that stores whatever that input is. So if we go back to our site and refresh, you can see that we now have this file uploader. If we press browse files and select one, that file gets stored inside this variable called uploaded_file, and we can access it later on. It's very simple; that's why I like using Streamlit.

Beneath this, we're going to say job_role = st.text_input. Rather than a file, we're just getting some text, and the label is "Enter the job role you are targeting (optional)". This way the user can specify, hey, I'm going for software engineer, or mechanical engineer, whatever it is, so that our LLM has better context when it's giving responses.

Now we want to make sure the user has actually uploaded a file before we allow them to start analyzing their resume. So I'm going to make a button here and assign it to a variable called analyze. This is going to be st.button, with the label "Analyze Resume". The way buttons work here is that when you press the button, this variable becomes True. So we can check if this variable is True, which tells us whether the button was pressed. If I write if analyze:, then as soon as the button is pressed, this block will run, and we can do something like st.write("Button pressed").
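To make this button-and-rerun behavior concrete, here is a tiny plain-Python analogy of my own. This is not Streamlit's actual API or implementation (real Streamlit reruns your whole script and manages widget state internally, and a real st.button only returns True on the run triggered by the click); it is just a simplified sketch of the idea that the script is re-executed from the top while values persist between runs.

```python
# Toy analogy for Streamlit's execution model (not real Streamlit code):
# the script reruns top to bottom on every interaction, while widget
# values live in a state dictionary that survives between reruns.

state = {"button_clicked": False}  # persists between "reruns"

def run_script(state):
    """One full top-to-bottom 'rerun' of the script."""
    analyze = state["button_clicked"]  # stand-in for st.button(...)
    if analyze:
        return "Button pressed"
    return "Waiting"

# First run: the button has not been pressed yet.
first = run_script(state)

# The user clicks the button; the framework records that and reruns.
state["button_clicked"] = True
second = run_script(state)

print(first, second)
```

The point of the sketch is just that the same code runs twice and behaves differently only because the stored state changed.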

Let me show you how this works so it makes sense. If I go here and refresh, you see we have this button, and if I press it, it says button pressed. So essentially what happens is that any time one of these inputs gets a new value, the entire Python script is rerun from the very beginning, but the values are stored in the state of the page. As soon as I upload a file, the whole script reruns, but the file I uploaded gets stored in this variable. Same thing with the job role: as soon as I enter anything, the entire script reruns, but job_role is updated on the next rerun to hold whatever I typed in. When I press the button, same thing: the entire script reruns, but the state of the button, this analyze variable, becomes True. So when I check this if statement, it prints out button pressed. If you think about it, when I first run this script, analyze is False because I haven't yet pressed the button. When I press the button, the whole script runs again, and this time if analyze is True, so we reach this next part of the script. Again, it's a weird way of doing things if you haven't seen it before, but effectively you rerun the script every time, the state gets stored in these variables, and you can check things like: was this button pressed? If it was, you can do something.

So what we're going to do is say if analyze and uploaded_file:. This just means there's something in uploaded_file, so it's not None. Then we're going to try to load in our file content. We'll open a try block and say file_content = and then load it in. In order to load this in, we're going to need a function, so I'm going to make one called extract_text_from_file, which takes in the uploaded_file. The idea here is that our LLM can't actually accept a file, at least not the way we're passing input to it, so we want to take all of the text out of our PDF, read it, and then pass it to our LLM. We're going to say if uploaded_file.type == "application/pdf". So we're going to need two equal signs here.

That is, == checks whether the two values are equal. If it is a PDF, we're going to return the result of another function that extracts the text from our PDF. So we're going to say def extract_text_from_pdf, taking in the pdf_file. Inside, we say pdf_reader = PyPDF2.PdfReader(pdf_file). We then set text to an empty string, and we say for page in pdf_reader.pages: text += page.extract_text() + "\n", and then we return text.

So how does this work? Essentially, we have this function that takes in some PDF file, which we'll pass in a second. We load in that PDF file using the PyPDF2 module we installed. Don't ask me exactly how it works internally; we're just using the module, and it's able to load the PDF file for us. We then make this empty text string and add all the text from the pages: for page in pdf_reader.pages loops through all of the pages, grabs the text on each one, and appends it to the text variable. We then return text.

Now, from the extract_text_from_file function, we call extract_text_from_pdf if this is a PDF, and we pass it io.BytesIO(uploaded_file.read()). I know this seems a bit weird. Essentially, we're taking this uploaded file, which is a PDF, reading it in, and converting the read bytes into a BytesIO object, which can then be loaded by the PdfReader. Then we extract all the text from it and use that here. For the other case, we're going to say return uploaded_file.read().decode("utf-8"). So what we're doing is checking: is this a PDF? If it's not, it must be a text file, so we can just read it and decode it as UTF-8, the standard text encoding. And now, when we use extract_text_from_file, we get the text from our uploaded file whether it's a text file or a PDF. If it's a PDF, we go through our PdfReader; if it's a text file, we can just

directly read it in. And now we have the file content. Next, we need to make sure there is actually some content in the file. So we're going to say if not file_content.strip():. strip() removes all the leading and trailing whitespace, so if the file only contains empty characters, this check catches it. In that case we'll call st.error with a message like "File does not have any content...", and then we're going to call st.stop(). What this does is, if there's no content inside the file, we show an error message to the user and stop the program right here.

Now, if that's not the case, so if we don't stop the program, we want to create a prompt that contains the text from our file, pass it to an LLM, this AI model, and have it give us critiques and recommendations for our resume. Let me copy in the prompt and then explain how it works. We say prompt = "Please analyze this resume and provide constructive feedback. Focus on the following aspects: content, skills, experience, and specific improvements for..." and then inside of here we embed job_role if the job role exists; otherwise we say "general job applications". We then include the resume content, which is our file_content, and ask it to "please provide your analysis in a clear, structured format", and so on. We're using an f-string here, which allows us to embed variables inside of the string.
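As a small, self-contained sketch of that f-string pattern, with a shortened placeholder prompt of my own rather than the full one from the video, and hypothetical stand-in values for the two variables:

```python
# Hypothetical stand-in values; in the app these come from the
# Streamlit widgets and the extract_text_from_file helper.
file_content = "Jane Doe - 3 years of Python experience..."
job_role = ""  # empty string: the user left the optional field blank

# A conditional expression inside the braces picks the job role if one
# was entered, otherwise falls back to a generic phrase.
prompt = f"""Please analyze this resume and provide constructive feedback.
Focus on: content, skills, experience, and specific improvements
for {job_role if job_role else 'general job applications'}.

Resume content:
{file_content}

Please provide your analysis in a clear, structured format."""

print(prompt)
```

With job_role left empty as above, the first braces render "general job applications"; if the user had typed "software engineer", that text would be embedded instead.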

You do that using these curly braces. You can use any prompt you want here; copy this one directly from the code link in the description, or write your own. I'm just showing you a basic prompt, and I figure there's not a lot of value in me typing it out line by line. Now that we have that, we're going to say client = OpenAI(api_key=OPENAI_API_KEY). We're creating a client here so we can access the OpenAI LLMs. Then we can say response = client.chat.completions.create. Again, this is different from the AI agent we built previously, because the agent had access to tools, so it was a little more complex. Here, we can just directly invoke the LLM through the openai module.

So we need to specify the model we want to use when we generate this response. I'm going to go with gpt-4o-mini, but you can use any model name you want that's available. We're then going to say messages= and pass a list. Inside, we're going to have two messages. The first has a role of "system". A system message is something the model doesn't reply to directly; it gives context to the model. For the system message, I'm going to paste in the content: "You are an expert resume reviewer with years of experience in HR and recruitment."

So it knows what it's meant to be doing and how it should respond. I'm then going to have another message with a role of "user", and its content is simply going to be the prompt. So we tell it, hey, act like a resume reviewer, and then we give it the message we created, which contains the file content and, if the user provided it, the job role they're applying for. We can also specify a few parameters here, like temperature=0.7, and max_tokens=1000, for example, so that it never ends up costing us too much money.

All right, so this is going to generate a response for us from the OpenAI LLM, from gpt-4o-mini or whatever model we use, and then we need to print out that response. Beneath this, we're going to say st.markdown with three pound signs, so it renders like a level-three heading: "### Analysis Results". Then we're going to say st.markdown(response.choices[0].message.content).
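Here is a hedged sketch of how those pieces fit together. The prompt is a shortened stand-in, and I've added a DRY_RUN flag of my own (with a lazy openai import) so the message structure can be inspected and run without an API key; flip it to False, with the openai package installed and a real key, to actually make the request.

```python
import os

prompt = "Please analyze this resume..."  # stand-in for the full f-string prompt

# The two-message structure: a system message that sets the model's role,
# and a user message carrying the actual request.
messages = [
    {
        "role": "system",
        "content": "You are an expert resume reviewer with years of "
                   "experience in HR and recruitment.",
    },
    {"role": "user", "content": prompt},
]

DRY_RUN = True  # set to False (with OPENAI_API_KEY configured) to call the API

if not DRY_RUN:
    # Lazy import so the sketch runs even without `uv add openai`.
    from openai import OpenAI

    client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        temperature=0.7,   # some creativity in the feedback
        max_tokens=1000,   # cap the response length (and the cost)
    )
    # choices is a list; we asked for one completion, so take the first.
    print(response.choices[0].message.content)
```

The design choice worth noting is that all the context (who the model is, what it should analyze) travels in the messages list, so swapping the model name is the only change needed to try a different LLM.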

The reason we're doing this is that you can potentially get multiple choices coming back from this call. In this case, we know we're just going to get one, so we say response.choices, which is where the results are stored, take the first one, get the message from it, and then get its content. We render this as markdown because we'll get a markdown response from the LLM, so it can show us all the nice formatting: the lines, the bold text, the highlights, and so on.

Now, the last thing we need to do: you might remember that up here we had this try statement, because there could be an issue with what happens inside it. So down here, we're going to add an except block: except Exception as e. If there is an exception, we're going to call st.error with an f-string saying "An error occurred:" followed by the error itself, which will be str(e). That's it. We're doing this so that if there's an error, we tell the user what it is and we don't crash the program.

Let me zoom out a bit. Let's go over what we just did, and then we'll run the code and make sure it works. All right, so we start with

all of our imports here. We load in our environment variable file. We set some configuration for the page, like the title of the tab. We write the title and the markdown onto the page, so we say, hey, this is the title, and here is some basic text the user should know. We then load in our OpenAI API key. We have two inputs, the file uploader and the text input, and we have a button. We have two functions just for extracting the text from a PDF or a text file. Then we say: if the button was pressed and you did upload a file, we try to get all of the content from that file. If the file doesn't have any content, we show an error message. If it does, we build this prompt, which tells the LLM what we want it to do, and we pass it our file content. We then create an OpenAI client so we can interact with the LLM, generate a response by specifying the model and our messages, and print out that response.

Okay, so let's go back and make sure this is still running. I believe that it is. We can go

here and refresh. Okay, we get our updated user interface. Let's grab a resume now. Here's one, actually, from one of the students in my mentorship program. By the way, if you're interested in potentially joining that, I have a waitlist open right now. I'll leave a link in the description: if you want guidance directly from me and expert software engineers, you can join the Dev Launch mentorship program. Again, sign up for the waitlist below.

Now, this person is specifically targeting web development roles, so let's analyze their resume by pressing the button and see what it tells us. Okay, we get the results here, and we can scroll through them. Because this is from one of my current students, I don't want to show all of the content, in the name of privacy, but the point is it gives us the full analysis, and we could run this as many times as we want, for as many resumes as we want. I think this is a pretty interesting AI project.

So there you go, that wraps up project number two. Of course, you could make it better, improve it, or change it however you want, but hopefully this showed you how to use Streamlit and how to interact with an LLM. Now let's move on to the final project, which is our image classifier.

All right, so now we're moving on to project number three, our image classifier. In order to do this project, I'm again going to create a new folder, which I've done here, project3. I'm going to cd into that project3 directory and start setting it up with uv. Again, if you're unfamiliar with uv, go back to the first project and watch the beginning; I explain what uv is, how to use it, and point to some other resources in case this video isn't getting you there. Anyway, what we're going to do is type uv init . to initialize uv inside the project3 folder. We're then going to say uv add and add the dependencies for this project. Like before, we're going to use streamlit.

We're going to use tensorflow, and we're going to use opencv-python. Don't worry if you're unfamiliar with OpenCV or TensorFlow. TensorFlow is going to let us bring in a pre-trained machine learning model that can look at an image and classify it, and OpenCV is going to let us do some manipulations on the image we load so that we can pass it to TensorFlow. So we're going to install all of those dependencies. It might take a second, because these are fairly large. And there we go, they've been installed.

Okay, so now we're going to go to project3, open up main.py, clear it out, and start writing some code. As always, let's start with our imports. We're going to say import cv2, which is OpenCV effectively. We're going to say import numpy as np; NumPy is installed by default when you install TensorFlow, so we have it. We're going to import streamlit as st. Then we're going to write from tensorflow.keras.applications.mobilenet_v2 import, and this one is a little long, so I'm going to put the names in parentheses: MobileNetV2, preprocess_input, and decode_predictions. Then we're going to say from PIL import Image; PIL (Pillow) is an image library in Python that's already installed as a dependency of Streamlit. Let me explain what's about to happen here, because there's a lot of stuff we're using. TensorFlow, and Keras specifically, contains a lot of pre-trained machine learning models. Now

typically, a machine learning model needs to be trained: you need to pass it a bunch of data so it can learn what makes a dog a dog, what makes a cat a cat, and so on. In this case, we're not going to train a machine learning model from scratch; we're just going to use a model that already exists. A popular model that comes with TensorFlow is called MobileNetV2. This is a very lightweight model, which means you can run it even on something like a laptop, as opposed to a machine with a big GPU. It's already trained; we can just download it to our computer, which will happen the first time we run it, and then simply pass it images in the correct format, and it can classify those images for us. Obviously, if you wanted a more advanced project, you could train a machine learning model yourself, but I don't want to do that in this video, because it would take a very long time and get into much more complex material that isn't meant for a beginner tutorial. So this is going to show you how to use a pre-trained model, how to present it with Streamlit, and how to do some basic image manipulation.

We're going to write a function here called load_model. All this does is load in the MobileNet model for us. We're going to say model = MobileNetV2(weights="imagenet"), so we need to specify the weights we want to use for this model, which are the ImageNet weights. Again, you don't need to worry too much about what's going on here, because we're using a pre-trained model, and we just return the model.

Now, what I will say is that MobileNetV2 is what's known as a convolutional neural network. This means it employs a specific technique and architecture to be able to look at images. For this particular architecture, you have these weights, which are the learned values that make the model work the way it does. When this model was trained, all of these weights, which are essentially unique numbers, were determined so that a given input produces a particular output.
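One detail worth seeing concretely is what "the correct format" means for this model: among other steps (such as resizing to 224x224, which cv2.resize can do), mobilenet_v2's preprocess_input rescales pixel values from the usual 0-255 range into the [-1, 1] range the network was trained on, using x / 127.5 - 1. Here is a pure-Python sketch of just that rescaling step; the real preprocess_input applies the same formula across a whole NumPy array.

```python
def rescale_pixels(pixels):
    """Rescale 0-255 pixel values into [-1, 1], the input range
    MobileNetV2 expects; this mirrors the x / 127.5 - 1 formula that
    mobilenet_v2.preprocess_input applies to a NumPy image array."""
    return [x / 127.5 - 1.0 for x in pixels]

# Black (0) maps to -1.0, mid-gray (127.5) to 0.0, white (255) to 1.0.
print(rescale_pixels([0, 127.5, 255]))  # -> [-1.0, 0.0, 1.0]
```

Feeding the model raw 0-255 values instead would silently produce poor predictions, which is why the preprocessing step matters even though the model itself is pre-trained.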

2025-05-13 15:44
