Intro to AI | 6th Cohort

Thank you, and welcome everyone. Happy to see the new cohort of VCA. Some familiar faces already, and with some of you I have already talked in one-on-one sessions; you are also welcome to schedule one if you haven't, I really enjoy having these conversations with the residents.

So let's start with a few words about myself. I come from a background that combines AI and photography, and I have been in those two fields for quite some time. I had an analytical path: I started by studying mathematics, then did a master's in mathematics, and afterwards a PhD in computer vision and artificial intelligence at the Polish-Japanese Academy of Information Technology. That was my very analytical and technical side. At the same time I also pursued my passion for photography, and during that whole time I was working as a part-time event and documentary photographer. There have been a lot of different experiences that led me to the discovery of this field that for me is AI art, and, speaking more broadly, to joining technology and art: different tools, different methods, different computational means that allow me to express myself creatively. This all led me to the practice that I'm doing today.

So today I will do some intros to the general understanding of AI: what the common misconceptions about AI are, and what the most popular types of AI are. Some of the words I'm sure you have heard of, but maybe they are just buzzwords for you. Don't be afraid if some of the material sounds technical; I will try to explain it as simply as possible, and of course if anything is unclear you're free to ask questions. My goal essentially is to shed some light onto what lies behind all of those buzzwords that we currently hear, generative AI and LLMs. The second goal is to maybe inspire some of you, especially the ones who are more based in a traditional art practice, to go on and pursue some more experiments with this technology, and maybe find some tools that could be an interesting path for you to explore.

So first of all, what is artificial intelligence and, more importantly, what is it not? We hear this term mentioned so much, but one thing to realize is that what we call AI is not really as scary as it seems. It's definitely not something as mysterious as many of the media outlets like to make it. Artificial intelligence is essentially what we nowadays call all the methods that use neural networks. Neural networks don't sound as scary as artificial intelligence, but they also don't sound as buzzworthy, so maybe this is why newsletters and articles don't like to use the simple terms; they like to scare us quite a lot. Of course, that is not to diminish what these methods from deep learning and machine learning are able to achieve, especially with the breakthroughs of the last two years; the results are absolutely astonishing and impressive. The technology is evolving at such a quick pace that it is important to take a moment and think about the tools we have in our hands, and about how we can use them.

This is true for all AI tools that are out there, whether they work with vision, text or sound: AI tools will only ever be as good as their data.
What artificial intelligence does, under all those complex mathematical formulas, is actually just learn from data. There is even a way of describing AI models as very smart compression machines. When you think about it, DALL-E or Midjourney is a compression mechanism for roughly all the information that was available on the internet up to the point when the authors trained the models, compressed down to two or three gigabytes of storage. That's definitely impressive, but it also means that AI models have no knowledge outside of their data. They are able to create original connections within the data they had; this sort of creativity is not capital-C creativity, but it is interesting how it combines different concepts and how it allows us to explore different themes.

AI models will only be as good as their training data. So whenever you engage with an AI model, I would usually advise thinking about what data this model uses, because that question will essentially tell you what is possible to do with the model, and maybe even give you some creative directions. If you're training your own model, the first step I would always suggest is: think about the data. Think about the data you can gather, the data you have, or maybe the data you don't have. And when you think about the data, think about the problems in it: is this data copyright-free, is it clean, maybe it's messy, maybe it has some noise, maybe it has some biases. In real life, data is always messy; there are very few exceptions where the data is super clean. It is actually quite often in artistic projects, where artists have the time and the capacity to create those smaller data sets, that the data ends up much more curated and much cleaner than it usually is in research or in industry.

Another very important thing to realize when working with AI is that the rules are the magic sauce that emerges while the model is learning. The data I keep mentioning serves as learning material, and the model then aims to figure out the rules on its own. This means we don't simply tell the AI "this is how the world looks, a human has two hands and one head"; we just allow the AI to look at the images and figure it out by itself. This is also why, for example, those mistakes happen with people who have six fingers or three fingers or some other number: we never tell the AI that a human can only have five fingers on one hand. Looking at the images, there are very often different kinds of occlusions; you can see handshakes where fingers from different people collide, and it is from all these examples that the AI tries to build a statistical model of the world, and to say: okay, most commonly people have five fingers, but if there's a handshake maybe I might see seven fingers, or maybe someone is holding a flower and I can only see four. There are many different cases that neural networks see in the data, and those fuzzy rules emerge on their own while the model learns to generate its vision of the world, which is in no way an exact replica of the training data it has seen, but rather a compressed way of seeing the whole world of data we've shown it during training.
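Relating to the earlier point about inspecting your data before training, here is a minimal sketch of the kind of sanity checks one can run on a folder of images. The folder name and file pattern are placeholders, and it assumes the Pillow library is installed.

```python
# Minimal sanity checks on an image folder before training (hypothetical paths).
import hashlib
from pathlib import Path
from PIL import Image

folder = Path("my_dataset")   # placeholder: your folder of training images
hashes = {}                   # file hash -> first file seen with that content
sizes = {}                    # (width, height) -> how many images have it

for path in sorted(folder.glob("*.jpg")):
    digest = hashlib.md5(path.read_bytes()).hexdigest()
    if digest in hashes:
        print(f"exact duplicate: {path.name} == {hashes[digest].name}")
    else:
        hashes[digest] = path
    with Image.open(path) as img:
        sizes[img.size] = sizes.get(img.size, 0) + 1

print(f"{len(hashes)} unique images")
print("resolutions found:", sizes)
```

Simple checks like this will not catch conceptual problems such as mixed themes or biases, but they do catch duplicates and inconsistent resolutions early.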
Another important distinction about AI methods, and also a very common misconception that I hear now that LLMs have become so popular and DALL-E, Midjourney and Stable Diffusion are the tools most spoken about: it is a very common mistake to think that all AI uses text. That is of course not true. There are different kinds of AI tools, and sometimes they are more different from each other than they are similar. When we are talking about using AI tools for visual signal generation, such as image generation, there are two main categories of tools we can speak about. The ones on the right are the CLIP-based tools, the ones that use text, and the ones on the left are the GAN-based methods, which don't use language in any way. As you can see, there are multiple methods and multiple models that employ different ways of building an architecture to do that: there are so many different ways to build a model that understands text, and just as many ways to build AI that doesn't use text at all but just looks at the visual signal and tries to replicate it.

So now we will talk in a little more detail about those two main categories, and we'll start with the most popular one. All the text-to-image tools are actually building upon a very powerful language-and-image understanding model called CLIP. What this model is able to do might sound very simple to us, as it is just connecting text and images. When we have an image of a cat and we have "cat" written in three letters of the English language, it is quite natural for us to understand that the two relate to the same concept. But up until CLIP there were usually very different AI models that worked with text and very different models that worked with images. So it was a huge breakthrough when OpenAI was able to create this huge and powerful model that could suddenly, very successfully, connect text and images, in such a way that it became possible to build models that have those kinds of connections and can go from text to image, or from image to text. The CLIP method had some very interesting technical details that the authors introduced, so that was a great breakthrough, but a huge part of its power also comes from the fact that they were able to train an absolutely immense network, with a huge number of parameters, on a data set of almost half a billion pairs of images and text. So it was again just a case of showing data, showing examples of words and images, and telling the AI: look, this is a word, and this is an image connected to it. On a conceptual level it's a simple method; the technicalities are quite difficult to grasp, but they are not really that important for us here.
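To make the idea concrete, here is a minimal sketch of text-image matching using the openly released CLIP weights through the Hugging Face transformers library; the image file and captions are placeholders.

```python
# Score how well each caption matches an image with CLIP (sketch only).
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("cat.jpg")   # placeholder image
captions = ["a photo of a cat", "a photo of a dog", "a neon street at night"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# CLIP embeds the image and every caption into the same space;
# a higher probability means the caption fits the image better.
probs = outputs.logits_per_image.softmax(dim=1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{p:.2f}  {caption}")
```

This shared text-image space is exactly the connection that the text-to-image tools build on.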
Afterwards it was natural to go to the next step of creating images from text. Creating text from images was a task that was somewhat solved already and was not as interesting, but creating new images from text was a fascinating task, which OpenAI started solving first with DALL-E and then with the more popular version, DALL-E 2. Another invention that they applied to this method, and which is also used in all the other tools that you see, is called diffusion. The way diffusion works is that it tries to reconstruct an image from pure noise. This also means that even when you use initial images, they are treated as noise, and the model then goes through a step-by-step process, applying more detail and trying to get back to full quality. The task of reconstructing an image from pure noise is of course ill-defined: when we look at pure noise, there are many possibilities for what the image could have looked like. This is why the process never really has one correct answer, and why the space of all the reconstructed images is such a huge one.

When you're using a tool such as DALL-E 2, or any other text-to-image tool, there is actually no training of the model happening. When you go to those websites you are not training the model; you are engaging with a model that somebody else has trained. Even if you're using an initial image, or a few initial images, changing the parameters and waiting a long time for the image to generate, what is happening behind the scenes is just inference. There is no model training, and the weights are frozen. The authors of the tool trained their AI on a huge collection of images and texts, and now they allow others to engage with that model. This also means it is not really possible to add any new information to the model that was not present there before. This is why there is a knowledge cutoff: if you're trying to generate very recent events, those models have no knowledge of them, as they've been trained on data sets collected up until 2020 or 2021, or whatever the cutoff is.

Midjourney is actually a proprietary model, so there is not as much shared about its details. DALL-E is not an open-source model either, but at least the authors published a paper that explains in quite a lot of detail how they built the tool. This is actually why Stable Diffusion could be created: its authors built upon the information that was made open source by OpenAI when they published CLIP and DALL-E. Judging from the outputs that Midjourney is creating, there are a few facts about its training that can be deduced. First of all, it is also a diffusion model, as it works in a very similar way to DALL-E and Stable Diffusion: it starts from noise and then, iteratively, each step has less and less noise, for as many steps as you let it run. When you allow it to run for too long, more steps don't always mean better quality, but it does mean the model is trying to add something new to the image. Midjourney also has a very particular data set, I would guess, as the outputs have quite nice textures and clearly use a lot more artistically focused imagery. My guess is they used a lot of photographs and a lot of paintings, and they really curated their data set so as not to have too many generic-looking photographs taken from the internet, keeping a more artistic quality. Of course that doesn't mean the outputs are always of higher quality; every tool has its shortcomings. With some of the newer Midjourney models, at least subjectively for me, the textures seem even too smooth and the outputs look too polished, too airbrushed. But many of these things are quite subjective, and as an artist you are of course encouraged to try all the tools and see how you can make them work to your liking. Similarly, there is no training happening there: the optimization happens directly on the already-trained model.
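This inference-only setup is easy to see with the openly released Stable Diffusion weights. The following is a minimal sketch using the diffusers library; the model name, prompt and settings are chosen only as examples.

```python
# Text-to-image inference with frozen weights: nothing is trained here.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Each call runs the diffusion denoising loop: start from random noise
# and remove it step by step until an image emerges.
image = pipe(
    "a close-up photograph of a flower, restricted colour palette",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("flower.png")
```

Changing prompts, seeds or the number of steps only changes how the frozen model is sampled; the weights stay exactly as the authors published them.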
Stable Diffusion is the most open tool: the weights are made open source and the code is made open source, so it has been very broadly experimented with in the artistic and creative-coding community, and many people have been able to train their own versions of Stable Diffusion. I'm sure you're familiar with the many new variants, the many new models, that became popular within the Stable Diffusion ecosystem. That's amazing, but with the initial model you still need to be mindful of what it was trained on, as the most popular data set that Stable Diffusion used in the beginning contained a lot of copyrighted material. I'm not sure whether the later versions of the data set still contain, for example, Getty Images, or whether they have removed that, but I would guess that with every new iteration of the model it will be curated better and better.

Moving on to the next subtype of AI models: GANs. This is a method that was introduced well before diffusion and text-to-image; generative adversarial networks were actually invented in 2014. It is a way of showing neural networks examples of a data signal and then aiming to replicate it. There is a nice comic, by notradic, that explains intuitively how GANs work. In every GAN there are two networks. One is called the generator, colloquially the forger or the artist: the one that creates the images. The other is called the discriminator, or the critic, or the policeman; its role is to judge the images that were created and to say whether they are good or bad. As the training progresses, the generator creates some random images, some better, some worse, but it gets feedback from the discriminator, which says: well, this one is terrible, but this one is maybe better, and actually this part in the left corner you got quite right. It gets this kind of signal back from the discriminator, which essentially allows the generator to learn, and with every iteration to create something more and more similar to the original data. Of course, as the generator gets better, the policeman's job also becomes more difficult: at some point it can no longer distinguish the real examples from the generated ones, so it also has to improve, looking more into the details and into the logic behind the image. This is why, as the training progresses, we are able to capture the initial data distribution quite well.

GANs are usually trained on smaller data sets, which is why they are more artist-friendly. Some variants of StyleGAN require as little as a few hundred or a few thousand images; as a reminder, text-to-image tools usually require hundreds of millions of images to be trained from scratch. So that is a huge difference. There are of course ways to fine-tune models with smaller amounts of data, but essentially there is this huge gap in data requirements between the two kinds of AI models.
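To show the forger-and-critic dynamic in code, here is a toy GAN in PyTorch that learns to imitate a simple two-dimensional distribution. It is only a sketch of the idea, not the StyleGAN code used for images, and all the sizes and learning rates are arbitrary.

```python
# Toy GAN: generator ("forger") vs discriminator ("critic") on 2-D points.
import torch
import torch.nn as nn

def real_data(n):                       # "real" samples, clustered around (3, 3)
    return torch.randn(n, 2) * 0.5 + 3.0

def noise(n):                           # random input the generator starts from
    return torch.randn(n, 8)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))   # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Train the critic: real samples should score "real", fakes should not.
    real, fake = real_data(64), G(noise(64)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the forger: it only gets the critic's feedback and tries
    #    to make its fakes be scored as real.
    fake = G(noise(64))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(noise(5)))   # after training, generated samples should land near (3, 3)
```

Image GANs such as StyleGAN replace these tiny networks with large convolutional ones, but the back-and-forth between the two players is the same.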
When you have tried your first experiments with AI, maybe running some of these tools, the next question that often comes to mind is: well, maybe I should train my own AI model, especially with GANs, since they require less data. When proceeding to train your own model, I would suggest starting with questions: what is your goal, what is the theme in your data, and what are the shortcomings, the possible negative aspects of your data, that you might want to avoid. Creating the data, going back to the beginning of this talk, is the most important part.

First of all, when you want to train a new model on your own images that are very different from everything the existing models have seen, I would usually suggest starting with a bigger data set of, let's say, one thousand to three thousand images, which might come from your archives, from your artistic practice through the years. Once you have this first model, what is actually pretty cool is that for the next models you don't need that much data, because you can do what is called fine-tuning: take an older model and train it on a much smaller data set, to adapt the previous model to the new data. For that you usually need something like 100 to 300 images. Of course, those images need to be well curated from a visual perspective. For example, here we have a Google Images search result for "Africa", and when we look at it, to us it is quite obvious that there is one theme within this collection. But for AI tools it is really a mix of different themes, because the visual methods usually work by establishing patterns, establishing color palettes first, establishing textures. If we were to train an AI model on this "Africa" data, we would get an absolute mix, something that doesn't look like anything at all, because the model would try to mix maps with photographs, portraits, landscapes and everything in between.

I brought two examples of data sets that were very well curated, coming from two of my collaborations, and that have very distinctive visual characteristics. The first one comes from the Flora Solaris series. As you can see, the color palette was very restricted, and the images focus on close-ups of the flowers; even if there is a bee in the upper right corner, the AI model would just assume it is some part of the organic growth and not really get confused by it. The second example was street photography focused on neons. Here we actually mixed two different viewpoints, as some of the images contain close-ups while some are taken further back from street level, but we had only those two perspectives, not more, and we focused only on images taken during the night, with bright, vivid colors. That made it a task that was quite easy for the AI to learn, and it very quickly grasped how to recreate the most characteristic patterns.

When you want to train your first model and you don't really have anything yet, what is actually very useful is to start with a pre-trained model. NVIDIA, which is the author of the StyleGAN family of GANs, shares quite a lot of different pre-trained models, trained on different subjects, including paintings and cell images. If your data set has at least some similarity to any of the pre-trained models, it works very well to start from that kind of model. If you have, let's say, your own drawings of portraits of people, you could start with a model that already knows how to create faces, and it would quite quickly converge to your data. So this is always a good idea.
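In PyTorch terms, the fine-tuning idea boils down to loading an existing checkpoint instead of starting from random weights. The sketch below is purely illustrative, with a placeholder network and a hypothetical checkpoint file; the actual StyleGAN repositories expose the same idea as a resume-from-checkpoint option on their training scripts.

```python
# Fine-tuning sketch: start from weights trained on similar data, then adapt.
import torch
import torch.nn as nn

generator = nn.Sequential(               # stand-in for a real generator network
    nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 3 * 64 * 64)
)

# Instead of random initial weights, load a checkpoint trained on related data
# (for example a faces model, if your own images are portrait drawings).
state = torch.load("pretrained_generator.pt")   # hypothetical checkpoint file
generator.load_state_dict(state)

# Then continue the usual training loop on your own, much smaller data set,
# typically with a lower learning rate and far fewer steps.
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)
```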
Actually, I also wanted to show a bit more about how GAN training works, on the particular example of a project that I did, so this is a very short video that I will now run. Every GAN training starts with photography, so you saw some examples of the photographs that I used for the project. While a GAN is training it is actually quite boring to watch, just a bunch of numbers. What is also interesting to do is to kind of force the AI to forget, by showing it different data; you can see here how one visual data set transforms into a very different, more abstract one. This is essentially how GAN training works when you are doing fine-tuning. Then, when you have a trained model, you are able to prompt it for different images, and since it is not using any text, you are just moving in different directions in the latent space. One way of understanding that would be that in 3D space we can move left or right, or forward and backward, and every step gives us a very different image. This is essentially the way of working with GAN training and creating images and animations, something I do very often; so yes, this is for anyone interested in working with the animation medium.

Now I would also like to quickly mention something not many people realize: AI is not just images and text, there are also tools for generative sound. There are some tools that don't require coding that you can try; the one I like to play with is Beatoven. It just lets you choose different moods and different kinds of instruments. It is not very advanced, but if any of you are interested in sound generation, there are also some more advanced tools on a technical level, ones that allow you to train, for example, your own neural network on your own library of sounds. It is not a popular field, but I would be very happy to talk to anyone who is curious to explore it. What is also really nice about sound is that you can combine the two: you can do audiovisual work, you can do audio-reactive visuals. For example, you can control the pace and motion with the pulse of the music: you input an audio track and a trained GAN model, and you get a video that reacts to the sound. I'm not sure if the sound is playing, but you can look at this and see a sample result of how it looks. Essentially it allows for a lot of interesting experimentation, creating visuals with GAN models and playing with sound, and it is definitely very cool to check out. The tool is called Lucid Sonic Dreams, if anyone is interested, for when you have a GAN model and a soundtrack and you want to combine the two.

Finally, some tools for super resolution. Upscaling images is actually a separate field, even a separate AI domain, that very often has its own AI models. The most popular ones currently are ESRGAN and SwinIR, and other tools such as Topaz Gigapixel or LetsEnhance.io essentially wrap fine-tuned versions of those models that work quite well on natural photographs, on paintings, and even on some generated images. Now there are also diffusion-based methods that work quite well, and this is what you see in tools like Midjourney or Stable Diffusion when they ask you about upscaling: behind the curtains it is usually a different model being applied to enhance the image. One thing I've noticed, which is actually quite curious, is that GAN outputs don't always look good with those upscaling methods. One trick is to "diffusify" the GAN output: use it as an initial image and only allow, say, one percent of diffusion to happen on top, so the diffusion changes the image only very slightly. To the human eye the difference is barely visible, as the image only changes at the texture level, but such images are a great input to tools like ESRGAN, SwinIR and latent super resolution.
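The "diffusify" trick can be sketched with the diffusers image-to-image pipeline; the model name, file names and the exact strength value below are just examples of the idea, not fixed requirements.

```python
# Pass a GAN output through a tiny amount of diffusion before upscaling it.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

gan_frame = Image.open("gan_output.png").convert("RGB")   # hypothetical GAN output

# A strength close to zero keeps the picture almost unchanged and only
# re-touches the texture level, which the upscalers tend to handle better.
result = pipe(
    prompt="high quality photograph",
    image=gan_frame,
    strength=0.05,
    guidance_scale=5.0,
).images[0]
result.save("gan_output_diffusified.png")
```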
And finally, some links. For those of you who are interested in GAN training, I would recommend Runway ML if you don't have any coding experience, don't want to look at code and just want to do your first experiment. It is a great platform: you drop in a bunch of images, select a few settings with sliders, and have your first GAN model trained. If you are a more advanced user, then the StyleGAN2 or StyleGAN3 repository in PyTorch is probably the easiest way to run your own training, but for that you would of course need either a cloud machine or your own more powerful computer. StyleGAN is not as resource-consuming as diffusion, so if you are able to run diffusion locally, you would absolutely be able to run StyleGAN as well. Playform is an older tool, and there is also the old-school Deep Dream Generator. Some of those tools are not used as frequently anymore, but I have found that while many people like to go and explore the newest methods, the newest tools are not always the best, and sometimes you can get really interesting results using older tools and really exploring a particular theme or topic, not with the goal of finding a perfect replica of your data set, but more as a way of exploring common patterns or features within your data. That is of course a very interesting question and a theme on its own.

Some guidelines, then, if you want to train your own model without using Runway ML or a model somebody else has trained, and you want to gain this technical knowledge. What you essentially need is some basic knowledge of the command line: usually you just need to run a few lines to train those models, to navigate Git and GitHub, and to install libraries. Most of the stuff in AI is written in Python, and fortunately Python is probably the friendliest coding language out there; it reads quite naturally and is probably the easiest of them. So if you wanted to learn coding and do some AI, I would recommend learning some basics of Python. Then there are libraries that implement all the difficult building blocks, how the optimization happens and how those architectures are implemented: things like TensorFlow, PyTorch, NumPy and OpenCV all take care of those difficult pieces behind the scenes. And if you don't have a GPU card, there are services such as Google Colaboratory, AWS SageMaker and others that let you pay per use and avoid investing in hardware.

Finally, I would like to finish with some thoughts on AI art community etiquette that I believe has somehow emerged over the years. These are some good practices I have noticed people following, and they are very much encouraged, especially nowadays as AI imagery is becoming so prominent.
I feel that if you are using a model that is not your own, something you are using out of the box, it is seen as a nice gesture to credit the model, especially if it is open source. A lot of AI research is built upon the open-source work that was published before it; this is how a lot of the quick pace and progress became possible. Also, when creating data sets, if you don't have your own data, there are so many copyright-free data sets available that you can use. Be careful to read the fine print on platforms such as Pexels and the other copyright-free image sites, because sometimes they disallow AI training; I think they introduced that just recently, as AI has become so popular. So be careful to check whether the data can be used for AI training. For example, for a recent project I used Hubble Telescope images, which are just astonishingly amazing high-resolution photographs of galaxies, and they are free to use under a free license, commercially or however you like. There are so many amazing data sets you can find.

Curation, I believe, is also quite important in the usage of AI tools. Especially when we are using text-to-image we can generate so many images, and I believe the role of curation is becoming even more important. It doesn't just mean cherry-picking the outputs that you like most; the curation can also happen on a deeper level, when you are creating the model and choosing the parameters. Maybe you are even doing a long-form AI project, in which case the curation becomes even more difficult, as you want to make sure that, say, 99% of the outputs are something you like. And again, the data: be aware of potential biases. AI works by amplifying stereotypes; there have been studies showing, for example, that if there is something like 20% bias in the training data, in the generated data it becomes 40%. It really amplifies the biases, whether they are human biases or something as simple as color or texture biases. Also, if you create a data set and include only a small amount of some kind of images, they might not even be picked up by the model.

So I guess this is all. I also included some links, and I will share the link to the slides for anyone interested. Thank you so much for listening. I am open to any questions about AI, about general practices, maybe your own experiments, and I'll just post the link now.

Yes, so we have a question from Dave about training a text-to-video model. Yes, this is also what's available on Runway, the tool I mentioned for GAN training, but it is of course much more than that; they just introduced their second generation of text-to-video models. In my very subjective view, those models are still quite limited, in the sense that if you have a narrative it is still quite basic, and it requires a lot of processing to achieve really good results. There is also one problem with most of the text-to-video models: they use diffusion, and diffusion is not really great for video. As I explained in the beginning, diffusion takes noise and then generates something high-resolution from it, and in video it would start every frame with different noise, or with the same noise but converging to different things. So what happens is that the frames get shaky. This is the effect you very often see in AI-generated videos made with diffusion: from frame to frame some details change, which is not really coherent.
Of course it is possible to make great videos and reduce this effect to a minimum, or maybe amplify it and treat it as an artistic quality, but diffusion is not really made for video at the very low level of how the architecture is built. There are some workarounds: you can make the video smoother, you can do frame interpolation to reduce these kinds of effects, and this is what they are doing at Runway. Right now they are producing state-of-the-art text-to-video, so that is what you should be looking at if you want to explore it.

Pavel, thank you so much for the feedback. There is a question about compute power: how much did you use? Yes, so essentially I train my models in the cloud, most of them using one to four Tesla T4s for training. For inference I would also use one GPU, but inference runs for something like one second per image, and when I create videos with GANs it is about three minutes on one GPU. How long it takes to train a model is actually very different for different kinds of models: a vanilla StyleGAN2-ADA would give you good results in about one day on one GPU, so it is not really bad and not really expensive, definitely not comparable to training your own diffusion model, which would take something like a month on several dozen cards. Of course there are tools to fine-tune diffusion, which is different; this is what DreamBooth, for example, is doing, but what it does is take a trained diffusion model, and it doesn't really infuse it with your text; it just tries to find, within the model that was already trained, the images that are most similar to the ones you have.

I have another question, from Espinosa: do you think artists using Midjourney is problematic, or potentially problematic? Well, it really depends. One problem I usually see is when people just bluntly use an artist's name; this one is pretty straightforward. It is similar to pastiche, where you just try to copy somebody else's talent. You don't even need Midjourney to do that: if you took oil paints and copied somebody's style, it would be a painting, but it would not really be art, and right now the difference is just that it is so much easier to make these kinds of pastiches. But if you use Midjourney creatively, if you use it as part of your practice, or maybe you use elaborate prompts and develop a style that is very different from what other people are doing, and you are not copying anyone else's style, or maybe you use such a long list of different artists that no single one of them is easily distinguishable in the output, then of course it is a different question. So yes, the answer is: it depends, and it can be both problematic and not really problematic. This is why you need to be careful as an artist when you use those tools: be aware of what kind of data is in there, what kind of prompts you are using, and whether you are not by chance copying someone. It is also possible with those tools that you are not even using another artist's name, but your work ends up looking like their work, and now you have accidentally copied somebody. So yes, just be careful.

Another question, from Emily: have you played with Mirai? With this particular one I haven't really played myself, but I like Debbie's work in general, so I am sure she has had some great results with this tool.
But there are just so many of those tools coming out all the time. It is good to check them out and see if they fit within your practice; in my case it didn't really fit my practice of training on my own images and then making more abstract animations, but if it fits your case I would definitely encourage you to check it out, it looks lovely.

Okay, thank you, Dave, for the feedback regarding Runway. Yes, there are definitely some limitations, but it is also possible to get nice results. That is true with every tool, of course, and I believe it is important to know the shortcomings of the tools so that you use them well and use them where they are strong; this is how you are able to achieve really interesting results.

What AI art projects or artists have you seen recently that you are excited about? That's a great question. To be honest, I am super excited about so many new people coming from different backgrounds into AI. It is easy to see when someone who has this understanding of art and of different mediums comes into AI art: they are able to transform their understanding of the world around them and their emotions, and really feed that into the work. I won't name any names, as I would feel quite bad if I omitted some of them, but what I actually did is a few open calls for the Nifty Gateway AI art show, and I was just amazed to see so many names I didn't know and hadn't seen before, to see those new names appear and new people coming to the practice, and from such diverse backgrounds as well. That is what is most interesting, I guess.

And what is next for AI art? That is a very difficult question, of course. Those methods will get better; there will be exponentially more AI methods, exponentially more to choose from. I guess we might come to the point where AI models are like camera types: some people will have their favorites, some people will be experimenting with new ones all the time, always trying the next best thing, but it will not even be possible to try every AI tool that is out there. So practices might become more focused: you will have artists specialized in, let's say, one kind of AI model, and different artists specialized in other kinds, so maybe there will even be different names for that. There will be diffusion artists and, say, optimization artists, and we might not even use the term "AI tool" anymore as it becomes so broad, in a similar way to how we no longer use the term "computer artist", because computer art can be everything: generative art, JavaScript art, Adobe art, AI art; there are just so many subtypes. So I guess that might be the next step of how it grows.

Another comment, from Ivan, about DALL-E 2 and Adobe Firefly: so your goal is to create reference images for yourself, so that you could draw them by hand. Yes, interesting: "create me a computer made of seashells in isometric view". So that is the problem with many of the text-to-image tools, and with AI today in general: we see those fascinating use cases, but they are often quite broad. When we think of a cat on a skateboard and the AI generates something, it corresponds quite well to our query, but when we want something super specific, specific down to the details, once you experiment with all those tools you realize it is not that easy to do, and not easy to get down to the details without using external tools, post-processing, Photoshop and so on. So for Ivan, actually, I think what might be interesting for you is ControlNet. ControlNet is a kind of companion method that allows you to go from one distribution to another: for example, you might be going from drawings to paintings, or from isometric views to photographs. With such a model you are able to get something much more useful in your creative practice. It is somewhat similar to pix2pix, where you train the AI to transform one kind of imagery into a different kind of imagery. So maybe try checking that out.
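For reference, here is a hedged sketch of what a ControlNet setup can look like with the diffusers library: an edge map extracted from a reference image constrains the layout of the generated picture. The model names, file names and prompt are examples only.

```python
# ControlNet sketch: generate an image that follows the edges of a reference.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# 1) Turn a reference (a sketch, a 3D render, a drawing) into an edge map.
reference = np.array(Image.open("isometric_sketch.png").convert("RGB"))
edges = cv2.Canny(reference, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=2))

# 2) Generate an image that keeps that layout while matching the prompt.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

result = pipe(
    "a computer made of seashells, isometric view",
    image=control_image,
    num_inference_steps=30,
).images[0]
result.save("controlnet_result.png")
```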
Okay, thank you so much for those amazing comments. Our time is running out, so I will just answer the last ones. Yes, the ControlNet, Ivan, is this one; you can also try googling for user-friendly implementations, but it is essentially this kind of controlling of diffusion models. And the last question, from Carolina, that I would like to end with: what is the /blend command in Midjourney, and is it like making a mini GAN? The answer is no, and this is a very common misconception about GANs and diffusion models in general. What happens in Midjourney is that you are not training anything. When you use /blend, it takes the different images and finds their representations in the already-trained model, so it is not really training anything under the hood; it is trying to blend what it has already learned. It already has the knowledge, and it is trying to figure out whether there is something inside that huge knowledge that matches the images you give to the /blend command. Whereas if you train a GAN, you train an AI that knows nothing apart from your images: its whole world is your images, and it extracts all of its information from them. So those are essentially different processes, and I hope I explained it a bit.

So thank you so much, everyone, for attending the class, and also for your engaging comments; that was very lovely to see. I hope you all have a beautiful rest of the day and evening ahead.
