Friends, hello everyone! Oleg and the stabledif.ru channel are with you. Today we'll take a look at the current best LTX Video workflow with STG support, as well as a video upscale workflow based on SUPIR. This is a fast SUPIR setup; it upscales quite quickly and accurately. LTX Video is a technology for fast, high-quality video generation from an image and a description. There are two LTX workflows: Text to Video and Image to Video. Today we will talk about the Image to Video workflow, since it is the more promising one: here we can influence both the original image and the description in order to assign actions to our image. As for STG, it is spatiotemporal skip guidance for improving video diffusion sampling. Unlike CFG, which we usually use in our generations, STG takes the temporal dimension into account during diffusion sampling, and this affects the coherence of the video frames and their sharpness. Right now you can see a comparison of CFG and STG. With this workflow you can generate three-second,
ten-second, and even longer videos. This can be done in various ways: either generating one large video directly or breaking it into stages. On an RTX 4090, generating a ten-second video takes 2.2 minutes, and a five-second video is generated twice as fast, in just over a minute. On an RTX 3060 with 12 GB of video memory, a five-second video is generated in about 3 minutes, and a ten-second video in about 6 minutes. Using this workflow, you can generate a video in stages, change the prompt at each stage and select a more successful seed. But it is also possible to generate
an entire large video at once, 10 seconds or more. All you need to do is switch to a tiled decoder. But the tiled decoder degrades video quality, so it is more correct to generate step by step and get a large video at full quality. If you are interested in the Automatic1111, Forge and ComfyUI web interfaces, I have systematic, sequential courses for you on these web interfaces with my support and weekly streams. The ComfyUI course
has been updated, and a module on GGUF has been added with the most popular workflows at the moment. You will find more detailed information on the website stabledif.ru. The link will be in the description under this video. There will also be contact information for the administrator; if you still have any questions, be sure to ask the administrator. I'm waiting for you on the course. Now let's continue. To work with LTX Video you will need to install ComfyUI. You can find out how to do that in this video. Here the classic Portable
ComfyUI build with Python 3.11 is installed. At the moment this is the best solution. If anything changes, I will definitely write about it for these workflows. We still need to download and install some models. All the models that I will talk about now can be
found under this video on Boosty. Now let me show you which models go where. Open the installed ComfyUI and go to the ComfyUI folder. Then go to the models folder and into the checkpoints folder. Here you can create a separate LTX Video folder and place the ltx-video-2b-v0.9.1 safetensors model in it. This model weighs 5.5 GB. Now let's go
back to the models folder and into the clip folder. Here we place the T5 fp16 model. This time I recommend installing the larger fp16 model, which weighs 9 GB. You can, of course, try the fp8 model, but it seemed to me that fp16 interprets the prompt a little better here. GGUF versions of the model will also work if you install the GGUF loader; I will show that later. The next models I will show are the models for the upscale. Now let's go back
to checkpoints, and here you can create another folder called Supir and place the SUPIR v0Q fp16 model in it. This model weighs 2.5 GB, and it should be located here, in this checkpoints folder. We also need an SDXL model. Let's go back and again into checkpoints. You can also create an SDXL folder and put the RealVisXL V4 checkpoint in it. It is specifically the V4 model, and I will give a link to it, because the
V5 model, in my opinion, works worse. I recommend downloading the V4 model. This model is not Lightning, not Hyper, just a regular, full-fledged classic SDXL model. A Lightning model is not suitable for us, because with it the upscale is much worse. Now let's go back to the models folder and find the loras folder; go into it and then into the lcm folder. Again, you don't have to create this lcm folder, but I recommend
sticking to some kind of structure just so you don't lose your loras. I put this lora in the lcm folder, and it's called SDXL Lightning 8-step lora. I'll give you a link to it on Boosty as well. As the name suggests, this lora allows generation in eight steps, meaning we can significantly reduce the number of steps to speed up the generation of each video frame. And there are a lot of video frames, so this fast LCM-style lora is very important.
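To keep it all in one place, here is how the folder layout from this part looks (the subfolders are just the structure I suggest; the exact file names may differ slightly from the Boosty downloads):

```
ComfyUI/models/
  checkpoints/
    LTX Video/ltx-video-2b-v0.9.1.safetensors   (~5.5 GB)
    Supir/SUPIR-v0Q_fp16.safetensors            (~2.5 GB)
    SDXL/RealVisXL_V4.safetensors
  clip/
    t5_fp16.safetensors                         (~9 GB)
  loras/
    lcm/SDXL_Lightning_8step_lora.safetensors
```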
The Florence model, which we will also need, and SAM2 will be downloaded automatically when you run the workflow in ComfyUI for the first time. Under this video on Boosty you will also find the two workflows themselves. The first workflow is the LTX video generation one, and the second is called SUPIR Upscale for video; we will need that one later, after we generate. First of all, drag the LTX workflow into the workspace, and you will end up with a workflow like this. Some nodes, of course, may be red, and you may even get a message saying some nodes are not installed or found. In this case, click the Manager button
and go to the Install Missing Custom Nodes tab. Here you will see the nodes that you don't have installed, and you can click the Install button. I advise you not to just tick the checkbox but to click the Install button itself: when you click Install for each node, you will see a window asking you to select the version of that particular node. Let me show you with
an example. I already have all the nodes installed here, but I'll just click on some node and install it. Let's select a node like this and click Install. Look, this window appears; in the new, updated Manager exactly this kind of window is shown. Here you can select the version of these nodes, and
the nightly version, which is installed by default, does not always work correctly. If your nodes will not install, try deleting the node's folder from the custom_nodes folder, restarting ComfyUI, and installing the nodes this way, selecting, for example, the latest numbered version, that is, not "latest" but the one before it. Click Select and the nodes are installed. After that click Restart, the web interface reloads, and the nodes should now be installed. If that doesn't work, click "latest" and install the nodes that way. If that doesn't work either, try clicking another version of the nodes and installing it. In other words, you now have a choice of which version of the nodes to install. After rebooting
the web interface, you will have the normal workflow without any red nodes. Let's see what's in this workflow. It contains several groups. This entire upper part is the video generation itself. This part is video stitching: for example, if you generate two different videos in succession, you can load one here, load the second one here, stitch them into one video, interpolate with this node and get a single interpolated video. Why might this be needed? Look, the first group is image loading.
Here we upload the image. The image must be of high quality, and in my opinion it should preferably be one-to-one, square. From what I've tried, it seems to me that the square format works a little better. Next, in the resize image node, we set the resolution of our image to 896 by 896. In my opinion, this is the upper limit at which you get maximum detail: if you increase it, the detail does not improve much; if you lower it, the detail decreases. But if you don't have enough video memory,
be sure to lower it. There are small tips in the workflow, so you can read for yourself what you can and cannot choose here; pay attention to them. Next we have a model loading block, Load Model. Here the T5 fp16 model is loaded, but you could also use fp8 or GGUF. It's just that at this node you cannot select GGUF: if you want GGUF specifically, you need to install the GGUF nodes. This is done through the Manager: go to the Custom Nodes Manager, type gguf into the search, find these nodes,
click Install and install them. After that, simply double-click in the workspace, the node search appears, and here you select the GGUF CLIP loader node, this option. There you choose the model you need; for example, Q4 is a small but sufficient model. You must also choose the LTX type, there is such a choice there too, and then reconnect this node to all these places accordingly. The old node you either bypass with Ctrl+B or simply delete. I'll put everything back now; I don't need this node.
But now you know how to use GGUF models. They reduce video memory consumption. If your video memory is critical, you can reduce consumption this way by choosing either fp8 or GGUF; GGUF reduces video memory consumption more dramatically. Next, the 0.9.1 LTX model is selected here. This is the latest model and it has the best weights, which is why we use exactly this one, not some third-party fine-tune; we connect
this one. Now let's go down a little lower; here we get a prompt from our image. We feed the image connection here, and Florence gives us a fairly long, detailed prompt. We then replace certain words that appear in this prompt, for example "image" with "video" and so on. This is to get more dynamics in the scene so that we get a good video.
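In essence, this is a simple word substitution on the Florence caption. A minimal sketch of the idea (plain Python, not the actual node's code; the replacement pairs are just examples):

```python
# Sketch of the word substitution applied to the Florence caption.
# The pairs below are illustrative; in the workflow you edit them by hand.
def adapt_caption(caption: str) -> str:
    replacements = {"image": "video", "photograph": "video"}
    for old, new in replacements.items():
        caption = caption.replace(old, new)
    return caption

print(adapt_caption("The image shows a woman standing by a window."))
# -> "The video shows a woman standing by a window."
```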
Now let's move to the right; here we have two large groups. Look, these two groups are exactly the same group, just cloned. The point is that you can generate fairly short videos, for example 97 frames (someone's video memory may only allow 73 frames), and then take the last frame and extend the video as many times as you want. Here it is extended twice, but you can
go ahead and copy this entire group with Ctrl+C, then press Ctrl+Shift+V so that all your connections are also copied. We place it here and feed the connection from the last image, the last frame, into it. And now we will have a tripled video. But again, this is not necessary. If you don't have enough video memory, then as a rule it runs out at the decoding stage, when the latent space is converted into an image. That is the final stage before output to video format, before the frame sequence is written to video. And here
this option is provided: you can select a tiled decoder. This is not the normal VAE Decode (Tiled) we use for images; it is a special tiled VAE decoder for the video format. It is as if all your generated frames were in one very large image, so they need to be split up, and that is exactly what this decode node does: in this case it splits the sequence into chunks of 48 frames. That is, if you make, say, 200 frames, all 200 frames arrive at the final decoder and need to be recoded from latent space into images, and this tiled decoder can cut the video into 48-frame chunks and decode them one at a time. But there is a downside: this decoder degrades image quality. Note also that a different VAE is used here, the 0.9 one, while the model above is 0.9.1. The 0.9.1 VAE is not supported by this tiled decoder (it works with the regular decode node but not with this one), so a separate VAE is connected, and I will definitely give a link to it too. The tile size is also indicated here; I set it directly.
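To make the temporal tiling idea concrete, here is a minimal sketch of the logic (simplified pseudologic in Python, not the actual node's code; overlap and blending between chunks are omitted):

```python
# Temporal tiling during VAE decoding: instead of decoding all N latent
# frames at once, decode them in chunks of 48 to cap peak VRAM usage.
def decode_in_chunks(latent_frames, vae_decode, chunk_size=48):
    decoded = []
    for start in range(0, len(latent_frames), chunk_size):
        chunk = latent_frames[start:start + chunk_size]
        decoded.extend(vae_decode(chunk))  # only one chunk in VRAM at a time
    return decoded  # chunk boundaries are where the slight shimmer shows up
```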
The main emphasis here is on this moment of cutting frames: our sequence is cut every 48 frames. You can see an example of the picture degradation here, although this may not be the best example; in fact, where there is more photorealism, the grayness and the slight loss of detail are more noticeable, and there is also a small temporal artifact. It appears where we cut the sequence,
that is, around the forty-eighth frame: we get a small glitch like this, a slight shimmer. Otherwise, this option is quite usable. You can make a very large video with literally one such group, 200 frames, around 10 seconds. And it is easy
to switch between these groups. If both are turned on like this, the option without tiling will work, even if they are enabled at the same time. But if this group is turned off, like this, you can use these switches to switch: when this one is off, the other group works. I suggest you still stick to the regular decoder: it is of better quality, and the image
is richer and more detailed. If there simply is not enough video memory, then switch to the tiled decoding option. In each generation group you can specify your own prompt. Here there is an addition to the prompt, for example, "a woman straightens her hair and changes her pose." But the main prompt is still formed by the Florence model. It is quite large and describes the image in great detail. A detailed description of the image is very important here,
otherwise the image will simply begin to drift: when there are strong changes, it will change a lot and never return to its original state. The prompt that you write will always be at the very beginning, and the prompt generated by Florence will always be at the very end.
As for the number of frames, it is specified in this node. Here 121 frames are specified, which is roughly a five-second video. If you don't have enough video memory, you can lower it a little, to 97 frames or even 73 frames. But I tried it on an RTX 3060 12 GB, and 121 frames work very easily, no problems at all, even though it has only 12 GB rather than 24 like mine. You can read about the influence of frame rate on the dynamics of a scene here.
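A quick sanity check on these frame counts (assuming 25 fps playback; the pattern of 8·k + 1 frames is my reading of why the counts are 73, 97 and 121 rather than round numbers):

```python
# Duration math for the frame counts mentioned, assuming 25 fps playback.
for frames in (73, 97, 121):
    print(f"{frames} frames = {frames / 25:.2f} s at 25 fps")
# 73 -> 2.92 s, 97 -> 3.88 s, 121 -> 4.84 s (the "five seconds" quoted)
```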
But in principle, 25 frames per second is the golden mean. Don't worry about it too much and don't experiment; it is a really good value, so leave it alone. As for the number of generation steps, in theory the more there are, the better the generation, but the generation time grows a lot while the quality barely changes, so 25 steps is also quite normal. As for the scheduler, "normal" is a perfectly fine scheduler. You can also select "simple", and there won't be much difference. Other schedulers don't perform better, so you don't
even have to waste time on them. As for the sampler, euler shows the optimal result. lcm shows another interesting result, but it produces a slightly jerky image; the image quality seems higher, but there is no smoothness. So either euler, or you can choose euler_ancestral. Other sampler options do not, in principle, perform better. As for the CFG and STG values, you can read about them here, but these are the optimal values, I'll say right away; you most likely won't see any changes for the better. I tried a lot of different options here,
and to be honest, the generation didn't get any better. As for generation quality, I should also say right away that you should not choose an image with a very detailed background. It is better to choose one where the background is either blurry or completely simple and monochromatic; this background here is fine. If the background is detailed, such an image will be noticeably harder to generate. If the object itself is very complex, or it is holding something in its hands, generation will also be harder. My seed is always fixed. I usually change it
by one, so that I can always return to a previous generation without losing it. Some generations are successful, some are not, and sometimes it's quite difficult to find a successful one and return to it later. As a result, you get a video like this. You can save this video right away and then feed each part into the upscaler separately, this video separately and that video separately. Then the upscaler will also consume less video memory, and you will be able to get a more detailed image, because the more video frames you have, the harder it naturally is for the upscale to work. And stitching these videos together is very
easy using this workflow: you simply upload one video, then the second, and interpolate only at the final stage, after stitching. Now I have launched the generation so that all our images are displayed. You can see the generation time: 121 frames generate in 80 seconds, and in the second group I've chosen 97
frames. Here I have one prompt, and here I have a different prompt. That is, you can try to set various actions here, and at least with the seed select a more successful generation option. And then there is this group at the very bottom, which I also mentioned, where we output the last frame. Look, here I am displaying frame 121. Why?
Because here I have 121 frames selected, so the last frame is displayed here accordingly. And from this frame I continue generation in the next group: I feed this frame as input to that group instead of the original image, and the video simply continues. This frame, of course, will not be as detailed as the first one, and the video will gradually deteriorate. So be sure to look at this last frame. If it turns out unsuccessful or somewhat blurry, it is better to change the seed and generate another version of the video so that the last frame is of the highest quality. If that doesn't work out but you really liked the video, you can select some other frame, for example the ninety-seventh, and feed it in here instead. But then, when you stitch it all together,
be sure to take only 97 frames from this video, otherwise there will be an incorrect jerk. How can this be taken into account? Here, when you stitch the video, you can set, for example, Frame Load Cap to 97 frames, and in that case the remaining frames will simply be cut off. You can do this manually; there is no automatic option for it here. That is, you see, here we simply take one sequence, a second sequence, and automatically stitch it all into one single video.
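The Frame Load Cap logic is plain truncation before concatenation. A minimal sketch of why it matters (illustrative Python, not the node's internals):

```python
# If generation was continued from frame 97 of a 121-frame clip, frames
# 98..121 of the first clip duplicate the start of the second clip; capping
# the first clip at 97 frames avoids the jerk at the stitch point.
def stitch(clip_a, clip_b, frame_load_cap=0):
    if frame_load_cap:                  # 0 means "take all frames"
        clip_a = clip_a[:frame_load_cap]
    return clip_a + clip_b              # simple frame-list concatenation

stitched = stitch(list(range(121)), list(range(121)), frame_load_cap=97)
print(len(stitched))                    # 218 frames in the single video
```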
Therefore, you will then have to save each video separately: right-click, click Save Preview, select the desktop, and save it there. Then do the same here: Save Preview with the right mouse button, and save it as a second video. After that, bypass this group so that it does not get in your way and does not generate the same thing 10 times. Here you click to choose a video
and select the first video. Here you click to choose the second video and select it. Here, accordingly, you set how many frames you need, for example 97 instead of 121. If it is left at zero, that means all frames are taken and nothing is cut off: all the frames from this video and all the frames from that video. But
if you put 97 here, it stitches these two videos into a single video. After this, the interpolator works and interpolates this video up to 50 fps in this case. Look, 12 frames per second are indicated here, and 50 there, provided that four frames are produced instead of one, that is, three additional frames are built up. This is done so that we can use the video afterwards in the SUPIR upscale.
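The interpolation bookkeeping at this stage, as I read it from the nodes (12 fps source, a 4x multiplier, 50 fps set on the output):

```python
# One source frame becomes four (three synthesized in between), and the
# output frame rate is raised so that playback speed stays the same.
source_fps = 12     # rate of the sequence fed to the interpolator
multiplier = 4      # 1 frame in -> 4 frames out
print(source_fps * multiplier)  # 48, i.e. roughly the 50 fps set in the node
```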
After SUPIR, the video comes out at exactly twelve frames per second, but I will explain these nuances when I talk about the SUPIR workflow. Let's collapse this group now; in principle, I think everything here should be clear. Yes, this is where we take the frames from. Let me bypass this group. Look, if
you can't find these icons, they can be enabled here: click Settings and find rgthree. rgthree is a special set of nodes; if you don't have it, install it through the Manager just as I showed. Now click this button, go down a little lower, enable this checkbox and check these two checkboxes. Click Save and close this window with the cross, and all these icons will appear in the corner of each group. The first icon
starts generation of this entire group: if you click it, generation of this group starts accordingly. Everything that this group needs in order to work will naturally also be run; that is, those other groups will be launched in any case, because they are connected to this group. And this icon here is a bypass: if you press it, all these nodes turn off. They are simply connected by a direct wire, a pass-through connection, roughly speaking, and they will not run when you press the Queue button. Let's take another look at this workflow. Look, the first group:
here we have the last image connected to the input image. This is the frame that we feed here for stitching into the final video. I repeat once more: if you don't have enough video memory and won't have enough for the upscale, save each video separately and then stitch them. If you have plenty of video memory like me, it is much more convenient to save one big video at once. Right-click, click Save Preview and save it somewhere on the desktop for further processing in SUPIR. Now let's move on to the SUPIR workflow. This is the SUPIR workflow; you also drag it
into the workspace like this. Install any red nodes in the same way, through the Manager's Install Missing Custom Nodes. If some nodes do not install, you already know what to do: delete them from the custom_nodes folder, go to the Custom Nodes Manager, find the node, install it through the Install button and select the desired version. Now let's look at this upscale. Look, you upload a video: the first group loads the video. Click here and select the desired video. The video is selected, and here we indicate that we
take only every second frame. That is, we will not upscale every frame, but every second one. Why? Because it's simply faster. This does not affect quality much; on the contrary, your video will be smoother. I tried different options, but you can always upscale all the frames, it will just take twice as long. This is the optimal option. You can try,
if you don't believe me, to check what happens if you upscale every frame or not; I assure you, you won't see much difference. The point is that we will also use an interpolator, so the video will be smooth in this case too. In another node here, we set the size of the final video by width or height, whichever suits you; zero means the other side is simply taken by proportion. And note, this size doesn't mean that your video will be of higher quality; the upscale size itself is set in another place, which I will show you.
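The every-second-frame selection and the resulting frame count are easy to verify (a sketch; the parameter corresponds to the load-video setting described above):

```python
# Taking every 2nd frame of the stitched 218-frame video before upscaling.
frames = list(range(218))       # the stitched video from the LTX workflow
select_every_nth = 2            # the load-video setting described above
to_upscale = frames[::select_every_nth]
print(len(to_upscale))          # 109, the output frame count shown
```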
Here we have the number of frames at the output. Why 109? Because we only upscale every second frame: in fact there are 218 frames here, but we take every second one, hence 109. Be sure to read the description; everything is written here. Now let's move on to this group and un-bypass it. What is it for? This group is needed so that you can see how your upscaler is configured: when you enable it, the upscaler will process only one frame,
and you can see whether this upscale suits you or not: it was like this, it became like this. If it suits you, then you bypass this group again, and then all 109 frames will be processed. This is just so you don't waste time upscaling all 109 frames only to look and say: "No, something's not right." You can check on one frame, and only then run the configured upscaler through the entire array of frames.
Now let's go a little lower. Here we again have a Florence group. Florence is used almost everywhere to describe an image. Here we get a big description like this. All of it happens automatically; we will not describe anything manually. We also do not describe each frame separately: only one frame is described, and this prompt is given to all frames, because describing each frame is just a waste of time and is not necessary. Well, then we have the most interesting part. We have two groups here. The first group
builds a mask from the image and then creates a video like this with a mask: we get a frame within a frame, and a mask is assigned to each frame. After that, this next group takes it and crops our video into squares like this along the mask. Why is this done? To reduce the upscale area of the image and maximize quality. You don't need to upscale the blurred background, because, as I said,
video usually comes out better with a blurred background. We avoid upscaling the background and don't waste video memory, which, by the way, is the more important point, because it may not be enough for some large upscales. This gives both efficiency in detailing and speed in the upscale, significant speed, especially if your main object is small. Imagine it is small: you don't need to upscale the whole image;
a large part of it gets cropped away, and our speed increases accordingly. In general, this group is responsible for cropping and this group for creating the mask. If you don't need this, if you just want to upscale the entire image, you can do that too: simply turn off both of these groups like this, and the workflow will automatically work on the entire image. That is, your final image will be, say, 1080 pixels in size, and SUPIR will work at that resolution.
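The crop-to-mask group is essentially bounding-box extraction. A minimal NumPy sketch of the principle (not the actual node's code; the mask region here is hypothetical):

```python
import numpy as np

# Crop a frame to the bounding box of its mask so SUPIR upscales only the
# subject, not the blurred background (principle only, not the node code).
def crop_to_mask(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    ys, xs = np.nonzero(mask)   # coordinates of the masked (subject) pixels
    return frame[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

frame = np.zeros((1080, 1080, 3))
mask = np.zeros((1080, 1080))
mask[300:800, 400:700] = 1      # hypothetical subject region
print(crop_to_mask(frame, mask).shape)  # (500, 300, 3): far fewer pixels
```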
Now let's turn it all back on and look: here we have the upscale setting for our masked box, up to 1224, which can be significantly higher in resolution than this area given our sizes. And this does not mean that you need to set a correspondingly larger final size here, say 1080; that size will be enough. It simply affects the quality of the upscale: if you increase the size here, set it to 1500, then the detail of the final video, even at 1080 here, will still be much better. But you most likely won't have enough video memory: 24 GB, for example, is not enough for 1500 pixels given that I have 109 frames. But if I had split this video, as I told you, into two videos instead of one and pushed them through here, then I could have set 1500, maybe even more, and increased the detail much further. But the time for such an upscale would, of course,
also be much greater. That is, the higher the resolution you set here, the longer the upscale and the greater the video memory consumption. This is important. After this, our image is fed into the SUPIR group. Here it is, opened up, and the RealVisXL V4 model is selected here, namely the regular V4 model, not Lightning, not Hyper, just a regular model. And the LCM lora is selected for it, the eight-step one, and with its help we can
significantly reduce the number of steps and still maintain good quality. Of course, without this lora the quality would be higher, but the generation time would be much longer, because the number of steps would have to be increased to 25-30. And here the number of steps during generation is
only 10. This reduces the upscale time approximately threefold. Here we select the SUPIR model. Be sure to reselect all these models, otherwise you will simply get an error, because my folders, as you saw, are partly nested and the models are named in my own way; you most likely will not have these models selected by default. That's how it is if
you leave everything as is. Now the next point: look, here we have various decoders and encoders, and there are checkboxes called use_tiled_vae. A checkbox like this should be turned on only if you do not have enough video memory at that stage. If you turn it on, the running time of that node increases greatly, several times, but then you will most likely have enough video memory in any case. If you have enough video memory anyway, leave it all turned off like this.
The sampler settings are like this: no tiles are enabled here, and there is no need to keep the model in memory either; it would consume video memory and you definitely don't need that. There is no need to set the tile option here; it won't change anything in principle, but it will greatly increase the upscale time. So leave these values as they are. This decoder is very important: most likely, many of you will not have enough video memory exactly here. If this happens to you, you have several options for solving the problem. The first option is to simply
reduce the size of the upscale, but then you lose upscale detail. The second option is to split your video, that is, make two separate shorter videos out of it so that there are fewer frames; the greater the number of frames, the more the video memory fills up. And the third option is the simplest: you just tick this checkbox, turning on the tiled VAE decode, and in that case
you will definitely have enough video memory. But the decoding time will be very long; it will probably increase about five times, maybe even more. So only do this as a last resort. It seems to me that it is better to split the video into two parts: in that case you lose neither quality nor time; it will just be a little awkward to glue them together later. But it will be much faster than waiting for this decoder. So in this case it is better to leave use_tiled_vae set to false; it will be much faster. But
if you have no other option, then just turn it on, and you will definitely have enough memory. Well, after that we move on to this last group. Our images appear here, and we see our video already upscaled. The frame rate is 12. Why 12 and not 25? Because 25 cannot be divided in half: we take 24 and divide it in half, getting 12 frames per second. As a result, we have this video with a reduced frame rate. But we don't save this video; we feed
this sequence into the interpolator. Here we select two frames, so from one frame we make two, and at the output we get 24 frames per second. In that case we need to set 24 here, given that we set two here. If we want an even smoother video, we can set four frames built from one, that is, three
additional frames are added, and the frame rate here is set accordingly so that the video speed stays the same: in that case we select 50 frames per second, and we end up with a really smooth video. I also haven't said anything about these nodes that are turned off: they are just for viewing the intermediate frames. There would simply be a large set of frames here, 109 frames, so you can see what's going on. They are bypassed so as not to waste video memory, because a large number of images displayed at the same time will naturally also occupy video memory. So if you need to look at something, remove the bypass, take a look for testing, and then it's better to turn it back off so as not to waste video memory.
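Summing up the rate matching in this last group (values as described above; the rule is simply output fps = source fps times the interpolation multiplier):

```python
# Matching the output frame rate to the interpolation multiplier so the
# playback speed stays the same.
source_fps = 12    # 24 fps halved by taking every 2nd frame in SUPIR
multiplier = 2     # one frame becomes two
print(source_fps * multiplier)  # 24, the rate to set after interpolation
```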
In general, this is the video we ended up with. As for LTX generation: LTX is developing very well. STG has been added, which significantly improved generation quality. And the advantage of this technology is that you can try it on almost any card: an RTX 3060 twelve-gigabyte card easily handles these workflows. At least I
worked on such a card too, and the generation time is acceptable; even on an RTX 3060 it is quite possible to generate video. If we take some more advanced video generators, the generation time is much longer, up to half an hour on an RTX 3060, which is almost impossible to wait out. Here everything is quite comfortable in terms of generation time, so be sure to try it: it works great even on weak video cards. Friends, you can find more detailed instructions, all the workflows and all the links I talked about today on Boosty. I also invite you to
my courses on Automatic1111 and ComfyUI, with my support and my weekly streams, which take place on Fridays at 19:00. There we look at a lot of interesting things, all sorts of new releases, and the questions you have while taking the course. The course is constantly changing. Why? Because ComfyUI is changing. It is constantly changing, and
additions have to be made. I made the last addition literally over the New Year's holidays: I added a whole module on FLUX, with the most popular workflows for the FLUX model. Also, friends, be sure to come to my big Telegram chat. There is a very large community there; you can share your thoughts, ask questions, and chat with like-minded people. And also visit the Telegram channel. All links
will be in the description under the video. And with that, good luck to everyone and good mood. Bye everyone.