PS5 Pro Technical Seminar at SIE HQ

Hi, I'm Mark Cerny. Today I'd like to do a deep dive into the technology behind our latest console, PlayStation 5 Pro. To be clear, this is a bits-and-bytes talk with no game footage at all; principally, I'll be explaining what we put into the PlayStation 5 Pro GPU and why.

Historically, every seven or so years there are enough interesting technological advances that we release a new generation of console, like PS3, PS4, or PS5. These introduce broad improvements, like a more powerful CPU and GPU, and also significant new capabilities, like compute shaders on PS4 or the SSD and 3D audio on PS5. Games can then be created with the whole console feature set in mind, which allows for a tremendous step up in what the player experiences. It does take a certain amount of time for game creators to get up to speed with a new console, but we're all prepared for that, because the benefits of moving to a new generation are so high.

Recently there have also been console releases during a generation, like PS4 Pro and now PS5 Pro. These are much more tightly focused, typically on the GPU, and what developers are making are improved versions of games, never dedicated games. So the targets for pro consoles are very different. First, the work that needs to be done by game creators for the pro console needs to be kept to an absolute minimum; they already have a lot of pieces of hardware they're supporting, and we don't want to add much to that burden. Second, those tightly focused improvements need to be pretty significant; the games have to play noticeably better.

One of the trickier aspects of console design is that creating a console is roughly a four-year journey. In order to launch PS5 Pro in 2024, we were actually trying to work out the key features back in 2020, in other words, at a time before PlayStation 5 had even been released. What we came up with was the set of improvements we've been calling the Big Three.

First, there's that larger GPU. The idea is simple: pretty much anything the game is rendering on PlayStation 5 should get a lot faster on PS5 Pro. Second, there are the upgrades to the ray tracing hardware; games that move to the new architecture should get a substantial additional speed boost. And finally, there's AI-driven upscaling. The upscaling technology is a combination of custom hardware for machine learning and an AI library called PlayStation Spectral Super Resolution, PSSR for short, that runs on top of that hardware. PSSR analyzes the game images as it upscales them and can add quite a bit of crispness and detail.

Now, to do all that, we needed a larger and more capable GPU. This is what we used on the original PS5: a GPU from our partner AMD. More specifically, it's an RDNA 2 GPU, meaning that it used the second generation of AMD's RDNA technology. The GPU has subunits called work group processors, or WGPs; PlayStation 5 has 18 of them. The GPU on PS5 Pro is much larger: it has 30 work group processors. It's also what I'm calling a hybrid RDNA GPU, which is to say it combines multiple generations of RDNA technology. The base technology for PS5 Pro is somewhere between RDNA 2 and RDNA 3; I'm calling it RDNA 2.x, and as I'll explain, that choice makes it much easier for game developers to port their games to the new console. Ray tracing uses what I'm calling future RDNA technology; it's roadmap RDNA that's well past the feature set of today, and it's showing up here first. And machine learning is custom, or to be more specific, custom enhancements to RDNA. Just to be clear, I may say machine learning or ML or AI today; these are just different words for the same topic.

Now, to support that GPU and the overall plan for PS5 Pro, we needed faster memory and we needed more memory. The faster part is pretty simple: the system memory on PS5 Pro has a bandwidth of 576 GB per second, which is 28% higher than PlayStation 5. More memory is needed for a variety of usage scenarios on PS5 Pro. Integrating PSSR takes memory; a few hundred megabytes are needed for its internal buffers.
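As a quick sanity check on that bandwidth figure, the uplift can be computed directly. The 448 GB/s for the base PS5 is the publicly stated figure, an assumption here rather than a number from this talk:

```python
# Memory bandwidth uplift, PS5 -> PS5 Pro.
# 448 GB/s for the base PS5 is the commonly published spec (assumed);
# 576 GB/s for PS5 Pro is the figure quoted in the talk.
ps5_bw = 448.0   # GB/s
pro_bw = 576.0   # GB/s

uplift = pro_bw / ps5_bw - 1.0
print(f"{uplift:.1%}")  # about 28.6%, matching the quoted "28% higher"
```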
Adding ray tracing takes memory: ray tracing uses data in the form of an acceleration structure that can easily be a few hundred megabytes in size. And if the game is targeting higher resolution, that can take memory as well. It can be just a little memory, perhaps, if the maximum rendering resolution is being increased a bit, or it can be a lot of memory, for example if the game is targeting 8K. So we supply over a gigabyte of extra memory to the games, and we do this in the same way we did on PS4 Pro, which is to say we added hidden, slower RAM. We used DDR5 for that and moved a lot of the operating system into it; that keeps games, which need high bandwidth, in fast memory.

Getting back to the hybrid GPU, I'd like to take you through each of the three aspects of our strategy, beginning with the choice of RDNA 2.x as the base technology. AMD is continuously updating its GPU technology: RDNA 3 has more functionality and is more performant than RDNA 2, and there's even a chance to bring in future RDNA technologies, like we did with ray tracing. If we're making a new generation of console, of course we want the latest and greatest. But with a mid-generation release like PS5 Pro, we also have to consider that a single game package needs to support both PS5 and PS5 Pro, and that limited the degree to which we could adopt RDNA 3 technologies. For example, games have something called shader programs that execute on the GPU; a game might have over 100,000 of them. If we adopted RDNA 3 technologies to the extent that code compiled to run on PS5 Pro wouldn't run on PS5, that would mean creating two versions of each executable piece of code, one for PS5 and another for PS5 Pro. That's a massive complication. The game package needs to be patched to include that second version, and then the game needs to either selectively load just the appropriate version or find room for both versions in system memory. It's a big burden for the developers. Consequently, PS5 Pro uses a version of RDNA that I'm calling RDNA 2.x, which brings in a number of features from RDNA 3, but not anything that would cause that degree of complication. For example, aspects of vertex and primitive processing are faster on PS5 Pro; that comes from bringing in parts of the geometry pipeline from RDNA 3 that are powerful but either trivial for the game to adopt or, better yet, invisible to the game program.

One thing I'd like to clear up is the erroneous 33.5 teraflop number that's been circulating for PS5 Pro. That number isn't anywhere in our developer docs; it comes from a misunderstanding by someone commenting on leaked PS5 Pro technical information. Part of the confusion comes from RDNA 3 architectures having double the flops of RDNA 2 architectures. To quote Digital Foundry on this topic: it's a nice little bonus to have twice the flops, but it doesn't do anything like double real-world performance. So there's a certain amount of flop-flation going on here. We did not bring in the doubled floating point math from RDNA 3, because achieving that bonus in performance would require a recompile for PS5 Pro, and as I said, having two versions of each compiled piece of code would create more work than we're comfortable asking the developers to do.

Here, then, are the correct stats for PS5 Pro. It's pretty simple: PS5 Pro has 30 work group processors, which is 67% more than PS5 has, so the flops should be 67% higher as well. If we assume a pretty common operating frequency of 2.17 GHz, the math works out to 16.7 teraflops on PS5 Pro. Of course, teraflop numbers are pretty meaningless; what isn't meaningless is the performance of the PS5 Pro GPU. 67% more work group processors means that we can create synthetic tests that show 67% faster processing. In practice, though, there are a lot of factors involved, such as memory bandwidth, or even how a particular game engine responds to the details of the new architecture. So a game team might be looking for something more like a 45% increase in rendering speed. That's still a huge improvement.
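The teraflop arithmetic can be reproduced directly. The 128 FP32 lanes per WGP (four SIMD32 units) and 2 FLOPs per fused multiply-add are standard RDNA 2 figures, assumed here rather than stated in the talk:

```python
# Reproducing the 16.7 teraflop figure for PS5 Pro.
# 128 FP32 lanes per WGP and 2 FLOPs per FMA are standard RDNA 2
# assumptions; 30 WGPs and 2.17 GHz are from the talk.
wgps     = 30      # PS5 Pro work group processors
lanes    = 128     # FP32 lanes per WGP (4 x SIMD32)
freq_ghz = 2.17    # "pretty common operating frequency"

tflops = wgps * lanes * 2 * freq_ghz / 1000    # 2 FLOPs per FMA
print(f"{tflops:.1f} TFLOPS")                  # 16.7 TFLOPS
print(f"{30 / 18 - 1:.0%} more WGPs than PS5") # 67%
```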
At that level of performance, if a game is running at 60 fps and taking 16 milliseconds to render a frame on PS5, then that same frame could be rendered in 11 milliseconds on PS5 Pro. That leaves 5 milliseconds to do something new and exciting, like adding ray tracing, which is the second of our three key improvements on PS5 Pro.

There's a passion in the game development community for ray tracing. Even in 2020, before the launch of PlayStation 5, we could see creators using ray tracing to add reflections and improved lighting to their games. At the same time, calculation costs for the rays were pretty high, so when we kicked off development of PS5 Pro later that year, one of our top priorities was finding ways to accelerate that computation. Those conversations with AMD led to a very nice feature set that's showing up here first. Note that there are two factors increasing the performance: it's not just that there are 67% more work group processors; thanks to the new feature set, each one is more capable. It's difficult to quote an exact speedup, because it's very dependent on the specifics of usage, but we commonly see the calculation of the rays occurring at double or triple the speed of PlayStation 5. The most impactful new features in PS5 Pro relate to a new acceleration structure and stack management in hardware.

There's a lot to unpack here. First, let's talk about the improvements related to the acceleration structure. In order to use ray tracing on PlayStation 5, you need to have data in system memory that describes your geometry, say a million triangles' worth. Then there's something called an intersection engine inside each of the work group processors that lets you check to see if a ray hit any of those million triangles, and which it hit first. It would be too slow to test each ray individually against all million of those triangles, so there are also boxes in the data structure. These boxes let the ray tracing hardware more efficiently home in on the triangles that might intersect. For example, we can see that the ray misses that upper left box, so there's no need to test the ray against any of the triangles contained within it. The boxes are actually in a hierarchy, starting with big ones and progressively reducing in size; every time we hit a box, we test against the boxes nested within it, until ultimately we reach some triangles we can test against. Together, those triangles and boxes are called the acceleration structure.

On the original PlayStation 5, we used a type of acceleration structure called a BVH4. BVH stands for bounding volume hierarchy, meaning a hierarchy of boxes, and the 4 indicates that the boxes are in groups of up to four. The intersection engine can then check a ray against up to four boxes a cycle, or one triangle. Generally speaking, there's a lot more checking against boxes; that's what primarily determines the performance of the ray calculations. PS5 Pro adds a BVH8 option for the acceleration structure, where the boxes are efficiently encoded in groups of eight and the intersection engine runs twice as fast: a ray can be tested against eight boxes a cycle, or two triangles. That doubling of the ray intersection speed has a great theoretical impact on ray tracing performance, but real-world cases also need a solution to the problem of divergence. That's what led us to our second big feature, which is stack management in hardware.

Before I can explain that feature, though, I have to explain divergence. The work group processors handle groups of 32 or 64 items at once; they could be pixels or vertices or rays. This strategy is called SIMD, single instruction multiple data, so SIMD32 means the same operations are being performed on 32 items. This works very well when all 32 items are getting the same treatment. For example, 32 pixels from a triangle all read from locations in the same texture, and then those 32 pixels all need a lighting calculation; this is called coherent processing. A difficulty arises with divergent processing, where some of the pixels need one action taken and others need something else. In this case it's quite possible that the processing takes twice as long, and in the limit, if all 32 items need different handling, it's possible to be dozens of times slower. So when 32 rays are being processed together, the degree of divergence has a big impact on performance. Ray tracing can be fairly coherent: when we compute simple shadows from the sun, the rays are all parallel. But ray tracing can also get extremely divergent: if rays are bouncing off of a curved surface or a bumpy object, then potentially they're all heading in different directions.

The shader code needed to handle divergent rays on PS5 is reasonably complicated. Part of what the code has to do is manage a stack; the internal structure of the BVH is quite complex, and each of the 32 rays can be traversing it in a different fashion. On PS5 Pro, stack management is in hardware, which greatly simplifies the shader program. It's shorter, which means it's faster, and since it's handling fewer cases, there's less divergence, which further increases the speed of execution. You may have noticed that there are now two versions of the code needed: a longer one for PlayStation 5 and a simplified one for PS5 Pro. But this need for two versions only applies to shader programs that are calculating how rays travel through the scene, and there are typically not too many of those; in fact, with some games, it's just a single shader program that needs the two versions.

Putting that all together: it's great to have the higher performance, but even better, we're seeing more consistently high performance on PS5 Pro. That consistency comes from the improved handling of divergence. Performance testing on PS5 Pro tends to show a good boost for the coherent cases, like shadows or reflections off of flat surfaces, but a much nicer boost for the divergent cases; the stack management hardware is really helping there. Having more consistent performance across a broad set of use cases will go a long way toward easing adoption of ray tracing.
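To ground the stack-management discussion, here is a much-simplified sketch of the kind of traversal loop a shader has to run, with an explicit per-ray stack. All names, the node layout, and the callbacks are illustrative, not the actual console API; on PS5 this bookkeeping lives in shader code, while PS5 Pro moves it into hardware:

```python
# Illustrative shader-style BVH traversal with an explicit per-ray stack.
# Interior nodes hold child boxes (up to 4 for BVH4, 8 for BVH8);
# leaves hold triangles. The stack juggling below is the work that
# PS5 Pro's hardware stack management takes off the shader's hands.

def trace(ray, root, hit_box, hit_triangle):
    """hit_box / hit_triangle are callbacks standing in for the
    intersection engine's box and triangle tests; hit_triangle
    returns a hit distance or None for a miss."""
    closest = None
    stack = [root]                  # per-ray traversal stack
    while stack:
        node = stack.pop()
        if node.is_leaf:
            for tri in node.triangles:
                t = hit_triangle(ray, tri)
                if t is not None and (closest is None or t < closest):
                    closest = t
        else:
            # Test the ray against the child boxes (4 or 8 per cycle
            # in hardware) and push only the ones the ray enters.
            for child in node.children:
                if hit_box(ray, child.box):
                    stack.append(child)
    return closest  # distance to nearest hit, or None for a miss
```

Because each of the 32 rays in a SIMD group can take a different path through this loop, their stacks get out of step; that is exactly the divergence described above.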
The final improvements to the GPU are for machine learning. There are a lot of uses for machine learning, or AI, whichever term you prefer. Large language models and generative AI are quite interesting tech, but with ML it's also possible to go after a very specific target, which is to give the games a graphical boost. One of the key ways that can work is that the game renders less. There are 8 million pixels on a 4K TV; if the game renders sparsely, say a quarter of those pixels, it can do it a lot faster, and then the right neural network can intuit how to fill in those gaps and make a high-quality image. Another way to think about this, which is not quite as accurate, is that the game renders a smaller, lower-resolution image and then uses the neural network to upscale that image. This is called super resolution, and it's part of a whole family of strategies that reduce the work involved in rendering the game images. There's also frame generation, or frame extrapolation, where a neural network inserts additional frames between the ones the game renders; that can really reduce the choppiness of low-frame-rate games. Neural networks can also be used to turn noisy, staticky images into smooth ones, which is an issue that crops up frequently, particularly with optimized ray tracing. Having said that, super resolution is definitely the focus of our current efforts.

It's important to note that high-quality upscaling changes the way we should be thinking about game rendering resolution. Let's imagine three games that are rendering at various resolutions. A reductionist view is that the 1440p game engine is the best and that the 1080p game is clearly flawed, but after a super resolution pass, these are all ending up at 4K resolution for display. The conversation really needs to be about what's important: image quality. When game creators improve their lighting or materials, or add ray tracing, then rendering each pixel can get more expensive and the resolution will drop; they move up on this chart, and that's perfectly fine, as long as the upscaling technology is ensuring that the result is a crisp, beautiful image and not a blurry one. A different way to say that is that high-quality super resolution lets game creators focus on fewer, richer pixels and significantly improve the resulting image quality. It's a world where internal rendering resolution is not the primary concern, and that's the world we want to be in.

When we're using these strategies for graphics, the work isn't exclusively ML; the neural networks tend to be preceded by conventional processing and followed by some as well. It's the piece in the middle that's the neural network. More specifically, it's a type of neural network called a CNN, which stands for convolutional neural network. Here is a simplified CNN for super resolution; it's not quite the one we use in PSSR, but close enough for the purposes of this conversation. You can see a lot of images; in the language of machine learning they're called tensors, but basically they're images with many bytes of data per pixel. The colored arrows are the layers of the network; they process those images, and there are quite a few of them as well. Let's zoom in on the first layer. Its input is a game image, 4K RGB; perhaps it's a quick upscale of what the game rendered. The first layer does lots of matrix math and then outputs another 4K image, now with substantially more information per pixel, maybe 16 bytes, describing edges and the like that the neural network found in that input game image. The second layer then picks up the output of the first layer and does a lot more matrix math; the resulting image might reflect some deeper understanding of what the game rendered. There are also layers that reduce the size of the images; downsizing to 540p or even 270p lets the neural network efficiently analyze larger-scale structures in the input game image. As you can tell, there's a phenomenal amount of math going on here.
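For a rough sense of scale, the per-pixel cost of a single convolution layer can be counted directly. The 3x3 kernel and the 16-channel width are illustrative values consistent with the "16 bytes per pixel" mentioned above, not actual PSSR parameters:

```python
# Operation count per output pixel for one 3x3 convolution layer.
# Channel counts are illustrative, not the real PSSR network's.
kernel = 3 * 3   # 3x3 spatial footprint
c_in   = 16      # input channels (bytes per pixel)
c_out  = 16      # output channels

# one multiply + one add per weight
ops_per_pixel = kernel * c_in * c_out * 2
print(ops_per_pixel)   # 4608 ops for this single layer
```

A stack of a few such layers, some running at reduced resolution and so costing less per input pixel, lands comfortably in the 10,000-plus operations-per-pixel range quoted next.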
Something like 10,000 operations are easily being performed on every input pixel, and we need to do that math in something like a millisecond. Consequently, ML hardware needs very high performance, typically hundreds of trillions of operations per second. Note that we don't call those teraflops, because they're integer operations; instead we say TOPS.

There were a number of early decisions we had to make. The very first was deciding where in the hardware we would put whatever would do all of that matrix math. Generally speaking, there are two options. One option is to put ML capabilities into the GPU; of course, that's additional logic, so the GPU gets larger. Or one can add an NPU, a neural processing unit. NPUs are brilliant at executing neural networks, but perhaps not so good with the pre-processing and post-processing surrounding the CNN. The deciding factor was the order of graphics processing within a frame. With this approach, most rendering in a frame is done at low resolution, maybe 1080p; then machine learning is used to upscale to 4K, and no more rendering can happen until that upscale finishes. So we need to process that neural network as quickly as possible, which is to say we need very powerful ML hardware, and the more powerful the better. That need for power is what pushed us towards using an enhanced GPU; the choice was either adding a large NPU or making more moderate enhancements to the GPU, and we chose the latter.

The next big decision was where all of this technology was going to come from. When we were starting the PS5 Pro project in 2020, we knew that we would need performant ML hardware and a high-quality neural network for super resolution. But we weren't looking for ML hardware that's generically high performance; we needed something optimal for our specific kinds of workloads, and our typical workload is a lightweight CNN, something that can run in a millisecond or so and has a lot of little layers. Broadly speaking, you can license tech, or purchase tech, or build tech, but once you're licensing technology, that's what you're doing forever. So in 2020, despite the degree of effort required, we decided to build our own hardware and software technology.

I'll start with the hardware. We made a set of targeted enhancements to the RDNA shader core and the surrounding memory systems. We're calling it custom RDNA, as it is custom hardware created to our design specifications, but within the overall RDNA architecture, and of course implemented by the RDNA experts at AMD. Our target for the peak computational capabilities was 300 8-bit TOPS, which is to say 300 trillion operations a second using bytes as input. There's a lot of thought that needs to go into the details of exactly how that math functions, but adding that amount of raw performance is not terribly hard to do. The difficulty is memory access.

PS5 Pro has 576 GB a second of bandwidth to system memory. When we compare that bandwidth with the computational capability of 300 TOPS, it's clear that it's easy to be bandwidth limited. Let me give two examples. If we do a computation where we read a byte as input and then eventually write a byte, that's two bytes on the bus, and the balance point of the system is about a thousand operations on that byte. If we're doing more than that, we have a well-designed system; the 300 TOPS is being meaningfully utilized. If we're doing less than that, though, we are bandwidth bound, and we're wasting some of that machine learning capability. And a thousand operations is a lot. Alternatively, to understand this issue, we can take a look at one of the layers of the network, say the second layer from the example I showed before. Let's imagine we have to read that input image, and that it has 16 bytes of information for every pixel; that's about 128 megabytes on the system bus. Then we do our math, say a pointwise convolution, and write the output image, which is another 128 megabytes on the system bus. We are completely bandwidth bound; we're only using something like 3% of our potential 300 TOPS, so we're throwing out 97% of our performance. And those reads and writes are going to take half a millisecond, just for this one layer; that's almost half of our budget for the entire CNN.
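Both of those examples can be checked with a few lines of arithmetic, using only the figures quoted above (the 16-in, 16-out pointwise layer is the same illustrative channel width as before):

```python
# Bandwidth balance point and the pointwise-layer example.
bw   = 576e9     # system memory bandwidth, bytes/s (from the talk)
tops = 300e12    # peak 8-bit ops/s (from the talk)

# Read a byte, write a byte: 2 bytes of traffic per byte processed.
balance = tops / bw * 2
print(f"balance point: ~{balance:.0f} ops")     # ~1042, "about a thousand"

# One 4K layer, 16 bytes per pixel, read + write:
pixels  = 3840 * 2160
traffic = pixels * 16 * 2                       # bytes on the bus
t_mem   = traffic / bw
print(f"memory time: {t_mem * 1e3:.2f} ms")     # ~0.46 ms, half the budget

# A pointwise (1x1) convolution, 16 channels in and 16 out:
ops    = pixels * 16 * 16 * 2
t_math = ops / tops
print(f"utilization: {t_math / t_mem:.0%}")     # ~3% of peak
```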
One strategy for getting around these bandwidth issues is to fuse layers. The idea is to read up that input image, process the first layer, and then stick the results somewhere, maybe in fast on-chip memory, where the second layer can quickly and easily get access to them. As a result, we're reading from system memory once and writing once, but now we're processing two layers of the CNN and using something like 6% of our 300 TOPS. Still terrible, but an improvement. What we really want here is a fully fused network; that's the holy grail of neural network implementation. With a fully fused network, you read the input game image from system memory at the very start, process all of the layers of the CNN internally on chip, and then write the results back to system memory at the very end. With bandwidth that low, that 300 TOPS number is finally meaningful.

There are two problems we need to solve, though. The first relates to the amount of on-chip memory required. There are 8 million pixels in a 4K image; if each pixel needs 16 bytes, that's about 128 megabytes, which in terms of on-chip memory is a lot. Luckily, we don't need to process the whole screen at once; we can subdivide the screen and take just a piece of it at a time through the neural network. Let's call that piece a tile. Problem solved, right? The difficulty we encounter is that as we process the tile, bad data creeps in from the edges, so we have to throw out part of our results. The smaller the tile is, the higher the proportion of data that has to be discarded. There are therefore effective limits to how small we can make the tile, and correspondingly there's a certain amount of fast on-chip memory that's key if we are to achieve that goal of a fully fused network. The other problem we need to solve relates to the bandwidth of that on-chip memory; our targets are incredibly high. We'd like many, many terabytes per second, and when you think in those terms, everything seems small. For example, we could increase the size of the GPU's L2 cache and try to use that for the on-chip memory, but unfortunately the L2 bandwidth is just a few terabytes a second.

This memory problem was the starting point for our custom design, and from there it's been almost a four-year journey. I'll hit a few high points of the hardware architecture, beginning with the memory we ended up using. It turns out we do have fast on-chip RAM in the RDNA architecture, with an aggregate bandwidth of 200 terabytes per second; we just need to change our mindset. What we're doing on PS5 Pro is using the vector registers in the work group processors as that RAM. Each work group processor has four sets of registers, each 128 KB in size and with a bandwidth of over a terabyte per second. Thirty work group processors therefore give us 15 megabytes of memory at a combined bandwidth of 200 terabytes per second, which is to say several hundred times faster than system memory. Of course, the roadmap RDNA architecture and instruction set required some modifications to take better advantage of that register RAM. We ended up adding 44 new shader instructions; those instructions take a freer approach to register RAM access and also implement the math needed for the CNNs, which is primarily done in 8-bit precision. These instructions are specifically designed to operate in a takeover mode, where each WGP processes the CNN for a single screen tile.

By the way, the 300 TOPS number has been a real mystery since it leaked early this year; no one on the outside has been able to derive it from the work group processor count and the GPU frequency. The secret is that there are instructions that perform 3x3 convolutions. Those use nine multiplies and nine adds, for a total of 18 operations, and at that pretty common GPU frequency of 2.17 GHz, the performance really does work out to 300 TOPS; here is the math.
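The slide's arithmetic can be reconstructed as a back-of-envelope calculation. The 18 operations per 3x3-convolution instruction and the 2.17 GHz frequency are from the talk; the 256 of those 3x3 dot products issued per cycle per WGP is an inference that makes the numbers land on 300, not an official figure:

```python
# Back-of-envelope derivation of the 300 TOPS figure.
# conv_slices = 256 per WGP is an ASSUMPTION (e.g. 128 lanes each
# issuing a packed 2-wide op); it is not stated in the talk.
wgps          = 30
conv_slices   = 256      # assumed 3x3 dot products per cycle per WGP
ops_per_slice = 18       # 9 multiplies + 9 adds
freq          = 2.17e9   # Hz

tops = wgps * conv_slices * ops_per_slice * freq / 1e12
print(f"{tops:.0f} TOPS")   # 300
```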

The CNNs also need 16-bit math, so there are a number of instructions for that; these tend to be a bit simpler and more straightforward. We kept the chip area and the cost low for the 16-bit math simply by targeting lower 16-bit performance, because most of the processing in these CNNs can be done with 8-bit operations. As for 32-bit math, nothing in the CNNs particularly seems to need it, so we just left it as is. Our custom RDNA solution also involved a number of additional features, which I'm going to skip over so I can get to the other half of what we built: the neural network for super resolution that we created to run on top of that custom RDNA architecture.

PSSR is an original PlayStation design. The full name, of course, is PlayStation Spectral Super Resolution, and that Spectral is branding; it doesn't refer to any particular aspect of the algorithm. Just like we have Tempest for audio tech, we're using Spectral for our ML libraries for graphics. One of the project goals for PSSR is ease of adoption, so it uses essentially the same set of inputs as FSR or DLSS or XeSS. Those strategies use the pixel color of the current frame, but also depth information and motion vectors that give the flow of the pixels between the previous frame and the current frame. PSSR is not quite a drop-in replacement for the other strategies, but it's close.

Having said that, PSSR is designed for consoles, so its primary use case is a little different from the others. PC games tend to render at a fixed resolution and with a frame rate that varies based on scene complexity; gaming monitors can handle that variable frame rate. So a typical PC game scenario is: render at a fixed resolution, upscale by a fixed 2:1 ratio, display at a fixed resolution. In contrast, console games tend to have a frame rate that's fixed, because they're displaying on a 60 fps TV; what varies is the rendering resolution. If the scene is complex, then the rendering resolution is lower; if the scene is simpler, the rendering resolution is higher. Since the display resolution is usually fixed at 4K, PSSR needs to handle a continuously changing upscaling ratio, and that scenario is primarily what we design for and train for. Of course, PC games are increasingly supporting variable rendering resolution, and all of these upscaling strategies can handle both fixed and variable upscaling ratios; I'm just pointing out that the focus of the PSSR project has been a little bit different.

With those goals in mind, starting in 2021, we considered a lot of types of neural networks. They were all recurrent networks, which is to say they feed some of their results back in as inputs. For what it's worth, we looked at flat networks that just run at the display resolution; networks that run at the lower rendering resolution, with a final bump up to display resolution; autoencoders that step the resolution down and back up; and U-Nets, which do the same but with different connectivity. And that's where we ended up: PSSR is a recurrent U-Net. We also learned just how much work remains after a network is chosen. We did a lot of training, then did beta releases to select developers, and got to see all kinds of issues cropping up once PSSR was actually integrated into games, which required yet more training passes. Some of those issues were trivial: we found out that one game used a perfect blue in its sky, and PSSR had never seen perfect blue in its training; it had no idea what to do with it. Of course, some of the issues we encountered were much more complex. Looking back at the four years since we started this project, I'm so glad that we made the time-intensive decision to build our own technology. The results are good, and just as importantly, we've learned so much about how AI can improve game graphics; it can only make our future brighter.

So that was the background and details of our improvements in these three key areas on PS5 Pro: the larger GPU, the advanced ray tracing, and the AI-driven upscaling.
I'm going to restate those three somewhat, and then I'd like to take a moment to do something we very rarely do, which is talk about the future; specifically, the future potential in each of these three key areas. First, there's rasterized rendering, by which I mean the conventional rendering strategies that were all we had up through PS4 Pro or so. There's not a whole lot of growth left here; it mostly has to come from making the GPU bigger or the memory faster. Ray tracing is different: it's still early days for the technology, and I suspect we're in for several quantum leaps in performance over the next decade. Machine learning, though, has the greatest potential for growth, and that's an area we're beginning to focus on.

Some of that growth in machine learning will come from more performant and more efficient hardware architectures. The ML architecture in PS5 Pro is quite good, but we did not, in fact, achieve that holy grail of a fully fused network when running PSSR. It's close, but PSSR can't quite keep all of its intermediate data on chip, and therefore does, to some degree, bottleneck on system memory access. We see definite room for improvement in future ML hardware. In addition, a source of future growth will come from more sophisticated neural networks. When fewer, higher-quality pixels are combined with the right neural network, the result is richer graphics. One way to look at this is the supportable upscaling ratio: if we're able to create quality imagery with a 2:1 upscale, and can then improve the neural network and reach the same image quality with a 3:1 upscale, then the effective power of the GPU has roughly doubled, and that stacks on top of whatever is being done to speed up rasterized rendering or ray tracing. There's enormous potential here. We also hope to be heading towards multiple uses of these CNNs within a frame: not just super resolution, but also some of the other targets I was talking about, such as the denoising that's needed when doing optimized ray tracing.
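The effective-power claim in that upscaling-ratio argument can be checked with simple pixel counting, assuming the ratios are per axis, as they conventionally are for these upscalers:

```python
# Effective GPU power from improving the supportable upscaling ratio.
# Upscaling by N:1 per axis means rendering 1/N^2 of the output pixels.
output = 3840 * 2160             # 4K display

rendered_2to1 = output / 2 ** 2  # ~2.07M pixels rendered
rendered_3to1 = output / 3 ** 2  # ~0.92M pixels rendered

gain = rendered_2to1 / rendered_3to1
print(f"{gain:.2f}x fewer pixels to render")  # 2.25x, "roughly doubled"
```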
Through PS5 Pro, we've developed a good understanding of hardware design for machine learning, as well as neural network design, and we intend to continue this work with a pinpoint focus on games. As part of their broader strategy, AMD is pursuing many of the same goals, and so I have some very exciting news to share: we have begun a deeper collaboration with AMD. For the project name, we're taking a hint from AMD's red and PlayStation's blue; the code name is Amethyst. With Amethyst, we've started on another long journey, and we are combining our expertise with two goals in mind.

The first goal is a more ideal architecture for machine learning: something capable of generalized processing of neural networks, but particularly good at the lightweight CNNs needed for game graphics, and focused on achieving that holy grail of fully fused networks. In going after this, we're combining the lessons AMD has learned from its multi-generation RDNA roadmap and the lessons SIE has learned from the custom work in PS5 Pro. But ML use in games shouldn't, and can't, be restricted to graphics libraries; we're also working towards a democratization of machine learning, something accessible that allows direct work in AI and ML by game developers, both for graphics and for gameplay. Amethyst is not about proprietary technology for PlayStation; in fact, it's the exact opposite. Through this technology collaboration, we're looking to support broad work in machine learning across a variety of devices. The other goal is to develop, in parallel, a set of high-quality CNNs for game graphics. Both SIE and AMD will independently have the ability to draw from this collection of network architectures and training strategies, and these components should be key in increasing the richness of game graphics, as well as enabling more extensive use of ray tracing and path tracing. We're looking forward to keeping you posted throughout what we anticipate to be a multi-year collaboration.

Let me get back to PS5 Pro for one final moment. You've now heard a bit about our fairly intense last few years building this console and developing PSSR; there's been so much learning for us as we delved into these new technologies. But the payoff, as I said in my PS5 tech video a few years back, is in the games, and by now we know to expect the unexpected. It's an absolute guarantee that the development community will grab a hold of this technology and move in a direction that we never could have anticipated. Personally, I can't wait to see what they do with this. Thank you for your time today.

2024-12-21 00:44
